diff --git a/build.md b/build.md index 838f65e729..179601f129 100644 --- a/build.md +++ b/build.md @@ -13,7 +13,7 @@ already done and prepared to allow you to easily build it yourself. There are also friendly people and organizations who put together binary packages of curl and libcurl and make them available for download. The -different options will be explored below. +different options are explored below. ## The latest version? diff --git a/build/autotools.md b/build/autotools.md index 74abacffaf..5007daa5d9 100644 --- a/build/autotools.md +++ b/build/autotools.md @@ -28,8 +28,8 @@ with the previously used options adjusted. After configure has completed, you invoke `make` to build the entire thing and then finally `make install` to install curl, libcurl and associated things. `make install` requires that you have the correct rights in your -system to create and write files in the installation directory or you will get -some errors. +system to create and write files in the installation directory or you get +errors. ## Cross-compiling @@ -47,8 +47,8 @@ result then can be moved over and used on the other machine. ## Static linking -By default, configure will setup the build files so that the following 'make' -command will create both shared and static versions of libcurl. You can change +By default, configure sets up the build files so that the following 'make' +command creates both shared and static versions of libcurl. You can change that with the `--disable-static` or `--disable-shared` options to configure. If you instead want to build with static versions of third party libraries @@ -87,13 +87,12 @@ automatically check for OpenSSL, but modern versions do not. - Secure Transport: `--with-secure-transport` - wolfSSL: `--with-wolfssl` -If you do not specify which TLS library to use, the configure script will -fail. If you want to build *without* TLS support, you must explicitly ask for -that with `--without-ssl`. 
+If you do not specify which TLS library to use, the configure script fails. If +you want to build *without* TLS support, you must explicitly ask for that with +`--without-ssl`. These `--with-*` options also allow you to provide the install prefix so that -configure will search for the specific library where you tell it to. Like -this: +configure searches for the specific library where you tell it to. Like this: ./configure --with-gnutls=/home/user/custom-gnutls @@ -116,8 +115,7 @@ correct command-line options. - wolfSSH: `--with-wolfssh` These `--with-*` options also allow you to provide the install prefix so that -configure will search for the specific library where you tell it to. Like -this: +configure searches for the specific library where you tell it to. Like this: ./configure --with-libssh2=/home/user/custom-libssh2 diff --git a/build/deps.md b/build/deps.md index 208c5353c2..1f83f2d530 100644 --- a/build/deps.md +++ b/build/deps.md @@ -7,7 +7,7 @@ code. A whole slew of features that curl provides require that it is built to use one or more external libraries. They are then dependencies of curl. None of -them are *required*, but most users will want to use at least some of them. +them are *required*, but most users want to use at least some of them. ## HTTP Compression @@ -19,18 +19,18 @@ these libraries: - brotli compression with [brotli](https://github.com/google/brotli) - zstd compression with [libzstd](https://github.com/facebook/zstd) -Getting compressed data over the wire will use less bandwidth, which might -also result in shorter transfer times. +Getting compressed data over the wire uses less bandwidth, which might also +result in shorter transfer times. ## c-ares https://c-ares.org/ curl can be built with c-ares to be able to do asynchronous name resolution. 
-Another option to enable asynchronous name resolution is to build curl with the -threaded name resolver backend, which will then instead create a separate -helper thread for each name resolve. c-ares does it all within the -same thread. +Another option to enable asynchronous name resolution is to build curl with +the threaded name resolver backend, which then instead creates a separate +helper thread for each name resolve. c-ares does it all within the same +thread. ## nghttp2 @@ -57,8 +57,8 @@ librtmp library that comes from the RTMPDump project. https://rockdaboot.github.io/libpsl/ -When you build curl with support for libpsl, the cookie parser will know about -the Public Suffix List and thus handle such cookies appropriately. +When you build curl with support for libpsl, the cookie parser knows about the +Public Suffix List and thus handles such cookies appropriately. ## libidn2 diff --git a/build/separate.md b/build/separate.md index efa3509efd..0ee644b070 100644 --- a/build/separate.md +++ b/build/separate.md @@ -16,11 +16,11 @@ tends to confuse users. ## Static linking You can avoid the problem of curl finding an older dynamic libcurl library by -instead linking with libcurl statically. This will however instead trigger a -slew of other challenges because linking modern libraries with several third -party dependencies statically is hard work. When you link statically, you need -to make sure you provide all the dependencies to the linker. This is not a -method we recommend. +instead linking with libcurl statically. This however triggers a slew +of other challenges because linking modern libraries with several third party +dependencies statically is hard work. When you link statically, you need to +make sure you provide all the dependencies to the linker. This is not a method +we recommend. 
## Dynamic linking @@ -74,7 +74,7 @@ application, you can make that load your custom libcurl build like this: With `rpath` set, the executable linked against `$HOME/install/lib/libcurl.so` then makes the runtime linker use that specific path and library, while other -binaries in your system will continue to use the system libcurl. +binaries in your system continue to use the system libcurl. When you want to make your custom build of `curl` use its own libcurl and you install them into `$HOME/install`, then a configure command line for this diff --git a/build/tls.md b/build/tls.md index fd579954a9..3b5f078c6e 100644 --- a/build/tls.md +++ b/build/tls.md @@ -25,32 +25,30 @@ machine. ## configure -Below, you will learn how to tell configure to use the different libraries. -The configure script will not select any TLS library by default. You must -select one, or instruct configure that you want to build without TLS support -using `--without-ssl`. +Below, you learn how to tell configure to use the different libraries. The +configure script does not select any TLS library by default. You must select +one, or instruct configure that you want to build without TLS support using +`--without-ssl`. ### OpenSSL, BoringSSL, libressl ./configure --with-openssl -configure will detect OpenSSL in its default path by default. You can -optionally point configure to a custom install path prefix where it can find -OpenSSL: +configure detects OpenSSL in its default path by default. You can optionally +point configure to a custom install path prefix where it can find OpenSSL: ./configure --with-openssl=/home/user/installed/openssl -The alternatives [BoringSSL](boringssl.md) and libressl look similar -enough that configure will detect them the same way as OpenSSL. It then uses -additional measures to figure out which of the particular flavors it is using. +The alternatives [BoringSSL](boringssl.md) and libressl look similar enough +that configure detects them the same way as OpenSSL. 
It then uses additional +measures to figure out which of the particular flavors it is using. ### GnuTLS ./configure --with-gnutls -configure will detect GnuTLS in its default path by default. You can -optionally point configure to a custom install path prefix where it can find -gnutls: +configure detects GnuTLS in its default path by default. You can optionally +point configure to a custom install path prefix where it can find gnutls: ./configure --with-gnutls=/home/user/installed/gnutls @@ -58,9 +56,8 @@ gnutls: ./configure --with-wolfssl -configure will detect WolfSSL in its default path by default. You can -optionally point configure to a custom install path prefix where it can find -WolfSSL: +configure detects WolfSSL in its default path by default. You can optionally +point configure to a custom install path prefix where it can find WolfSSL: ./configure --with-wolfssl=/home/user/installed/wolfssl @@ -68,9 +65,8 @@ WolfSSL: ./configure --with-mbedtls -configure will detect mbedTLS in its default path by default. You can -optionally point configure to a custom install path prefix where it can find -mbedTLS: +configure detects mbedTLS in its default path by default. You can optionally +point configure to a custom install path prefix where it can find mbedTLS: ./configure --with-mbedtls=/home/user/installed/mbedtls @@ -78,7 +74,7 @@ mbedTLS: ./configure --with-secure-transport -configure will detect Secure Transport in its default path by default. You can +configure detects Secure Transport in its default path by default. You can optionally point configure to a custom install path prefix where it can find Secure Transport: @@ -88,7 +84,7 @@ Secure Transport: ./configure --with-schannel -configure will detect Schannel in its default path by default. +configure detects Schannel in its default path by default. 
(WinSSL was previously an alternative name for Schannel, and earlier curl versions instead needed `--with-winssl`) @@ -97,9 +93,8 @@ versions instead needed `--with-winssl`) ./configure --with-bearssl -configure will detect BearSSL in its default path by default. You can -optionally point configure to a custom install path prefix where it can find -BearSSL: +configure detects BearSSL in its default path by default. You can optionally +point configure to a custom install path prefix where it can find BearSSL: ./configure --with-bearssl=/home/user/installed/bearssl @@ -108,7 +103,7 @@ BearSSL: ./configure --with-rustls When told to use "rustls", curl is actually trying to find and use the -rustls-ffi library - the C API for the rustls library. configure will detect +rustls-ffi library - the C API for the rustls library. configure detects rustls-ffi in its default path by default. You can optionally point configure to a custom install path prefix where it can find rustls-ffi: diff --git a/build/windows.md b/build/windows.md index a937fb8775..7dce54a2ba 100644 --- a/build/windows.md +++ b/build/windows.md @@ -29,13 +29,13 @@ build curl with. Project files are provided for several different Visual C++ versions. -To build with VC++, you will of course have to first install VC++ which is -part of Visual Studio. +To build with VC++, you need to first install VC++ which is part of Visual +Studio. Once you have VC++ installed you should launch the application and open one of the solution or workspace files. The VC directory names are based on the -version of Visual C++ that you will be using. Each version of Visual Studio -has a default version of Visual C++. We offer these versions: +version of Visual C++ that you use. Each version of Visual Studio has a +default version of Visual C++. We offer these versions: - VC10 (Visual Studio 2010 Version 10.0) - VC11 (Visual Studio 2012 Version 11.0) @@ -56,7 +56,7 @@ use `VC14.30\curl-all.sln` to build curl and libcurl. 
If you are a developer and plan to run the curl tool from Visual Studio (eg you are debugging) with any third-party libraries (such as OpenSSL, wolfSSL or -libSSH2) then you will need to add the search path of these DLLs to the +libSSH2) then you need to add the search path of these DLLs to the configuration's PATH environment. To do that: 1. Open the 'curl-all.sln' or 'curl.sln' solutions @@ -88,8 +88,8 @@ DLL Debug - DLL wolfSSL (x64): C:\Windows;C:\Windows\System32\Wbem If you are using a configuration that uses multiple third-party library DLLs -(such as `DLL Debug - DLL OpenSSL - DLL LibSSH2`) then `Path to DLL` will need -to contain the path to both of these. +(such as `DLL Debug - DLL OpenSSL - DLL LibSSH2`) then `Path to DLL` needs to +contain the path to both of these. ## Notes @@ -102,12 +102,12 @@ The following keywords have been used in the directory hierarchy: Release - LIB OpenSSL) If you are using the source code from the git repository, rather than a -release archive or nightly build, you will need to generate the project +release archive or nightly build, you need to generate the project files. Please run "generate -help" for usage details. Should you wish to help out with some of the items on the TODO list, or find bugs in the project files that need correcting, and would like to submit updated files back then please note that, whilst the solution files can be edited directly, the templates for the project files (which are stored in the -git repository) will need to be modified rather than the generated project -files that Visual Studio uses. +git repository) need to be modified rather than the generated project files +that Visual Studio uses. diff --git a/cmdline.md b/cmdline.md index 21e1a3bf73..ffd871f8e9 100644 --- a/cmdline.md +++ b/cmdline.md @@ -6,9 +6,9 @@ prompts and from within scripts by countless users over the years. ## Garbage in gives garbage out curl has little will of its own. 
It tries to please you and your wishes to a -large extent. It also means that it will try to play with what you give it. If +large extent. It also means that it tries to play with what you give it. If you misspell an option, it might do something unintended. If you pass in a -slightly illegal URL, chances are curl will still deal with it and proceed. It +slightly illegal URL, chances are curl still deals with it and proceeds. It means that you can pass in crazy data in some options and you can have curl pass on that crazy data in its transfer operation. diff --git a/cmdline/configfile.md b/cmdline/configfile.md index d8fc18a9e6..3a81bf1e99 100644 --- a/cmdline/configfile.md +++ b/cmdline/configfile.md @@ -101,7 +101,8 @@ You'll need to use double quotes when: * the parameter contains white space, or starts with the characters `:` or `=`. * you need to use escape sequences (available options: `\\`, `\"`, `\t`, `\n`, `\r` and `\v`. A backslash preceding any other letter is ignored). -If a parameter containing white space is not enclosed in double quotes, curl will treat the next space or newline as the end of the argument. +If a parameter containing white space is not enclosed in double quotes, curl +considers the next space or newline as the end of the argument. ## Default config file @@ -129,4 +130,4 @@ it checks for one in the same dir the curl executable is placed. On Windows two filenames are checked per location: `.curlrc` and `_curlrc`, preferring the former. Ancient curl versions on Windows checked for `_curlrc` -only. \ No newline at end of file +only. diff --git a/cmdline/differences.md b/cmdline/differences.md index 909f16553c..7d8980da2a 100644 --- a/cmdline/differences.md +++ b/cmdline/differences.md @@ -33,8 +33,7 @@ precedence when a command line is executed. In order to use curl properly with PowerShell, you need to type in its full name including the extension: "curl.exe". 
-Different command-line environments will also have different maximum command -line lengths and force the users to limit how large amount of data that can be -put into a single line. curl adapts to this by offering a way to provide -command-line options through a file or stdin using the -[-K option](configfile.md). +Different command-line environments have different maximum command line +lengths and force the users to limit how much data can be put into a single +line. curl adapts to this by offering a way to provide command-line options +through a file or stdin using the [-K option](configfile.md). diff --git a/cmdline/globbing.md b/cmdline/globbing.md index 630df95a28..b5045ddde4 100644 --- a/cmdline/globbing.md +++ b/cmdline/globbing.md @@ -53,8 +53,8 @@ of the brackets used for the ranges: ## Combinations -You can use several globs in the same URL which then will make curl iterate -over those, too. To download the images of Ben, Alice and Frank, in both the +You can use several globs in the same URL which then makes curl iterate over +those, too. To download the images of Ben, Alice and Frank, in both the resolutions 100 x 100 and 1000 x 1000, a command line could look like: curl -O "http://example.com/{Ben,Alice,Frank}-{100x100,1000x1000}.jpg" diff --git a/cmdline/listopts.md b/cmdline/listopts.md index 7ed43e5050..44c62951f2 100644 --- a/cmdline/listopts.md +++ b/cmdline/listopts.md @@ -1,13 +1,13 @@ # List options curl has more than two hundred and fifty command-line options and the number -of options keep increasing over time. Chances are the number of options will -reach three hundred in the coming years. +of options keeps increasing over time. Chances are the number of options +reaches or even surpasses three hundred in the coming years. To find out which options you need to perform as certain action, you can get -curl to list them. 
First, `curl --help` or simply `curl -h` will get you a -list of the most important and frequently used options. You can then provide -an additional "category" to `-h` to get more options listed for that specific +curl to list them. First, `curl --help` or simply `curl -h` gets you a list of +the most important and frequently used options. You can then provide an +additional "category" to `-h` to get more options listed for that specific area. Use `curl -h category` to list all existing categories or `curl -h all` to list *all* available options. @@ -15,5 +15,5 @@ The `curl --manual` option outputs the entire man page for curl. That is a thorough and complete document on how each option works amassing several thousand lines of documentation. To wade through that is also a tedious work and we encourage use of a search function through those text masses. Some -people will appreciate the man page in its [web -version](https://curl.se/docs/manpage.html). +people might also appreciate the man page in its +[web version](https://curl.se/docs/manpage.html). diff --git a/cmdline/options.md b/cmdline/options.md index b634c1cbae..2dea0acfa9 100644 --- a/cmdline/options.md +++ b/cmdline/options.md @@ -76,7 +76,7 @@ contains one or more spaces. For example you want to set the user-agent field curl uses to be exactly `I am your father`, including those three spaces. Then you need to put quotes around the string when you pass it to curl on the command line. 
The exact quotes to use varies depending on your shell/command -prompt, but generally it will work with double quotes in most places: +prompt, but generally it works with double quotes in most places: curl -A "I am your father" http://example.com @@ -84,9 +84,9 @@ Failing to use quotes, like if you would write the command line like this: curl -A I am your father http://example.com -… will make curl only use 'I' as a user-agent string, and the following -strings, 'am', your, etc will instead all be treated as separate URLs since -they do not start with `-` to indicate that they are options and curl only ever +… makes curl only use 'I' as a user-agent string, and the following strings, +`am`, `your` and `father` are instead treated as separate URLs since they do +not start with `-` to indicate that they are options and curl only ever handles options and URLs. To make the string itself contain double quotes, which is common when you for diff --git a/cmdline/passwords.md b/cmdline/passwords.md index c01aa67812..a15b8aa917 100644 --- a/cmdline/passwords.md +++ b/cmdline/passwords.md @@ -17,14 +17,13 @@ is `12345`: Several potentially bad things are going on here. First, we are entering a password on the command line and the command line might be readable for other -users on the same system (assuming you have a multi-user system). curl -will help minimize that risk by trying to blank out passwords from process -listings. +users on the same system (assuming you have a multi-user system). curl helps +minimize that risk by trying to blank out passwords from process listings. One way to avoid passing the user name and password on the command line is to instead use a [.netrc file](../usingcurl/netrc.md) or a [config file](configfile.md). You can also use the `-u` option without specifying the password, and then -curl will instead prompt the user for it when it runs. +curl instead prompts the user for it when it runs. 
## Network leakage diff --git a/cmdline/progressmeter.md b/cmdline/progressmeter.md index 0b7650fe24..2a6e11fc54 100644 --- a/cmdline/progressmeter.md +++ b/cmdline/progressmeter.md @@ -27,8 +27,8 @@ transfer times, etc. The progress meter displays bytes and bytes per second. -It will also use suffixes for larger amounts of bytes, using the 1024 base -system so 1024 is one kilobyte (1K), 2048 is 2K, etc. curl supports these: +It also uses suffixes for larger amounts of bytes, using the 1024 base system +so 1024 is one kilobyte (1K), 2048 is 2K, etc. curl supports these: | Suffix | Amount | Name | |---------|---------|-----------| diff --git a/cmdline/urls/options.md b/cmdline/urls/options.md index b178667f16..02ef120f2c 100644 --- a/cmdline/urls/options.md +++ b/cmdline/urls/options.md @@ -5,7 +5,7 @@ supports an unlimited number of URLs. If your shell or command-line system supports it, there is really no limit to how long a command line you can pass to curl. -curl will parse the entire command line first, apply the wishes from the +curl parses the entire command line first, applies the wishes from the command-line options used, and then go over the URLs one by one (in a left to right order) to perform the operations. @@ -13,7 +13,7 @@ For some options (for example `-o` or `-O` that tell curl where to store the transfer), you may want to specify one option for each URL on the command line. -curl will return an exit code for its operation on the last URL used. If you +curl returns an exit code for its operation on the last URL used. If you instead rather want curl to exit with an error on the first URL in the set that fails, use the `--fail-early` option. @@ -25,10 +25,10 @@ of them. The `-o` and `-O` options instruct curl how to save the output for you have URLs on the command line. If you have more URLs than output options on the command line, the URL content -without corresponding output instructions will then instead be sent to stdout. 
+without a corresponding output instruction then instead gets sent to stdout. -Using the `--remote-name-all` flag will automatically make curl act as if `-O` -was used for all given URLs that do not have any output option. +Using the `--remote-name-all` flag automatically makes curl act as if `-O` was +used for all given URLs that do not have any output option. ## Separate options per URL @@ -36,10 +36,10 @@ In previous sections we described how curl always parses all options in the whole command line and applies those to all the URLs that it transfers. That was a simplification: curl also offers an option (`-:`, `--next`) that -inserts a boundary between a set of options and URLs for which it will apply -the options. When the command-line parser finds a `--next` option, it applies -the following options to the next set of URLs. The `--next` option thus works -as a *divider* between a set of options and URLs. You can use as many `--next` +inserts a boundary between a set of options and URLs for which it applies the +options. When the command-line parser finds a `--next` option, it applies the +following options to the next set of URLs. The `--next` option thus works as a +*divider* between a set of options and URLs. You can use as many `--next` options as you please. As an example, we do an HTTP GET to a URL and follow redirects, we then make a diff --git a/cmdline/urls/parallel.md b/cmdline/urls/parallel.md index 9e2b37d84c..50aeaba887 100644 --- a/cmdline/urls/parallel.md +++ b/cmdline/urls/parallel.md @@ -6,9 +6,9 @@ can be slow. curl offers the `-Z` (or `--parallel`) option that instead instructs curl to attempt to do the specified transfers in a parallel fashion. When this is -enabled, curl will do a lot of transfers simultaneously instead of -serially. It will do up to 50 transfers at the same time by default and as -soon as one of them has completed, the next one will be kicked off. 
+enabled, curl performs a lot of transfers simultaneously instead of +serially. It does up to 50 transfers at the same time by default and as soon +as one of them completes, the next one is kicked off. For cases where you want to download many files from different sources and a few of them might be slow, a few fast, this can speed things up tremendously. @@ -20,7 +20,7 @@ to allow you to change that amount. Naturally, the ordinary progress meter display that shows file transfer progress for a single transfer is not that useful for parallel transfers so -when curl performs parallel transfers, it will show a different progress meter +when curl performs parallel transfers, it shows a different progress meter that displays information about all the current ongoing transfers in a single line. diff --git a/cmdline/urls/path.md b/cmdline/urls/path.md index f439797253..4683e1d908 100644 --- a/cmdline/urls/path.md +++ b/cmdline/urls/path.md @@ -6,7 +6,7 @@ when you use just the host name like in: curl https://example.com The path is sent to the specified server to identify exactly which resource -that is requested or that will be provided. +that is requested or that is provided. The exact use of the path is protocol dependent. For example, getting the file `README` from the default anonymous user from an FTP server: diff --git a/cmdline/urls/port.md b/cmdline/urls/port.md index 80cbc064b7..a7ee3e63b0 100644 --- a/cmdline/urls/port.md +++ b/cmdline/urls/port.md @@ -1,9 +1,9 @@ # Port number -Each protocol has a "default port" that curl will use for it, unless a -specified port number is given. The optional port number can be provided -within the URL after the host name part, as a colon and the port number -written in decimal. For example, asking for an HTTP document on port 8080: +Each protocol has a default port number that curl uses, unless a specified +port number is given. 
The optional port number can be provided within the URL +after the host name part, as a colon and the port number written in +decimal. For example, asking for an HTTP document on port 8080: curl http://example.com:8080/ diff --git a/cmdline/urls/query.md b/cmdline/urls/query.md index 4cf3991075..8650859ef9 100644 --- a/cmdline/urls/query.md +++ b/cmdline/urls/query.md @@ -18,25 +18,22 @@ When adding query parts, curl adds ampersand separators. The syntax is identical to that used `--data-urlencode` with one extension: the `+` prefix. See below. - - `content`: This will make curl URL encode the content and add that to the - query. Just be careful so that the content does not contain any `=` or `@` - symbols, as that will then make the syntax match one of the other cases - below! - - - `=content`: This will make curl URL encode the content and add that to the - query. The initial `=` symbol is not included in the data. - - - `name=content`: This will make curl URL encode the content part and add - that to the query. Note that the name part is expected to be URL encoded - already. - - - `@filename`: This will make curl load data from the given file (including - any newlines), URL encode that data and that to the query. - - - `name@filename`: This will make curl load data from the given file - (including any newlines), URL encode that data and add it to the query. - The name part gets an equal sign appended, resulting in - `name=urlencoded-file-content`. Note that the name is expected to be URL - encoded already. + - `content`: URL encode the content and add that to the query. Just be + careful so that the content does not contain any `=` or `@` symbols, as + that makes the syntax match one of the other cases below! + + - `=content`: URL encode the content and add that to the query. The initial + `=` symbol is not included in the data. + + - `name=content`: URL encode the content part and add that to the query. 
Note + that the name part is expected to be URL encoded already. + + - `@filename`: load data from the given file (including any newlines), URL + encode that data and add that to the query. + + - `name@filename`: load data from the given file (including any newlines), + URL encode that data and add it to the query. The name part gets an equal + sign appended, resulting in `name=urlencoded-file-content`. Note that the + name is expected to be URL encoded already. - `+content`: Add the content to the query without doing any encoding. diff --git a/cmdline/urls/scheme.md b/cmdline/urls/scheme.md index f295e281b6..6e810bc620 100644 --- a/cmdline/urls/scheme.md +++ b/cmdline/urls/scheme.md @@ -2,9 +2,8 @@ URLs start with the "scheme", which is the official name for the `http://` part. That tells which protocol the URL uses. The scheme must be a known one -that this version of curl supports or it will show an error message and -stop. Additionally, the scheme must neither start with nor contain any -whitespace. +that this version of curl supports or it shows an error message and stops. +Additionally, the scheme must neither start with nor contain any whitespace. ## The scheme separator @@ -13,11 +12,11 @@ sequence. That is a colon and two forward slashes. There exists URL formats with only one slash, but curl does not support any of them. There are two additional notes to be aware of, about the number of slashes: -curl allows some illegal syntax and tries to correct it internally; so it will -also understand and accept URLs with one or three slashes, even though they -are in fact not properly formed URLs. curl does this because the browsers -started this practice so it has lead to such URLs being used in the wild every -now and then. +curl allows some illegal syntax and tries to correct it internally; so it also +understands and accepts URLs with one or three slashes, even though they are +in fact not properly formed URLs. 
curl does this because the browsers started +this practice so it has led to such URLs being used in the wild every now and +then. `file://` URLs are written as `file:///` but the only hostnames that are okay to use are `localhost`, `127.0.0.1` or a blank @@ -27,8 +26,8 @@ hostnames that are okay to use are `localhost`, `127.0.0.1` or a blank file://127.0.0.1/path/to/file file:///path/to/file -Inserting any other host name in there will make recent versions of curl to -return an error. +Inserting any other host name in there makes recent versions of curl return an +error. Pay special attention to the third example above (`file:///path/to/file`). That is *three* slashes before the path. That is @@ -48,8 +47,8 @@ host name. That guessing is basic, as it just checks if the first part of the host name matches one of a set of protocols, and assumes you meant to use that protocol. This heuristic is based on the fact that servers traditionally used to be named like that. The protocols that are detected this way are FTP, DICT, -LDAP, IMAP, SMTP and POP3. Any other host name in a scheme-less URL will make -curl default to HTTP. +LDAP, IMAP, SMTP and POP3. Any other host name in a scheme-less URL makes curl +default to HTTP. For example, this gets a file from an FTP site: diff --git a/cmdline/variables.md b/cmdline/variables.md index d4b9051245..7eef197073 100644 --- a/cmdline/variables.md +++ b/cmdline/variables.md @@ -9,9 +9,9 @@ stdin if set to a single dash (`-`). A variable in this context is given a specific name and it holds contents. Any number of variables can be set. If you set the same variable name again, it -will be overwritten with new content. Variable names are case sensitive, can -be up to 128 characters long and may consist of the characters a-z, A-Z, 0-9 -and underscore. +gets overwritten with new content. Variable names are case sensitive, can be +up to 128 characters long and may consist of the characters a-z, A-Z, 0-9 and +underscore. 
Some examples below contain multiple lines for readability. The backslash (`\`) is used to instruct the terminal to ignore the newline.
 
@@ -51,7 +51,8 @@ Variables can be expanded in option parameters using `{{varName}}` when the
 option name is prefixed with `--expand-`. This makes the content of the variable `varName` get inserted.
 
-If you reference a name that does not exist as a variable, a blank string will be inserted.
+If you reference a name that does not exist as a variable, a blank string is
+inserted.
 
 Insert `{{` verbatim in the string by escaping it with a backslash:
 
@@ -65,11 +66,11 @@ In the example below, the variable `host` is set and then expanded:
     --variable host=example \
     --expand-url "https://{{host}}.com" \
 
-For options specified without the `--expand-` prefix, variables will not be
+For options specified without the `--expand-` prefix, variables are not
 expanded.
 
-Variable content holding null bytes that are not encoded when expanded will
-cause curl to exit with an error.
+Variable content holding null bytes that are not encoded when expanded causes
+curl to exit with an error.
 
 ## Environment variables
 
@@ -80,7 +81,9 @@ using `=content` or `@file` as described above.
 
 ### Example 1: No default value set
 
-Assign the `%USER` environment variable to a curl variable and insert it into a URL. Because no default value is specified, this operation will fail if the environment variable does not exist:
+Assign the `%USER` environment variable to a curl variable and insert it into
+a URL. 
Because no default value is specified, this operation fails if the
+environment variable does not exist:
 
     curl \
     --variable %USER \
 
@@ -179,4 +182,4 @@ Example: get the contents of a file called `$HOME/.secret` into a variable calle
     --variable %HOME=/home/default \
     --expand-variable fix@{{HOME}}/.secret \
     --expand-data "{{fix:trim:url}}" \
-    --url https://example.com/ \
\ No newline at end of file
+    --url https://example.com/ \
diff --git a/ftp.md b/ftp.md
index 58cc1794d9..ae123ddc11 100644
--- a/ftp.md
+++ b/ftp.md
@@ -5,33 +5,33 @@ curl supports—it was created in the early 1970s. The official spec that still
 is the go-to documentation is [RFC 959](https://www.ietf.org/rfc/rfc959.txt), from 1985, published well over a decade before the first curl release.
 
-FTP was created in a different era of the Internet and computers and as such it
-works a little bit differently than most other protocols. These differences
-can often be ignored and things will just work, but they are also
-important to know at times when things do not run as planned.
+FTP was created in a different era of the Internet and computers and as such
+it works a little bit differently than most other protocols. These differences
+can often be ignored and things just work, but they are also important to
+know at times when things do not run as planned.
 
 ## Ping-pong
 
 The FTP protocol is a command and response protocol; the client sends a
-command and the server responds. If you use curl's `-v` option you will get to
-see all the commands and responses during a transfer.
+command and the server responds. If you use curl's `-v` option you get to see
+all the commands and responses during a transfer.
 
 For an ordinary transfer, there are something like 5 to 8 commands necessary to send and as many responses to wait for and read. 
Perhaps needlessly to say,
-if the server is in a remote location there will be a lot of time waiting for
-the ping pong to go through before the actual file transfer can be set up and
-get started. For small files, the initial commands can take longer time than
-the actual data transfer.
+if the server is in a remote location there is a lot of time waiting for the
+ping pong to go through before the actual file transfer can be set up and get
+started. For small files, the initial commands can take a longer time than the
+actual data transfer.
 
 ## Transfer mode
 
 When an FTP client is about to transfer data, it specifies to the server which "transfer mode" it would like the upcoming transfer to use. The two transfer modes curl supports are 'ASCII' and 'BINARY'. Ascii is for text and usually
-means that the server will send the files with converted newlines while binary
+means that the server sends the files with converted newlines while binary
 means sending the data unaltered and assuming the file is not text.
 
-curl will default to binary transfer mode for FTP, and you ask for ascii mode
+curl defaults to binary transfer mode for FTP, and you ask for ascii mode
 instead with `-B, --use-ascii` or by making sure the URL ends with `;type=A`.
 
 ## Authentication
 
diff --git a/ftp/cmds.md b/ftp/cmds.md
index 788d2517a7..cd23b4f865 100644
--- a/ftp/cmds.md
+++ b/ftp/cmds.md
@@ -41,8 +41,8 @@ You can in fact send commands in all three different times by using multiple
 position by using more `-Q` options.
 
 By default, if any of the given commands returns an error from the server,
-curl will stop its operations, abort the transfer (if it happens before
-transfer has started) and not send any more of the custom commands.
+curl stops its operations, aborts the transfer (if it happens before transfer
+has started) and does not send any more of the custom commands.
 
Example, rename a file then do a transfer:
 
diff --git a/ftp/dirlist.md b/ftp/dirlist.md
index 4e0cedf83e..043cb55dd8 100644
--- a/ftp/dirlist.md
+++ b/ftp/dirlist.md
@@ -1,16 +1,16 @@
 # FTP Directory listing
 
 You can list a remote FTP directory with curl by making sure the URL ends with
-a trailing slash. If the URL ends with a slash, curl will presume that it is a
-directory you want to list. If it is not actually a directory, you will most
-likely instead get an error.
+a trailing slash. If the URL ends with a slash, curl presumes that it is a
+directory you want to list. If it is not actually a directory, you are likely
+to instead get an error.
 
     curl ftp://ftp.example.com/directory/
 
 With FTP there is no standard syntax for the directory output that is returned for this sort of command that uses the standard FTP command `LIST`. The
-listing is usually humanly readable and perfectly understandable but you will
-see that different servers will return the listing in slightly different ways.
+listing is usually humanly readable and perfectly understandable but different
+servers can return the listing using slightly different layouts.
 
 One way to get just a listing of all the names in a directory and thus to avoid the special formatting of the regular directory listings is to tell curl to
 
diff --git a/ftp/ftps.md b/ftp/ftps.md
index 89e149cdd3..9d9eed540f 100644
--- a/ftp/ftps.md
+++ b/ftp/ftps.md
@@ -11,8 +11,8 @@ with dictates which of these methods you can and shall use against it.
 
 The *implicit* way is when you use `ftps://` in your URL. This makes curl connect to the host and do a TLS handshake immediately, without doing anything
-in the clear. If no port number is specified in the URL, curl will use port
-990 for this. This is usually not how FTPS is done.
+in the clear. If no port number is specified in the URL, curl uses port 990
+for this. This is usually not how FTPS is done.
 
## Explicit FTPS
 
diff --git a/ftp/upload.md b/ftp/upload.md
index 77894ff2ed..eb58c72638 100644
--- a/ftp/upload.md
+++ b/ftp/upload.md
@@ -3,8 +3,8 @@
 To upload to an FTP server, you specify the entire target file path and name in the URL, and you specify the local filename to upload with `-T, --upload-file`. Optionally, you end the target URL with a slash and then the
-file component from the local path will be appended by curl and used as the
-remote filename.
+file component from the local path is appended by curl and used as the remote
+filename.
 
 Like:
 
diff --git a/get.md b/get.md
index 202c16a82c..68381434f2 100644
--- a/get.md
+++ b/get.md
@@ -1,8 +1,8 @@
 # Install curl
 
 curl is totally free, open and available. There are numerous ways to get it
-and install it for most operating systems and architecture. This section will
-give you some answers to start with, but will not be a complete reference.
+and install it for most operating systems and architecture. This section gives
+you some answers to start with, but is not a complete reference.
 
 Some operating systems ship curl by default. Some do not.
 
diff --git a/get/win-msys2.md b/get/win-msys2.md
index 3f704cc78e..bd0083163f 100644
--- a/get/win-msys2.md
+++ b/get/win-msys2.md
@@ -54,7 +54,10 @@ This directory contains the PKGBUILD file and patches that are used for building
 makepkg-mingw --syncdeps --skippgpcheck
 ```
 
-That is it. The `--syncdeps` parameter will automatically check and prompt to install dependencies of `mingw-w64-curl` if these are not yet installed. Once the process is complete you will have 3 new files in the current directory, for example:
+That is it. The `--syncdeps` parameter automatically checks and prompts to
+install dependencies of `mingw-w64-curl` if these are not yet installed. 
Once
+the process is complete you have 3 new files in the current directory, for
+example:
 
 * `pacman -U mingw-w64-x86_64-curl-7.80.0-1-any.pkg.tar.zst`
 * `pacman -U mingw-w64-x86_64-curl-gnutls-7.80.0-1-any.pkg.tar.zst`
 
diff --git a/http.md b/http.md
index 9a8d0b352f..878ab28514 100644
--- a/http.md
+++ b/http.md
@@ -2,10 +2,9 @@
 In all user surveys and during all curl's lifetime, HTTP has been the most important and most frequently used protocol that curl supports. This chapter
-will explain how to do effective HTTP transfers and general fiddling with
-curl.
+explains how to do effective HTTP transfers and general fiddling with curl.
 
-This will mostly work the same way for HTTPS, as they are really the same thing
+This mostly works the same way for HTTPS, as they are really the same thing
 under the hood, as HTTPS is HTTP with an extra security TLS layer. See also the specific [HTTPS](#https) section below.
 
diff --git a/http/altsvc.md b/http/altsvc.md
index c56ce448ba..1d9d8381d3 100644
--- a/http/altsvc.md
+++ b/http/altsvc.md
@@ -15,9 +15,9 @@ alt-svc cache file like this:
 
     curl --alt-svc altcache.txt https://example.com/
 
-then curl will load existing alternative service entries from the file at
-start-up and consider those when doing HTTP requests, and if the servers sends
-new or updated `Alt-Svc:` headers, curl will store those in the cache at exit.
+then curl loads existing alternative service entries from the file at start-up
+and considers those when doing HTTP requests, and if the server sends new or
+updated `Alt-Svc:` headers, curl stores those in the cache at exit.
 
 ## The alt-svc cache
 
diff --git a/http/auth.md b/http/auth.md
index 3e4fb19895..b85e32f1d3 100644
--- a/http/auth.md
+++ b/http/auth.md
@@ -15,25 +15,25 @@ an associated `Proxy-Authenticate:` header that lists all the authentication
 methods that the proxy supports. 
It might be worth to note that most websites of today do not require HTTP
-authentication for login etc, but they will instead ask users to login on web
-pages and then the browser will issue a POST with the user and password etc,
-and then subsequently maintain cookies for the session.
+authentication for login etc, but they instead ask users to login on web pages
+and then the browser issues a POST with the user and password etc, and then
+subsequently maintains cookies for the session.
 
 To tell curl to do an authenticated HTTP request, you use the `-u, --user` option to provide user name and password (separated with a colon). Like this:
 
     curl --user daniel:secret http://example.com/
 
-This will make curl use the default "Basic" HTTP authentication method. Yes,
-it is actually called Basic and it is truly basic. To explicitly ask for the
-basic method, use `--basic`.
+This makes curl use the default "Basic" HTTP authentication method. Yes, it is
+actually called Basic and it is truly basic. To explicitly ask for the basic
+method, use `--basic`.
 
 The Basic authentication method sends the user name and password in clear text over the network (base64 encoded) and should be avoided for HTTP transport.
 
 When asking to do an HTTP transfer using a single (specified or implied),
-authentication method, curl will insert the authentication header already in
-the first request on the wire.
+authentication method, curl inserts the authentication header already in the
+first request on the wire.
 
 If you would rather have curl first *test* if the authentication is really required, you can ask curl to figure that out and then automatically use the
 
diff --git a/http/basics.md b/http/basics.md
index 4db3e1d5d4..160fadcfaf 100644
--- a/http/basics.md
+++ b/http/basics.md
@@ -52,11 +52,11 @@ request to the server. 
Let's take an example URL:
 
     https://www.example.com/path/to/file
 
- - **https** means that curl will use TLS to the remote port 443 (which is the
+ - **https** means that curl uses TLS to the remote port 443 (which is the
   default port number when no specified is used in the URL).
 
- - **www.example.com** is the host name that curl will resolve to one or more IP
-   address to connect to. This host name will also be used in the HTTP request in
+ - **www.example.com** is the host name that curl resolves to one or more IP
+   address to connect to. This host name is also used in the HTTP request in
   the `Host:` header.
 
 - **/path/to/file** is used in the HTTP request to tell the server which exact
 
diff --git a/http/browserlike.md b/http/browserlike.md
index 25b7172bb1..3ec41dd663 100644
--- a/http/browserlike.md
+++ b/http/browserlike.md
@@ -2,7 +2,7 @@
 curl can do almost every HTTP operation and transfer your favorite browser can. It can actually do a lot more than so as well, but in this chapter we
-will focus on the fact that you can use curl to reproduce, or script, what you
+focus on the fact that you can use curl to reproduce, or script, what you
 would otherwise have to do manually with a browser. Here are some tricks and advice on how to proceed when doing this.
 
@@ -60,7 +60,7 @@ In our imaginary case, the form tag looks like this:
 
 There are three fields of importance. **text**, **secret** and **id**. The
-last one, the id, is marked `hidden` which means that it will not show up in
+last one, the id, is marked `hidden` which means that it does not show up in
 the browser and it is not a field that a user fills in. It is generated by the site itself, and for your curl login to succeed, you need extract that value and use that in your POST submission together with the rest of the data.
 
@@ -116,11 +116,11 @@ and update the cookies. 
## Referer
 
-Some sites will check that the `Referer:` is actually identifying the
-legitimate parent URL when you request something or when you login or
-similar. You can then inform the server from which URL you arrived by using
-`-e https://example.com/` etc. Appending that to the previous login attempt
-then makes it:
+Some sites verify that the `Referer:` is actually identifying the legitimate
+parent URL when you request something or when you login or similar. You can
+then inform the server from which URL you arrived by using `-e
+https://example.com/` etc. Appending that to the previous login attempt then
+makes it:
 
     curl -d user=daniel -d secret=qwerty -d id=bc76 https://example.com/login.cgi \
       -b cookies -c cookies -L -e "https://example.com/" -o out
 
diff --git a/http/cookies.md b/http/cookies.md
index a93a0bfbcf..aee750ff33 100644
--- a/http/cookies.md
+++ b/http/cookies.md
@@ -26,17 +26,17 @@ differently makes it not acknowledge cookies by default. You need to switch on
 subsequently send them out on requests that have matching cookies.
 
 You enable the cookie engine by asking curl to read or write cookies. If you
-tell curl to read cookies from a non-existing file, you will only switch on
-the engine but start off with an empty internal cookie store:
+tell curl to read cookies from a blank named file, you only switch on the engine
+but start off with an empty internal cookie store:
 
-    curl -b non-existing http://example.com
+    curl -b "" http://example.com
 
 Just switching on the cookie engine, getting a single resource and then quitting would be pointless as curl would have no chance to actually send any cookies it received. 
Assuming the site in this example would set cookies and then do a redirect we would do:
 
-    curl -L -b non-existing http://example.com
+    curl -L -b "" http://example.com
 
 ## Reading cookies from file
 
@@ -74,17 +74,17 @@ You point out the cookie jar output with `-c`:
 
 `-c` is the instruction to *write* cookies to a file, `-b` is the instruction to *read* cookies from a file. Oftentimes you want both.
 
-When curl writes cookies to this file, it will save all known cookies
-including those that are session cookies (without a given lifetime). curl
-itself has no notion of a session and it does not know when a session ends so
-it will not flush session cookies unless you tell it to.
+When curl writes cookies to this file, it saves all known cookies including
+those that are session cookies (without a given lifetime). curl itself has no
+notion of a session and it does not know when a session ends so it does not
+flush session cookies unless you tell it to.
 
 ## New cookie session
 
 Instead of telling curl when a session ends, curl features an option that lets the user decide when a new session *begins*.
 
-A new cookie session means that all the old session cookies will be thrown
+A new cookie session means that all the old session cookies are thrown
 away. It is the equivalent of closing a browser and starting it up again. Tell curl a new cookie session starts by using `-j, --junk-session-cookies`:
 
diff --git a/http/cookies/fileformat.md b/http/cookies/fileformat.md
index 655944ac16..84dd0b2f43 100644
--- a/http/cookies/fileformat.md
+++ b/http/cookies/fileformat.md
@@ -10,7 +10,7 @@ file with a bunch of associated meta data, each field separated with TAB.
 That file is called the cookiejar in curl terminology.
 
 When libcurl saves a cookiejar, it creates a file header of its own in which
-there is a URL mention that will link to the web version of this document.
+there is a URL mention that links to the web version of this document.
 
## File format
 
diff --git a/http/hsts.md b/http/hsts.md
index f452d91df7..c1a0355ae8 100644
--- a/http/hsts.md
+++ b/http/hsts.md
@@ -17,13 +17,13 @@ Invoke curl and tell it which file to use as a hsts cache:
 
     curl --hsts hsts.txt https://example.com
 
-curl will only update the hsts info if the header is read over a secure
-transfer, so not when done over a clear text protocol.
+curl only updates the hsts info if the header is read over a secure transfer,
+so not when done over a clear text protocol.
 
 ## Use HSTS to update insecure protocols
 
-If the cache file now contains an entry for the given host name, it will
-automatically switch over to a secure protocol even if you try to connect to
+If the cache file now contains an entry for the given host name, it
+automatically switches over to a secure protocol even if you try to connect to
 it with an insecure one:
 
     curl --hsts hsts.txt http://example.com
 
diff --git a/http/modify/fragment.md b/http/modify/fragment.md
index fc57057c28..2592a542db 100644
--- a/http/modify/fragment.md
+++ b/http/modify/fragment.md
@@ -4,5 +4,5 @@
 A URL may contain an "anchor", also known as a fragment, which is written with a pound sign and string at the end of the URL. Like for example `http://example.com/foo.html#here-it-is`. That fragment part, everything from the pound/hash sign to the end of the URL, is only intended for local use and
-will not be sent over the network. curl will simply strip that data off and
-discard it.
+is not sent over the network. curl simply strips that data off and discards
+it.
 
diff --git a/http/modify/headers.md b/http/modify/headers.md
index 6ffbba50c6..3b6083b600 100644
--- a/http/modify/headers.md
+++ b/http/modify/headers.md
@@ -1,19 +1,18 @@
 # Customize headers
 
-In an HTTP request, after the initial request-line, there will typically
-follow a number of request headers. 
That is a set of `name: value` pairs that
-ends with a blank line that separates the headers from the following request
-body (that sometimes is empty).
-
-curl will by default and on its own account pass a few headers in requests,
-like for example `Host:`, `Accept:`, `User-Agent:` and a few others that may
-depend on what the user asks curl to do.
-
-All headers set by curl itself can be overridden, replaced if you will, by the
-user. You just then tell curl's `-H` or `--header` the new header to use and
-it will then replace the internal one if the header field matches one of those
-headers, or it will add the specified header to the list of headers to send in
-the request.
+In an HTTP request, after the initial request-line, there typically follows a
+number of request headers. That is a set of `name: value` pairs that ends with
+a blank line that separates the headers from the following request body (that
+sometimes is empty).
+
+curl passes a few headers by default on its own account in
+requests, like for example `Host:`, `Accept:`, `User-Agent:` and a few others
+that may depend on what the user asks curl to do.
+
+All headers set by curl itself can be replaced by the user. You just then
+tell curl's `-H` or `--header` the new header to use and it then replaces the
+internal one if the header field matches one of those headers, or it adds the
+specified header to the list of headers to send in the request.
 
 To change the `Host:` header, do this:
 
diff --git a/http/modify/method.md b/http/modify/method.md
index a0be3b7556..4a2b3a9e08 100644
--- a/http/modify/method.md
+++ b/http/modify/method.md
@@ -21,12 +21,12 @@ does not change any behavior.
 
 This is particularly important if you, for example, ask curl to send a HEAD with `-X`, as HEAD is specified to send all the headers a GET response would get but *never* send a response body, even if the headers otherwise imply that one would come. 
So, adding `-X HEAD` to a
-command line that would otherwise do a GET will cause curl to hang, waiting
-for a response body that will not come.
-
-When asking curl to perform HTTP transfers, it will pick the correct method
-based on the option so you should only rarely have to explicitly ask for
-it with `-X`. It should also be noted that when curl follows redirects like
-asked to with `-L`, the request method set with `-X` will be sent even on the
-subsequent redirects.
+command line that would otherwise do a GET causes curl to hang, waiting for a
+response body that does not come.
+
+When asking curl to perform HTTP transfers, it picks the correct method based
+on the option so you should only rarely have to explicitly ask for it with
+`-X`. It should also be noted that when curl follows redirects like asked to
+with `-L`, the request method set with `-X` is sent even on the subsequent
+redirects.
 
diff --git a/http/modify/referer.md b/http/modify/referer.md
index c8a250ed0b..1fc7a2ed02 100644
--- a/http/modify/referer.md
+++ b/http/modify/referer.md
@@ -1,9 +1,9 @@
 # Referer
 
 When a user clicks on a link on a web page and the browser takes the user away
-to the next URL, it will send the new URL a `Referer:` header in the new
-request telling it where it came from. That is the referer header. The
-`Referer:` is misspelled but that is how it is supposed to be!
+to the next URL, it sends the new URL a `Referer:` header in the new request
+telling it where it came from. That is the referer header. The `Referer:` is
+misspelled but that is how it is supposed to be!
 
 With curl you set the referer header with `-e` or `--referer`, like this:
 
diff --git a/http/modify/target.md b/http/modify/target.md
index e078a9d52d..5c3620c960 100644
--- a/http/modify/target.md
+++ b/http/modify/target.md
@@ -3,7 +3,7 @@
 When given an input URL such as `http://example.com/file`, the path section of the URL gets extracted and is turned into `/file` in the HTTP request line. 
That item in the protocol is called the *request target* in HTTP. That is the
-resource this request will interact with. Normally this request target is
+resource this request interacts with. Normally this request target is
 extracted from the URL and then used in the request and as a user you do not need to think about it.
 
@@ -28,10 +28,10 @@ The path part of the URL is the part that starts with the first slash after
 the host name and ends either at the end of the URL or at a '?' or '#' (roughly speaking).
 
-If you include substrings including `/../` or `/./` in the path, curl will
-automatically squash them before the path is sent to the server, as is
+If you include substrings including `/../` or `/./` in the path, curl
+automatically squashes them before the path is sent to the server, as is
 dictated by standards and how such strings tend to work in local file
-systems. The `/../` sequence will remove the previous section so that
+systems. The `/../` sequence removes the previous section so that
 `/hello/sir/../` ends up just `/hello/` and `/./` is simply removed so that `/hello/./sir/` becomes `/hello/sir/`.
 
diff --git a/http/modify/user-agent.md b/http/modify/user-agent.md
index 879647cc18..f4e276bfd9 100644
--- a/http/modify/user-agent.md
+++ b/http/modify/user-agent.md
@@ -1,8 +1,8 @@
 # User-agent
 
 The User-Agent is a header that each client can set in the request to inform
-the server which user-agent it is. Sometimes servers will look at this header
-and determine how to act based on its contents.
+the server which user-agent it is. Sometimes servers look at this header and
+determine how to act based on its contents.
 
 The default header value is 'curl/[version]', as in `User-Agent: curl/7.54.1` for curl version 7.54.1. 
diff --git a/http/multipart.md b/http/multipart.md
index 9caa089456..8066d51402 100644
--- a/http/multipart.md
+++ b/http/multipart.md
@@ -17,9 +17,8 @@
 Which could look something like this in a web browser:
 
 ![a multipart form](multipart-form.png)
 
-A user can fill in text in the 'Name' field and by pressing the 'Browse'
-button a local file can be selected that will be uploaded when 'Submit' is
-pressed.
+A user can fill in text in the 'Name' field and by pressing the `Browse`
+button a local file can be selected that is uploaded when `Submit` is pressed.
 
 ## Sending such a form with curl
 
@@ -59,10 +58,10 @@ chapter.
 
 The **Content-Type** header is a bit special. It tells that this is a multipart formpost and then it sets the "boundary" string. The boundary string is a line of characters with a bunch of random digits somewhere in it, that
-serves as a separator between the different parts of the form that will be
+serves as a separator between the different parts of the form that is
+submitted. The particular boundary you see in this example has the random part
-`d74496d66958873e` but you will, of course, get something different when you
-run curl (or when you submit such a form with a browser).
+`d74496d66958873e` but you, of course, get something different when you run
+curl (or when you submit such a form with a browser).
 
 So after that initial set of headers follows the request body
 
@@ -86,10 +85,10 @@ The last boundary string has two extra dashes `--` appended to signal the end.
 
 ## Content-Type
 
-POSTing with curl's -F option will make it include a default Content-Type
+POSTing with curl's `-F` option makes it include a default `Content-Type`
 header in its request, as shown in the above example. This says
 `multipart/form-data` and then specifies the MIME boundary string. 
That
-content-type is the default for multipart formposts but you can, of course,
+`Content-Type` is the default for multipart formposts but you can, of course,
 still modify that for your own commands and if you do, curl is clever enough to still append the boundary magic to the replaced header. You cannot really alter the boundary string, since curl needs that for producing the POST
 
diff --git a/http/post/binary.md b/http/post/binary.md
index 7bfa6c83a4..db7afb4051 100644
--- a/http/post/binary.md
+++ b/http/post/binary.md
@@ -1,6 +1,6 @@
 # Posting binary
 
-When reading data to post from a file, `-d` will strip out carriage return and
+When reading data to post from a file, `-d` strips out carriage returns and
 newlines. Use `--data-binary` if you want curl to read and use the given file in binary exactly as given:
 
diff --git a/http/post/browsersends.md b/http/post/browsersends.md
index d35b4acd4b..7823105f2a 100644
--- a/http/post/browsersends.md
+++ b/http/post/browsersends.md
@@ -6,11 +6,11 @@
 it and check in the browser's network development tools exactly what it sent.
 
 A slightly different way is to save the HTML page containing the form, and then edit that HTML page to redirect the 'action=' part of the form to your own server or a test server that just outputs exactly what it gets. Completing
-that form submission will then show you exactly how a browser sends it.
+that form submission shows you exactly how a browser sends it.
 
-A third option is, of course, to use a network capture tool such as Wireshark to
-check exactly what is sent over the wire. If you are working with HTTPS, you
-cannot see form submissions in clear text on the wire but instead you need to
-make sure you can have Wireshark extract your TLS private key from your
+A third option is, of course, to use a network capture tool such as Wireshark
+to check exactly what is sent over the wire. 
If you are working with HTTPS,
+you cannot see form submissions in clear text on the wire but instead you need
+to make sure you can have Wireshark extract your TLS private key from your
 browser. See the Wireshark documentation for details on doing that.
 
diff --git a/http/post/chunked.md b/http/post/chunked.md
index c54294d482..a5886c880c 100644
--- a/http/post/chunked.md
+++ b/http/post/chunked.md
@@ -2,9 +2,9 @@
 When talking to an HTTP 1.1 server, you can tell curl to send the request body without a `Content-Length:` header upfront that specifies exactly how big the
-POST is. By insisting on curl using chunked Transfer-Encoding, curl will send
-the POST chunked piece by piece in a special style that also sends the size
-for each such chunk as it goes along.
+POST is. By insisting on curl using chunked Transfer-Encoding, curl sends the
+POST chunked piece by piece in a special style that also sends the size for
+each such chunk as it goes along.
 
 You send a chunked POST with curl like this:
 
diff --git a/http/post/content-type.md b/http/post/content-type.md
index db900da39b..561d993abd 100644
--- a/http/post/content-type.md
+++ b/http/post/content-type.md
@@ -1,8 +1,8 @@
 # Content-Type
 
-POSTing with curl's `-d` option will make it include a default header that
-looks like `Content-Type: application/x-www-form-urlencoded`. That is what
-your typical browser will use for a plain POST.
+POSTing with curl's `-d` option makes it include a default header that looks
+like `Content-Type: application/x-www-form-urlencoded`. That is what your
+typical browser uses for a plain POST.
 
 Many receivers of POST data do not care about or check the Content-Type header. 
diff --git a/http/post/expect100.md b/http/post/expect100.md
index 242872f0f6..6a7c0a7aff 100644
--- a/http/post/expect100.md
+++ b/http/post/expect100.md
@@ -11,20 +11,20 @@
 One example of when this can happen is when you send a large file over HTTP, only to discover that the server requires authentication and immediately sends back a 401 response code.
 
-The mitigation that exists to make this scenario less frequent is to have
-curl pass on an extra header, `Expect: 100-continue`, which gives the server a
+The mitigation that exists to make this scenario less frequent is to have curl
+pass on an extra header, `Expect: 100-continue`, which gives the server a
 chance to deny the request before a lot of data is sent off. curl sends this
-Expect: header by default if the POST it will do is known or suspected to be
-larger than just minuscule. curl also does this for PUT requests.
+`Expect:` header by default if the POST it does is known or suspected to be
+larger than one megabyte. curl also does this for PUT requests.
 
 When a server gets a request with an 100-continue and deems the request fine,
-it will respond with a 100 response that makes the client continue. If the
-server does not like the request, it sends back response code for the error it
-thinks it is.
+it responds with a 100 response that makes the client continue. If the server
+does not like the request, it sends back a response code for the error it
+thinks it is.
 
 Unfortunately, lots of servers in the world do not properly support the
-Expect: header or do not handle it correctly, so curl will only wait 1000
-milliseconds for that first response before it will continue anyway.
+Expect: header or do not handle it correctly, so curl only waits 1000
+milliseconds for that first response before it continues anyway.
 
 You can change the amount of time curl waits for a response to Expect by using `--expect100-timeout `. 
You can avoid the wait entirely by using @@ -32,9 +32,9 @@ You can change the amount of time curl waits for a response to Expect by using curl -H Expect: -d "payload to send" http://example.com -In some situations, curl will inhibit the use of the Expect header if the -content it is about to send is small (like below one kilobyte), as having to -waste such a small chunk of data is not considered much of a problem. +In some situations, curl inhibits the use of the `Expect` header if the +content it is about to send is small (below one megabyte), as having to waste +such a small chunk of data is not considered much of a problem. ## HTTP/2 and later diff --git a/http/post/json.md b/http/post/json.md index 675c032baf..4a50327176 100644 --- a/http/post/json.md +++ b/http/post/json.md @@ -9,7 +9,7 @@ a single option that replaces these three: --header "Accept: application/json" This option does not make curl actually understand or know about the JSON data -it sends, but it makes it easier to send it. curl will not touch or parse the +it sends, but it makes it easier to send it. curl does not touch or parse the data that it sends, so you need to make sure it is valid JSON yourself. Send a basic JSON object to a server: @@ -26,8 +26,8 @@ Send JSON passed to curl on stdin: You can use multiple `--json` options on the same command line. This makes curl concatenate the contents from the options and send all data in one go to -the server. Note that the concatenation is plain text based and will not in -any way merge the JSON object as per JSON. +the server. Note that the concatenation is plain text based and it does not +merge the JSON objects as per JSON. 
Send JSON from a file and concatenate a string to the end: diff --git a/http/post/simple.md b/http/post/simple.md index 32707ee9d8..8d597a7e0b 100644 --- a/http/post/simple.md +++ b/http/post/simple.md @@ -1,15 +1,15 @@ # Simple POST -To send form data, a browser will URL-encode it as a series of `name=value` -pairs separated by ampersand (`&`) symbols. The resulting string is sent as -the body of a POST request. To do the same with curl, use the `-d` (or -`--data`) argument, like this: +To send form data, a browser URL encodes it as a series of `name=value` pairs +separated by ampersand (`&`) symbols. The resulting string is sent as the body +of a POST request. To do the same with curl, use the `-d` (or `--data`) +argument, like this: curl -d 'name=admin&shoesize=12' http://example.com/ -When specifying multiple `-d` options on the command line, curl will -concatenate them and insert ampersands in between, so the above example could -also be written like this: +When specifying multiple `-d` options on the command line, curl concatenates +them and inserts ampersands in between, so the above example could also be +written like this: curl -d name=admin -d shoesize=12 http://example.com/ @@ -21,7 +21,7 @@ command line, you can also read it from a filename in standard curl style: While the server might assume that the data is encoded in some special way, curl does not encode or change the data you tell it to send. **curl sends exactly the bytes you give it** (except that when reading from a file. `-d` -will skip over the carriage returns and newlines so you need to use +skips over the carriage returns and newlines so you need to use `--data-binary` if you rather intend them to be included in the data.). To send a POST body that starts with a `@` symbol, to avoid that curl tries to diff --git a/http/post/url-encode.md b/http/post/url-encode.md index 10c1764c3a..6c8a92493c 100644 --- a/http/post/url-encode.md +++ b/http/post/url-encode.md @@ -18,25 +18,23 @@
To be CGI-compliant, the **data** part should begin with a name followed by a separator and a content specification. The **data** part can be passed to curl using one of the following syntaxes: - - `content`: This will make curl URL encode the content and pass that - on. Just be careful so that the content does not contain any `=` or `@` - symbols, as that will then make the syntax match one of the other cases - below! + - `content`: URL encode the content and pass that on. Just be careful so that + the content does not contain any `=` or `@` symbols, as that then makes the + syntax match one of the other cases below! - - `=content`: This will make curl URL encode the content and pass that - on. The initial `=` symbol is not included in the data. + - `=content`: URL encode the content and pass that on. The initial `=` symbol + is not included in the data. - - `name=content`: This will make curl URL encode the content part and pass - that on. Note that the name part is expected to be URL encoded already. + - `name=content`: URL encode the content part and pass that on. Note that the + name part is expected to be URL encoded already. - - `@filename`: This will make curl load data from the given file (including - any newlines), URL encode that data and pass it on in the POST. + - `@filename`: load data from the given file (including any newlines), URL + encode that data and pass it on in the POST. - - `name@filename`: This will make curl load data from the given file - (including any newlines), URL encode that data and pass it on in the POST. - The name part gets an equal sign appended, resulting in - `name=urlencoded-file-content`. Note that the name is expected to be URL - encoded already. + - `name@filename`: load data from the given file (including any newlines), + URL encode that data and pass it on in the POST. The name part gets an + equal sign appended, resulting in `name=urlencoded-file-content`. Note that + the name is expected to be URL encoded already. 
As an example, you could POST a name to have it encoded by curl: @@ -55,6 +53,6 @@ you can tell curl to send that contents URL encoded using the field name In both these examples above, the field name is not URL encoded but is passed on as-is. If you want to URL encode the field name as well, like if you want to pass on a field name called `user name`, you can ask curl to encode the -entire string by prefixing it with an equals sign (that will not get sent): +entire string by prefixing it with an equals sign (that does not get sent): curl --data-urlencode "=user name=John Doe (Junior)" http://example.com diff --git a/http/postvspost.md b/http/postvspost.md index 64871e4c5a..028adc7818 100644 --- a/http/postvspost.md +++ b/http/postvspost.md @@ -31,12 +31,12 @@ and you perform one with curl's `-d` and friends. ## POST outside of HTML -POST is a regular HTTP method and there is no requirement that it be -triggered by HTML or involve a browser. Lots of services, APIs and other systems -allow you to pass in data these days in order to get things done. +POST is a regular HTTP method and there is no requirement that it be triggered +by HTML or involve a browser. Lots of services, APIs and other systems allow +you to pass in data these days in order to get things done. If these services expect plain "raw" data or perhaps data formatted as JSON or similar, you want the [regular POST](post.md) approach. curl's `-d` option -does not alter or encode the data at all but will just send exactly what you -tell it to. Just pay attention that `-d` sets a default `Content-Type:` that -might not be what you want. +does not alter or encode the data at all but just sends exactly what you tell +it to. Just pay attention that `-d` sets a default `Content-Type:` that might +not be what you want. 
diff --git a/http/put.md b/http/put.md index 10d4e89447..ae891d001c 100644 --- a/http/put.md +++ b/http/put.md @@ -11,9 +11,8 @@ identifies the resource and you point out the local file to put there: curl -T localfile http://example.com/new/resource/file -…so -T will imply a PUT and tell curl which file to send off. But the -similarities between POST and PUT also allows you to send a PUT with a string -by using the regular curl POST mechanism using `-d` but asking for it to use a -PUT instead: +`-T` implies a PUT and tells curl which file to send off. But the similarities +between POST and PUT also allow you to send a PUT with a string by using the +regular curl POST mechanism using `-d` but asking for it to use a PUT instead: curl -d "data to PUT" -X PUT http://example.com/new/resource/file diff --git a/http/ranges.md b/http/ranges.md index dc73a83985..73ebef1351 100644 --- a/http/ranges.md +++ b/http/ranges.md @@ -6,14 +6,14 @@ to ask for only a specific data range. The client asks the server for the specific range with a start offset and an end offset. It can even combine things and ask for several ranges in the same request by just listing a bunch of pieces next to each other. When a server sends back multiple independent -pieces to answer such a request, you will get them separated with mime -boundary strings and it will be up to the user application to handle that -accordingly. curl will not further separate such a response. +pieces to answer such a request, you get them separated with mime boundary +strings and it is up to the user application to handle that accordingly. curl +does not further separate such a response. However, a byte range is only a request to the server. It does not have to respect the request and in many cases, like when the server automatically -generates the contents on the fly when it is being asked, it will simply refuse
+generates the contents on the fly when it is being asked, it simply refuses to +do it and it then instead responds with the full contents anyway. You can make curl ask for a range with `-r` or `--range`. If you want the first 200 bytes out of something: diff --git a/http/redirects.md b/http/redirects.md index 0cabcd8466..676000df94 100644 --- a/http/redirects.md +++ b/http/redirects.md @@ -17,25 +17,25 @@ ask for, which can be absolute or relative. ## Permanent and temporary -Is the redirect meant to last or just remain valid for now? If you want a GET to -permanently redirect users to resource B with another GET, send -back a 301. It also means that the user-agent (browser) is meant to cache this -and keep going to the new URI from now on when the original URI is requested. +Is the redirect meant to last or just remain valid for now? If you want a GET +to permanently redirect users to resource B with another GET, send back +a 301. It also means that the user-agent (browser) is meant to cache this and +keep going to the new URI from now on when the original URI is requested. The temporary alternative is 302. Right now the server wants the client to send a GET request to B, but it should not cache this but keep trying the original URI when directed to it next time. -Note that both 301 and 302 will make browsers do a GET in the next request, -which possibly means changing the method if it started with a POST (and only if +Note that both 301 and 302 make browsers do a GET in the next request, which +possibly means changing the method if it started with a POST (and only if POST). This changing of the HTTP method to GET for 301 and 302 responses is said to be “for historical reasons”, but that’s still what browsers do so most -of the public web will behave this way. +of the public web behaves this way. -In practice, the 303 code is similar to 302. It will not be cached and it will -make the client issue a GET in the next request. 
The differences between a 302 -and 303 are subtle, but 303 seems to be more designed for an “indirect -response” to the original request rather than just a redirect. +In practice, the 303 code is similar to 302. It is not cached and it makes +the client issue a GET in the next request. The differences between a 302 and +303 are subtle, but 303 seems to be more designed for an “indirect response” +to the original request rather than just a redirect. These three codes were the only redirect codes in the HTTP/1.0 spec. @@ -48,17 +48,17 @@ In curl's tradition of only doing the basics unless you tell it differently, it does not follow HTTP redirects by default. Use the `-L, --location` option to tell it to do that. -When following redirects is enabled, curl will follow up to 50 redirects by -default. There is a maximum limit mostly to avoid the risk of getting caught in -endless loops. If 50 is not sufficient for you, you can change the maximum +When following redirects is enabled, curl follows up to 50 redirects by +default. There is a maximum limit mostly to avoid the risk of getting caught +in endless loops. If 50 is not sufficient for you, you can change the maximum number of redirects to follow with the `--max-redirs` option. ## GET or POST? -All three of these response codes, 301 and 302/303, will assume that the -client sends a GET to get the new URI, even if the client might have sent a -POST in the first request. This is important, at least if you do something -that does not use GET. +All three of these response codes, 301 and 302/303, assume that the client +sends a GET to get the new URI, even if the client might have sent a POST in +the first request. This is important, at least if you do something that does +not use GET.
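Redirect following with `-L` can be tried entirely locally, without touching the public web. The sketch below is not from the book: it assumes python3 is available and that the arbitrarily chosen port 8793 is free, serves a 301 pointing `/old` at `/new`, and lets curl follow it.

```shell
# Hypothetical helper: answers /old with "301 Location: /new" and
# everything else with a small 200 body. Handles exactly two requests.
python3 - <<'PYEOF' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hop(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/old':
            self.send_response(301)
            self.send_header('Location', '/new')
            self.send_header('Content-Length', '0')
            self.end_headers()
        else:
            body = b'arrived at /new\n'
            self.send_response(200)
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass

srv = HTTPServer(('127.0.0.1', 8793), Hop)
srv.handle_request()  # serves the 301
srv.handle_request()  # serves the follow-up request to /new
PYEOF
sleep 1
resp=$(curl -s -L http://127.0.0.1:8793/old)
echo "$resp"
wait
# prints: arrived at /new
```

Dropping `-L` from the curl command makes curl stop at the first response, so you only get the empty 301 body back.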
If the server instead wants to redirect the client to a new URI and wants it to send the same method in the second request as it did in the first, like if @@ -73,7 +73,7 @@ published in June 2014) so older clients may not treat it correctly! If so, then the only response code left for you is… The (older) response code to tell a client to send a POST also in the next -request but temporarily is 307. This redirect will not be cached by the client +request but temporarily is 307. This redirect is not cached by the client though, so it’ll again post to A if requested to again. The 307 code was introduced in HTTP/1.1. @@ -90,7 +90,7 @@ It turns out that there are web services out there in the world that want a POST sent to the original URL, but are responding with HTTP redirects that use a 301, 302 or 303 response codes and *still* want the HTTP client to send the next request as a POST. As explained above, browsers won’t do that and neither -will curl—by default. +does curl by default. Since these setups exist, and they’re actually not terribly rare, curl offers options to alter its behavior. @@ -134,4 +134,4 @@ and a full runtime that allows code to execute in the browser when visiting websites. JavaScript also provides means for it to instruct the browser to move on to -another site—a redirect, if you will. +another site - a redirect, if you will. diff --git a/http/response.md b/http/response.md index 14d309bc66..2f88637ecd 100644 --- a/http/response.md +++ b/http/response.md @@ -1,8 +1,8 @@ # Responses -When an HTTP client talks HTTP to a server, the server *will* respond with an -HTTP response message or curl will consider it an error and returns 52 with -the error message "Empty reply from server". +When an HTTP client talks HTTP to a server, the server responds with an HTTP +response message or curl considers it an error and returns 52 with the error +message `Empty reply from server`.
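The exit code 52 case is easy to reproduce: accept a TCP connection, read the request and then hang up without sending a single byte back. A sketch, assuming python3 and a free local port 8794 (both illustration-only choices):

```shell
# Hypothetical helper: accepts one connection, reads the request, then
# closes without responding, which curl reports as "Empty reply from server".
python3 - <<'PYEOF' &
import socket

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('127.0.0.1', 8794))
s.listen(1)
conn, _ = s.accept()
conn.recv(65536)  # consume the request before hanging up
conn.close()
s.close()
PYEOF
sleep 1
curl -s http://127.0.0.1:8794/ >/dev/null
rc=$?
echo "curl exit code: $rc"
wait
# prints: curl exit code: 52
```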
## Size of an HTTP response @@ -41,8 +41,8 @@ Remember that you can use curl's `--write-out` option to extract the response code. See the [--write-out](../usingcurl/verbose/writeout.md) section. To make curl return an error for response codes >= 400, you need to use -`--fail` or `--fail-with-body`. Then curl will exit with error code 22 for -such occurrences. +`--fail` or `--fail-with-body`. Then curl exits with error code 22 for such +occurrences. ## CONNECT response codes @@ -70,8 +70,7 @@ response has ended even though the server did not know the full size before it started to send it. This is usually the case when the response is dynamic and generated at the point when the request comes. -Clients like curl will, of course, decode the chunks and not show the chunk -sizes to users. +Clients like curl decode the chunks and do not show the chunk sizes to users. ## Gzipped transfers @@ -100,9 +99,9 @@ to allow transparent compression as a transfer encoding, and curl supports this feature. The client then simply asks the server to do compression transfer encoding and -if acceptable, it will respond with a header indicating that it will and curl -will then transparently uncompress that data on arrival. A user enables asking -for compressed transfer encoding with `--tr-encoding`: +if acceptable, it responds with a header indicating that it does and curl then +transparently decompresses that data on arrival. A curl user asks for a +compressed transfer encoding with `--tr-encoding`: curl --tr-encoding http://example.com/ diff --git a/http/versions/http09.md b/http/versions/http09.md index 13ed50c448..e6fcc100e6 100644 --- a/http/versions/http09.md +++ b/http/versions/http09.md @@ -4,9 +4,9 @@ The HTTP version used before HTTP/1.0 was made available is often referred to as HTTP/0.9. Back in those days, HTTP responses had no headers as they would only return a response body and then immediately close the connection. 
-curl can be told to support such responses but will by default not recognize -them, for security reasons. Almost anything bad will look like an HTTP/0.9 -response to curl so the option needs to be used with caution. +curl can be told to support such responses but by default it does not +recognize them, for security reasons. Almost anything bad looks like an +HTTP/0.9 response to curl so the option needs to be used with caution. The HTTP/0.9 option to curl is different than the other HTTP command line options for HTTP versions mentioned above as this controls what response to diff --git a/http/versions/http2.md b/http/versions/http2.md index 05788e4694..78bb265ed4 100644 --- a/http/versions/http2.md +++ b/http/versions/http2.md @@ -1,25 +1,24 @@ # HTTP/2 curl supports HTTP/2 for both HTTP:// and HTTPS:// URLs assuming that curl was -built with the proper prerequisites. It will even default to using HTTP/2 when -given an HTTPS URL since doing so implies no penalty and when curl is used with -sites that do not support HTTP/2 the request will instead negotiate HTTP/1.1. +built with the proper prerequisites. It even defaults to using HTTP/2 when +given an HTTPS URL since doing so implies no penalty and when curl is used +with sites that do not support HTTP/2 the request instead negotiates HTTP/1.1. With HTTP:// URLs however, the upgrade to HTTP/2 is done with an `Upgrade:` header that may cause an extra round-trip and perhaps even more troublesome, a -sizable share of old servers will return a 400 response when seeing such a -header. +sizable share of old servers returns a 400 response when seeing such a header. It should also be noted that some (most?) servers that support HTTP/2 for -HTTP:// (which in itself is not all servers) will not acknowledge the +`HTTP://` (which in itself is not all servers) do not acknowledge the `Upgrade:` header on POST, for example. 
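Whether a given curl binary can do HTTP/2 at all shows up in its feature list, so a quick check that needs no network access looks like this (a sketch; the grep pattern matches the `HTTP2` entry on the `Features:` line of the version output):

```shell
# Report whether the curl binary in PATH was built with HTTP/2 support.
if curl --version | grep -qw HTTP2; then
  echo "this curl can speak HTTP/2"
else
  echo "this curl was built without HTTP/2"
fi
```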
To ask a server to use HTTP/2, just: curl --http2 http://example.com/ -If your curl does not support HTTP/2, that command line will return an error -saying so. Running `curl -V` will show if your version of curl supports it. +If your curl does not support HTTP/2, that command line returns an error +saying so. Running `curl -V` shows if your version of curl supports it. If you by some chance already know that your server speaks HTTP/2 (for example, within your own controlled environment where you know exactly what runs in diff --git a/http/versions/http3.md b/http/versions/http3.md index e00a971014..e5f7453c74 100644 --- a/http/versions/http3.md +++ b/http/versions/http3.md @@ -10,9 +10,8 @@ negotiate a dedicated connection for it. HTTP/3 is the HTTP version that is designed to communicate over QUIC. QUIC can for most particular purposes be considered a TCP+TLS replacement. -All transfers that use HTTP/3 will therefore not use TCP. They will use QUIC. -QUIC is a reliable transport protocol built over UDP. HTTP/3 implies use of -QUIC. +All transfers that use HTTP/3 therefore do not use TCP. They use QUIC. QUIC is +a reliable transport protocol built over UDP. HTTP/3 implies use of QUIC. ## HTTPS only @@ -44,20 +43,20 @@ bootstrap into HTTP/3 for a server. Note that you need that feature built-in and that it does not switch to HTTP/3 for the *current* request unless the alt-svc cache is already populated, but -it will rather store the info for use in the *next* request to the host. +it rather stores the info for use in the *next* request to the host. ## When QUIC is denied -A certain amount of QUIC connection attempts will fail, partly because many +A certain amount of QUIC connection attempts fail, partly because many networks and hosts block or throttle the traffic.
-When `--http3` is used, curl will start a second transfer attempt a few -hundred milliseconds after the QUIC connection is initiated which is using -HTTP/2 or HTTP/1, so that if the connection attempt over QUIC fails or turns -out to be unbearably slow, the connection using an older HTTP version can -still succeed and perform the transfer. This allows users to use `--http3` -with some amount of confidence that the operation will work. +When `--http3` is used, curl starts a second transfer attempt a few hundred +milliseconds after the QUIC connection is initiated which is using HTTP/2 or +HTTP/1, so that if the connection attempt over QUIC fails or turns out to be +unbearably slow, the connection using an older HTTP version can still succeed +and perform the transfer. This allows users to use `--http3` with some amount +of confidence that the operation works. `--http3-only` is provided to explicitly *not* try any older version in -parallel, but will thus make the transfer fail immediately if no QUIC -connection can be established. +parallel, but thus makes the transfer fail immediately if no QUIC connection +can be established. diff --git a/internals/content-encoding.md b/internals/content-encoding.md index f53c0c26b5..944ef9f474 100644 --- a/internals/content-encoding.md +++ b/internals/content-encoding.md @@ -39,12 +39,12 @@ Currently, libcurl does support multiple encodings but only understands how to process responses that use the `deflate`, `gzip`, `zstd` and/or `br` content encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5] that - will work (besides `identity`, which does nothing) are `deflate`, `gzip`, - `zstd` and `br`. If a response is encoded using the `compress` or methods, - libcurl will return an error indicating that the response could not be - decoded. If `` is NULL no `Accept-Encoding` header is generated. If - `` is a zero-length string, then an `Accept-Encoding` header - containing all supported encodings will be generated. 
+ work (besides `identity`, which does nothing) are `deflate`, `gzip`, `zstd` + and `br`. If a response is encoded using the `compress` or methods, libcurl + returns an error indicating that the response could not be decoded. If + `` is NULL no `Accept-Encoding` header is generated. If `` is + a zero-length string, then an `Accept-Encoding` header containing all + supported encodings is generated. The [`CURLOPT_ACCEPT_ENCODING`][5] must be set to any non-NULL value for content to be automatically decoded. If it is not set and the server still diff --git a/internals/handler.md b/internals/handler.md index 83eba55230..1b1ae29936 100644 --- a/internals/handler.md +++ b/internals/handler.md @@ -1,9 +1,9 @@ # Protocol handler libcurl is a multi-protocol transfer library. The core of the code is a set of -generic functions that are used for transfers in general and will mostly work -the same for all protocols. The main state machine described above for example -is there and works for all protocols - even though some protocols may not make +generic functions that are used for transfers in general and mostly work the +same for all protocols. The main state machine described above for example is +there and works for all protocols - even though some protocols may not make use of all states for all transfers. However, each different protocol libcurl speaks also has its unique diff --git a/internals/memory-debugging.md b/internals/memory-debugging.md index ee44b0ac33..13f049b7d2 100644 --- a/internals/memory-debugging.md +++ b/internals/memory-debugging.md @@ -22,11 +22,11 @@ switched on by running configure with `--enable-curldebug`. Use `-DDEBUGBUILD` when compiling to enable a debug build or run configure with `--enable-debug`. -`curl --version` will list 'Debug' feature for debug enabled builds, and will -list 'TrackMemory' feature for curl debug memory tracking capable -builds. These features are independent and can be controlled when running the -configure script.
When `--enable-debug` is given both features will be -enabled, unless some restriction prevents memory tracking from being used. +`curl --version` lists the `Debug` feature for debug enabled builds, and lists +the `TrackMemory` feature for curl debug memory tracking capable builds. These +features are independent and can be controlled when running the configure +script. When `--enable-debug` is given both features get enabled, unless some +restriction prevents memory tracking from being used. ## Track Down Memory Leaks @@ -43,11 +43,11 @@ first choice. Rebuild libcurl with `-DCURLDEBUG` (usually, rerunning configure with `--enable-debug` fixes this). `make clean` first, then `make` so that all - files are actually rebuilt properly. It will also make sense to build - libcurl with the debug option (usually `-g` to the compiler) so that - debugging it will be easier if you actually do find a leak in the library. + files are actually rebuilt properly. It also makes sense to build libcurl + with the debug option (usually `-g` to the compiler) so that debugging it + gets easier if you actually do find a leak in the library. - This will create a library that has memory debugging enabled. + This builds a library that has memory debugging enabled. ### Modify Your Application @@ -57,7 +57,7 @@ first choice. curl_dbg_memdebug("dump"); ``` - This will make the malloc debug system output a full trace of all resources + This makes the malloc debug system output a full trace of all resources using functions to the given file name. Make sure you rebuild your program and that you link with the same libcurl you built for this purpose as described above. diff --git a/internals/multi.md b/internals/multi.md index ae1a5a9456..54aa9bf1eb 100644 --- a/internals/multi.md +++ b/internals/multi.md @@ -11,6 +11,6 @@ never waiting for data in loop or similar. Unless they are the "surface" functions that have that expressed functionality. 
The function `curl_easy_perform()` which performs a single transfer -synchronously, is itself just a wrapper function that internally will setup -and use the multi interface itself. +synchronously, is itself just a wrapper function that internally sets up and +uses the multi interface itself. diff --git a/internals/resolving.md b/internals/resolving.md index 415adc7749..4eaa459496 100644 --- a/internals/resolving.md +++ b/internals/resolving.md @@ -19,9 +19,9 @@ ## `CURLRES_THREADED` is defined if libcurl is built to use threading for asynchronous name - resolves. The name resolve will be done in a new thread, and the supported - asynch API will be the same as for ares-builds. This is the default under - (native) Windows. + resolves. The name resolve is done in a new thread, and the supported asynch + API is the same as for ares-builds. This is the default under (native) + Windows. If any of the two previous are defined, `CURLRES_ASYNCH` is defined too. If libcurl is not built to use an asynchronous resolver, `CURLRES_SYNCH` is defined diff --git a/internals/statemachines.md b/internals/statemachines.md index 326a96ef0d..95955e31b7 100644 --- a/internals/statemachines.md +++ b/internals/statemachines.md @@ -15,7 +15,7 @@ One of the primary states is the main transfer "mode" the easy handle holds, which says if the current transfer is resolving, waiting for a resolve, connecting, waiting for a connect, issuing a request, doing a transfer etc (see the `CURLMstate` enum in `lib/multihandle.h`). Every transfer done with -libcurl has an associated easy handle and every easy handle will exercise that +libcurl has an associated easy handle and every easy handle exercises that state machine. The image below shows all states and possible state transitions. See further diff --git a/internals/structs.md b/internals/structs.md index 22b13c5db8..57c07577df 100644 --- a/internals/structs.md +++ b/internals/structs.md @@ -1,8 +1,8 @@ # Structs -This section documents internal structs.
Since they are truly internal, we can -and will change them occasionally which might make this section slightly out -of date at times. +This section documents internal structs. Since they are truly internal, we +change them occasionally which might make this section slightly out of date at +times. ## Curl_easy @@ -11,22 +11,22 @@ of date at times. in API documentations and examples. Information and state that is related to the actual connection is in the - `connectdata` struct. When a transfer is about to be made, libcurl will - either create a new connection or re-use an existing one. The current - connectdata that is used by this handle is pointed out by `Curl_easy->conn`. + `connectdata` struct. When a transfer is about to be made, libcurl either + creates a new connection or re-uses an existing one. The current connectdata + that is used by this handle is pointed out by `Curl_easy->conn`. Data and information that regard this particular single transfer is put in the `SingleRequest` sub-struct. When the `Curl_easy` struct is added to a multi handle, as it must be in - order to do any transfer, the `->multi` member will point to the - `Curl_multi` struct it belongs to. The `->prev` and `->next` members will - then be used by the multi code to keep a linked list of `Curl_easy` structs - that are added to that same multi handle. libcurl always uses multi so - `->multi` *will* point to a `Curl_multi` when a transfer is in progress. + order to do any transfer, the `->multi` member points to the `Curl_multi` + struct it belongs to. The `->prev` and `->next` members are then used by the + multi code to keep a linked list of `Curl_easy` structs that are added to + that same multi handle. libcurl always uses multi so `->multi` points to a + `Curl_multi` when a transfer is in progress. `->mstate` is the multi state of this particular `Curl_easy`. 
When - `multi_runsingle()` is called, it will act on this handle according to which + `multi_runsingle()` is called, it acts on this handle according to which state it is in. The mstate is also what tells which sockets to return for a specific `Curl_easy` when [`curl_multi_fdset()`][12] is called etc. @@ -40,14 +40,13 @@ of date at times. ## connectdata A general idea in libcurl is to keep connections around in a connection - "cache" after they have been used in case they will be used again and then + "cache" after they have been used in case they are used again and then re-use an existing one instead of creating a new one as it creates a significant performance boost. Each `connectdata` struct identifies a single physical connection to a - server. If the connection cannot be kept alive, the connection will be - closed after use and then this struct can be removed from the cache and - freed. + server. If the connection cannot be kept alive, the connection is closed + after use and then this struct can be removed from the cache and freed. Thus, the same `Curl_easy` can be used multiple times and each time select another `connectdata` struct to use for the connection. Keep this in mind, @@ -127,13 +126,13 @@ of date at times. - `->setup_connection` is called to allow the protocol code to allocate protocol specific data that then gets associated with that `Curl_easy` for the rest of this transfer. It gets freed again at the end of the transfer. - It will be called before the `connectdata` for the transfer has been - selected/created. Most protocols will allocate its private `struct - [PROTOCOL]` here and assign `Curl_easy->req.p.[protocol]` to it. + It gets called before the `connectdata` for the transfer has been + selected/created. Most protocols allocate its private `struct [PROTOCOL]` + here and assign `Curl_easy->req.p.[protocol]` to it. 
- `->connect_it` allows a protocol to do some specific actions after the TCP connect is done, that can still be considered part of the connection - phase. Some protocols will alter the `connectdata->recv[]` and + phase. Some protocols alter the `connectdata->recv[]` and `connectdata->send[]` function pointers in this function. - `->connecting` is similarly a function that keeps getting called as long @@ -177,19 +176,19 @@ of date at times. "HTTP|HTTPS". - `->flags` is a bitmask with additional information about the protocol that - will make it get treated differently by the generic engine: - - `PROTOPT_SSL` - will make it connect and negotiate SSL + makes it get treated differently by the generic engine: + - `PROTOPT_SSL` - makes it connect and negotiate SSL - `PROTOPT_DUAL` - this protocol uses two connections - `PROTOPT_CLOSEACTION` - this protocol has actions to do before closing the connection. This flag is no longer used by code, yet still set for a bunch of protocol handlers. - `PROTOPT_DIRLOCK` - "direction lock". The SSH protocols set this bit to - limit which "direction" of socket actions that the main engine will - concern itself with. + limit which "direction" of socket actions the main engine concerns + itself with. - `PROTOPT_NONETWORK` - a protocol that does not use the network (read `file:`) - - `PROTOPT_NEEDSPWD` - this protocol needs a password and will use a - default one unless one is provided + - `PROTOPT_NEEDSPWD` - this protocol needs a password and uses a default + one unless one is provided - `PROTOPT_NOURLQUERY` - this protocol cannot handle a query part on the URL (?foo=bar) @@ -206,8 +205,7 @@ of date at times. The idea is that the struct can have a set of its own versions of caches and pools and then by providing this struct in the `CURLOPT_SHARE` option, those - specific `Curl_easy`s will use the caches/pools that this share handle - holds. + specific `Curl_easy`s use the caches/pools that this share handle holds.
Then individual `Curl_easy` structs can be made to share specific things that they otherwise would not, such as cookies. diff --git a/internals/tests/file-format.md b/internals/tests/file-format.md index 476968dfad..8e3d55e6e7 100644 --- a/internals/tests/file-format.md +++ b/internals/tests/file-format.md @@ -97,7 +97,7 @@ For example, to insert the word hello a 100 times: Lines in the test file can be made to appear conditionally on a specific feature (see the "features" section below) being set or not set. If the -specific feature is present, the following lines will be output, otherwise it +specific feature is present, the following lines are output, otherwise it outputs nothing, until a following `else` or `endif` clause. Like this: %if brotli @@ -125,7 +125,7 @@ conditional at a time and you can only check for a single feature in it. ## Variables -When the test is preprocessed, a range of "variables" in the test file will be +When the test is preprocessed, a range of variables in the test file are replaced by their content at that time. Available substitute variables include: @@ -199,13 +199,13 @@ requests curl sends been run ended up correctly Each main section supports a number of available *sub-tags* that can be -specified, that will be checked/used if specified. +specified, that are checked/used if specified. ## `<info>` ### `<keywords>` A newline-separated list of keywords describing what this test case uses and -tests. Try to use already used keywords. These keywords will be used for +tests. Try to use already used keywords. These keywords are used for statistical/informational purposes and for choosing or skipping classes of tests. "Keywords" must begin with an alphabetic character, "-", "[" or "{" and may consist of multiple words separated by spaces which are treated together @@ -222,30 +222,30 @@ arrived safely. Set `nocheck="yes"` to prevent the test script from verifying the arrival of this data.
If the data contains `swsclose` anywhere within the start and end tag, and -this is an HTTP test, then the connection will be closed by the server after -this response is sent. If not, the connection will be kept persistent. +this is an HTTP test, then the connection is closed by the server after this +response is sent. If not, the connection is kept persistent. If the data contains `swsbounce` anywhere within the start and end tag, the -HTTP server will detect if this is a second request using the same test and -part number and will then increase the part number with one. This is useful -for auth tests and similar. +HTTP server detects if this is a second request using the same test and part +number and then increases the part number by one. This is useful for auth +tests and similar. -`sendzero=yes` means that the (FTP) server will "send" the data even if the -size is zero bytes. Used to verify curl's behavior on zero bytes transfers. +`sendzero=yes` means that the (FTP) server "sends" the data even if the size +is zero bytes. Used to verify curl's behavior on zero-byte transfers. `base64=yes` means that the data provided in the test-file is a chunk of data encoded with base64. It is the only way a test case can contain binary data. (This attribute can in fact be used on any section, but it does not make much sense for other sections than "data"). -`hex=yes` means that the data is a sequence of hex pairs. It will get decoded -and used as "raw" data. +`hex=yes` means that the data is a sequence of hex pairs. It gets decoded and +used as "raw" data. `nonewline=yes` means that the last byte (the trailing newline character) should be cut off from the data before sending or comparing it.
-For FTP file listings, the `<data>` section will be used *only* if you make -sure that there has been a CWD done first to a directory named `test-[number]` +For FTP file listings, the `<data>` section is used *only* if you make sure +that there has been a CWD done first to a directory named `test-[number]` where `[number]` is the test case number. Otherwise the ftp server can't know from which test file to load the list content. @@ -280,8 +280,8 @@ Address type and address details as logged by the SOCKS proxy. ### `<datacheck>` if the data is sent but this is what should be checked afterwards. If -`nonewline=yes` is set, runtests will cut off the trailing newline from the -data before comparing with the one actually received by the client. +`nonewline=yes` is set, runtests cuts off the trailing newline from the data +before comparing with the one actually received by the client. Use the `mode="text"` attribute if the output is in text mode on platforms that have a text/binary difference. @@ -307,9 +307,9 @@ For HTTP/HTTPS, these are supported: ### `<servercmd>` Special-commands for the server. -The first line of this file will always be set to `Testnum [number]` by the -test script, to allow servers to read that to know what test the client is -about to issue. +The first line of this file is always set to `Testnum [number]` by the test +script, to allow servers to read that to know what test the client is about to +issue. #### For FTP/SMTP/POP/IMAP @@ -341,7 +341,7 @@ about to issue. #### For HTTP/HTTPS - `auth_required` if this is set and a POST/PUT is made without auth, the - server will NOT wait for the full request body to get sent + server does NOT wait for the full request body to get sent - `idle` - do nothing after receiving the request, just "sit idle" - `stream` - continuously send data to the client, never-ending - `writedelay: [msecs]` delay this amount between reply packets @@ -349,9 +349,9 @@ about to issue.
from a PUT or POST request - `rtp: part [num] channel [num] size [num]` - stream a fake RTP packet for the given part on a chosen channel with the given payload size -- `connection-monitor` - When used, this will log `[DISCONNECT]` to the +- `connection-monitor` - When used, this logs `[DISCONNECT]` to the `server.input` log when the connection is disconnected. -- `upgrade` - when an HTTP upgrade header is found, the server will upgrade to +- `upgrade` - when an HTTP upgrade header is found, the server upgrades to http2 - `swsclose` - instruct server to close connection after response - `no-expect` - do not read the request body if Expect: is present @@ -395,12 +395,10 @@ Enter only one server per line. This subsection is mandatory. ### `<features>` A list of features that MUST be present in the client/library for this test to -be able to run. If a required feature is not present then the test will be -SKIPPED. +be able to run. If a required feature is not present then the test is SKIPPED. Alternatively a feature can be prefixed with an exclamation mark to indicate a -feature is NOT required. If the feature is present then the test will be -SKIPPED. +feature is NOT required. If the feature is present then the test is SKIPPED. Features testable here are: @@ -471,13 +469,12 @@ restart servers. ### `<precheck>` A command line that if set gets run by the test script before the test. If an output is displayed by the command or if the return code is non-zero, the test -will be skipped and the (single-line) output will be displayed as reason for -not running the test. +gets skipped and the (single-line) output is displayed as reason for not +running the test. ### `<postcheck>` -A command line that if set gets run by the test script after the test. If -the command exists with a non-zero status code, the test will be considered -to have failed. +A command line that if set gets run by the test script after the test. If the +command exits with a non-zero status code, the test is considered failed.
### `<tool>` Name of tool to invoke instead of "curl". This tool must be built and exist @@ -499,15 +496,15 @@ Command line to run. Note that the URL that gets passed to the server actually controls what data that is returned. The last slash in the URL must be followed by a number. That -number (N) will be used by the test-server to load test case N and return the -data that is defined within the `<data>` section. +number (N) is used by the test-server to load test case N and return the data +that is defined within the `<data>` section. -If there is no test number found above, the HTTP test server will use the -number following the last dot in the given hostname (made so that a CONNECT -can still pass on test number) so that "foo.bar.123" gets treated as test case +If there is no test number found above, the HTTP test server uses the number +following the last dot in the given hostname (made so that a CONNECT can still +pass on test number) so that "foo.bar.123" gets treated as test case 123. Alternatively, if an IPv6 address is provided to CONNECT, the last -hexadecimal group in the address will be used as the test number! For example -the address "[1234::ff]" would be treated as test case 255. +hexadecimal group in the address is used as the test number! For example the +address "[1234::ff]" would be treated as test case 255. Set `type="perl"` to write the test case as a perl script. It implies that there is no memory debugging and valgrind gets shut off for this test. @@ -546,13 +543,13 @@ needed. This creates the named file with this content before the test case is run, which is useful if the test case needs a file to act on. -If `nonewline="yes"` is used, the created file will have the final newline -stripped off. +If `nonewline="yes"` is used, the created file gets the final newline stripped +off. ### `<stdin>` Pass this given data on stdin to the tool.
-If `nonewline` is set, we will cut off the trailing newline of this given data +If `nonewline` is set, we cut off the trailing newline of this given data before comparing with the one actually received by the client ## `<verify>` @@ -572,17 +569,17 @@ advanced. Example: `s/^EPRT .*/EPRT stripped/`. ### `<protocol>` -the protocol dump curl should transmit, if `nonewline` is set, we will cut off -the trailing newline of this given data before comparing with the one actually +the protocol dump curl should transmit, if `nonewline` is set, we cut off the +trailing newline of this given data before comparing with the one actually sent by the client The `<strip>` and `<stripfile>` rules are applied before comparisons are made. ### `<proxy>` The protocol dump curl should transmit to an HTTP proxy (when the http-proxy -server is used), if `nonewline` is set, we will cut off the trailing newline -of this given data before comparing with the one actually sent by the client -The `<strip>` and `<stripfile>` rules are applied before comparisons are made. +server is used), if `nonewline` is set, we cut off the trailing newline of +this given data before comparing with the one actually sent by the client. The +`<strip>` and `<stripfile>` rules are applied before comparisons are made. ### `<stderr>` This verifies that this data was passed to stderr. @@ -590,7 +587,7 @@ This verifies that this data was passed to stderr. Use the `mode="text"` attribute if the output is in text mode on platforms that have a text/binary difference. -If `nonewline` is set, we will cut off the trailing newline of this given data +If `nonewline` is set, we cut off the trailing newline of this given data before comparing with the one actually received by the client ### `<stdout>` This verifies that this data was passed to stdout. Use the `mode="text"` attribute if the output is in text mode on platforms that have a text/binary difference.
-If `nonewline` is set, we will cut off the trailing newline of this given data +If `nonewline` is set, we cut off the trailing newline of this given data before comparing with the one actually received by the client ### `<file>` diff --git a/internals/tests/run.md b/internals/tests/run.md index 5eb9abfc5b..51ef79e35b 100644 --- a/internals/tests/run.md +++ b/internals/tests/run.md @@ -26,8 +26,8 @@ goes and performs the entire thing through the debugger. ## Run a specific test without valgrind -The test suite will use valgrind by default if it finds it, which is an -excellent way to find problems but it also makes the test run much -slower. Sometimes you want to do it faster: +The test suite uses valgrind by default if it finds it, which is an excellent +way to find problems but it also makes the test run much slower. Sometimes you +want to do it faster: ./runtests.pl -n 144 diff --git a/internals/tests/torture.md b/internals/tests/torture.md index cf0a627310..d53c1a6afd 100644 --- a/internals/tests/torture.md +++ b/internals/tests/torture.md @@ -21,7 +21,7 @@ This way of testing can take a seriously long time. I advise you to switch off ## Rerun a specific failure -If a single test fails, `runtests.pl` will identify exactly which "round" that +If a single test fails, `runtests.pl` identifies exactly which "round" triggered the problem and by using the `-t` as shown, you can run a command line that when invoked *only* fails that particular fallible function. @@ -30,12 +30,11 @@ line that when invoked *only* fails that particular fallible function. To make this way of testing a little more practical, the test suite also provides a `--shallow` option. This lets the user set a maximum number of fallible functions to fail per test case. If there are more invokes to fail -than is set with this value, the script will randomly select which ones to -fail. +than is set with this value, the script randomly selects which ones to fail.
As a special feature, as randomizing things in tests can be uncomfortable, the -script will use a random seed based on year + month, so it will remain the -same for each calendar month. Convenient, as if you rerun the same test with -the same `--shallow` value it will run the same random tests. +script uses a random seed based on year + month, so it remains the same for +each calendar month. This is convenient, as rerunning the same test with the +same `--shallow` value runs the same random tests. You can force a different seed with runtests' `--seed` option. diff --git a/internals/tests/valgrind.md b/internals/tests/valgrind.md index a504ae99fe..38b71eb467 100644 --- a/internals/tests/valgrind.md +++ b/internals/tests/valgrind.md @@ -4,7 +4,7 @@ Valgrind is a popular and powerful tool for debugging programs and especially their use and abuse of memory. `runtests.pl` automatically detects if valgrind is installed on your system -and will by default run tests using valgrind if found. You can pass `-n` to +and by default runs tests using valgrind if found. You can pass `-n` to runtests to disable the use of valgrind. Valgrind makes execution much slower, but it is an excellent tool to find diff --git a/internals/timeouts.md b/internals/timeouts.md index 1f13218414..77d1f0b587 100644 --- a/internals/timeouts.md +++ b/internals/timeouts.md @@ -42,6 +42,6 @@ libcurl again for a specific given easy handle for which the timeout has expired. There is no other special action or activity happening when a timeout expires -than that the perform function will be called. Each state or internal function +than that the perform function is called. Each state or internal function needs to know what times or states to check for and act accordingly when called (again).
diff --git a/libcurl-http/alt-svc.md b/libcurl-http/alt-svc.md index 3da85c2de2..605496db52 100644 --- a/libcurl-http/alt-svc.md +++ b/libcurl-http/alt-svc.md @@ -24,12 +24,11 @@ Tell libcurl to use a specific alt-svc cache file like this: curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt"); -libcurl holds the list of alternatives in a memory-based cache, but will load -all already existing alternative service entries from the alt-svc file at -start-up and consider those when doing its subsequent HTTP requests. If -servers responds with new or updated `Alt-Svc:` headers, libcurl will store -those in the cache file at exit (unless the `CURLALTSVC_READONLYFILE` bit was -set). +libcurl holds the list of alternatives in a memory-based cache, but loads all +already existing alternative service entries from the alt-svc file at start-up +and considers those when doing its subsequent HTTP requests. If servers +respond with new or updated `Alt-Svc:` headers, libcurl stores those in the +cache file at exit (unless the `CURLALTSVC_READONLYFILE` bit was set). ## The alt-svc cache diff --git a/libcurl-http/auth.md b/libcurl-http/auth.md index 438a7a8362..eee0a8d46d 100644 --- a/libcurl-http/auth.md +++ b/libcurl-http/auth.md @@ -9,7 +9,7 @@ for details on how to do that. ## User name and password -libcurl will not try any HTTP authentication without a given user name. Set +libcurl does not try any HTTP authentication without a given user name. Set one like: curl_easy_setopt(curl, CURLOPT_USERNAME, "joe"); @@ -19,16 +19,16 @@ separately: curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); -That is all you need. This will make libcurl switch on its default -authentication method for this transfer: *HTTP Basic*. +That is all you need. This makes libcurl switch on its default authentication +method for this transfer: *HTTP Basic*. ## Authentication required A client does not itself decide that it wants to send an authenticated request. It is something the server requires.
When the server has a resource -that is protected and requires authentication, it will respond with a 401 HTTP -response and a `WWW-Authenticate:` header. The header will include details -about what specific authentication methods it accepts for that resource. +that is protected and requires authentication, it responds with a 401 HTTP +response and a `WWW-Authenticate:` header. The header includes details about +what specific authentication methods it accepts for that resource. ## Basic @@ -43,7 +43,7 @@ outgoing header looks like this: Authorization: Basic am9lOnNlY3JldA== This authentication method is totally insecure over HTTP as the credentials -will then be sent in plain-text over the network. +are sent in plain-text over the network. You can explicitly tell libcurl to use Basic method for a specific transfer like this: @@ -96,10 +96,10 @@ To pass on an OAuth 2.0 Bearer Access Token in a request, use ## Try-first Some HTTP servers allow one out of several authentication methods, in some -cases you will find yourself in a position where you as a client does not want -or is not able to select a single specific method before-hand and for yet -another subset of cases your application does not know if the requested URL -even require authentication or not! +cases you find yourself in a position where you as a client do not want or +are not able to select a single specific method beforehand and for yet another +subset of cases your application does not know if the requested URL even +requires authentication or not! libcurl covers all these situations as well. @@ -107,8 +107,8 @@ You can ask libcurl to use more than one method, and when doing so, you imply that curl first tries the request without any authentication at all and then based on the HTTP response coming back, it selects one of the methods that both the server and your application allow.
If more than one would work, curl -will pick them in a order based on how secure the methods are considered to -be, picking the safest of the available methods. +picks them in an order based on how secure the methods are considered to be, +picking the safest of the available methods. Tell libcurl to accept multiple methods by bitwise ORing them like this: diff --git a/libcurl-http/cookies.md b/libcurl-http/cookies.md index 6391f15d45..0e70496105 100644 --- a/libcurl-http/cookies.md +++ b/libcurl-http/cookies.md @@ -14,9 +14,9 @@ of cookies. ## Cookie engine When you enable the "cookie engine" for a specific easy handle, it means that -it will record incoming cookies, store them in the in-memory "cookie store" -that is associated with the easy handle and subsequently send the proper ones -back if an HTTP request is made that matches. +it records incoming cookies, stores them in the in-memory "cookie store" that +is associated with the easy handle and subsequently sends the proper ones back +if an HTTP request is made that matches. There are two ways to switch on the cookie engine: @@ -30,7 +30,7 @@ the `CURLOPT_COOKIEFILE` option: A common trick is to just specify a non-existing filename or plain "" to have it just activate the cookie engine with a blank cookie store to start with. -This option can be set multiple times and then each of the given files will be +This option can be set multiple times and then each of the given files is read. ### Enable cookie engine with writing @@ -41,7 +41,7 @@ option: curl_easy_setopt(easy, CURLOPT_COOKIEJAR, "cookies.txt"); when the easy handle is closed later with `curl_easy_cleanup()`, all known -cookies will be written to the given file. The file format is the well-known +cookies are stored in the given file. The file format is the well-known "Netscape cookie file" format that browsers also once used.
## Setting custom cookies diff --git a/libcurl-http/download.md b/libcurl-http/download.md index 6fa2afbd04..41473935c1 100644 --- a/libcurl-http/download.md +++ b/libcurl-http/download.md @@ -19,7 +19,7 @@ there is the `CURLOPT_HTTPGET` option: An HTTP transfer also includes a set of response headers. Response headers are metadata associated with the actual payload, called the response body. All -downloads will get a set of headers too, but when using libcurl you can select +downloads get a set of headers too, but when using libcurl you can select whether you want to have them downloaded (seen) or not. You can ask libcurl to pass on the headers to the same "stream" as the regular @@ -42,7 +42,7 @@ the default behaviors of the [write](../libcurl/callbacks/write.md) and fclose(file); If you only want to casually browse the headers, you may even be happy enough -with just setting verbose mode while developing as that will show both outgoing +with just setting verbose mode while developing as that shows both outgoing and incoming headers sent to stderr: curl_easy_setopt(easy, CURLOPT_VERBOSE, 1L); diff --git a/libcurl-http/headerapi.md b/libcurl-http/headerapi.md index 65e921d845..d0f7ccede3 100644 --- a/libcurl-http/headerapi.md +++ b/libcurl-http/headerapi.md @@ -4,7 +4,7 @@ libcurl offers an API for iterating over all received HTTP headers and for extracting the contents from specific ones. When returning header content, libcurl trims leading and trailing whitespace -but will not modify or change content in any other way. +but does not modify or change content in any other way. This API was made official and is provided for real starting in libcurl 7.84.0. @@ -47,10 +47,9 @@ where the different parts are separated by a single whitespace character. The two header API function calls are perfectly possible to call at any time during a transfer, both from inside and outside of callbacks. 
It is however -important to remember that the API will of course only return information -about the state of the headers at the exact moment it is called, which might -not be the final status if you call it while the transfer is still in -progress. +important to remember that the API only returns information about the state of +the headers at the exact moment it is called, which might not be the final +status if you call it while the transfer is still in progress. - [Header struct](headerapi/struct.md) - [Get a header](headerapi/get.md) diff --git a/libcurl-http/headerapi/get.md b/libcurl-http/headerapi/get.md index 42ebd1d290..12d43bba6d 100644 --- a/libcurl-http/headerapi/get.md +++ b/libcurl-http/headerapi/get.md @@ -11,17 +11,17 @@ This function returns information about a field with a specific **name**, and you ask the function to search for it in one or more **origins**. The **index** argument is when you want to ask for the nth occurrence of a -header; when there are more than one available. Setting **index** to 0 will of -course return the first instance - in many cases that is the only one. +header; when there are more than one available. Setting **index** to 0 returns +the first instance - in many cases that is the only one. The **request** argument tells libcurl from which request you want headers from. An application needs to pass in a pointer to a `struct curl_header *` in the -last argument, as there will be a pointer returned there when an error is not +last argument, as a pointer is returned there when an error is not returned. See [Header struct](struct.md) for details on the **out** result of a successful call. If the given name does not match any received header in the given origin, the function returns `CURLHE_MISSING` or if no headers *at all* have been received -yet it will return `CURLHE_NOHEADERS`. +yet it returns `CURLHE_NOHEADERS`. 
diff --git a/libcurl-http/headerapi/struct.md b/libcurl-http/headerapi/struct.md index a03a5a0890..e41c3e501c 100644 --- a/libcurl-http/headerapi/struct.md +++ b/libcurl-http/headerapi/struct.md @@ -1,9 +1,9 @@ # Header struct The header struct pointer the header API functions return, points to memory -associated with the easy handle and subsequent calls to the functions will -clobber that struct. Applications need to copy the data if they want to keep -it around. The memory used for the struct gets freed with calling +associated with the easy handle and subsequent calls to the functions clobber +that struct. Applications need to copy the data if they want to keep it +around. The memory used for the struct gets freed with calling `curl_easy_cleanup()`. ## The struct @@ -17,8 +17,8 @@ it around. The memory used for the struct gets freed with calling void *anchor; }; -**name** is the name of header. It will use the casing used for the first -instance of the header with this name. +**name** is the name of the header. It uses the casing used for the first +instance of the header with this name. **value** is the content. It comes exactly as delivered over the network but with leading and trailing whitespace and newlines stripped off. The data is diff --git a/libcurl-http/hsts.md b/libcurl-http/hsts.md index f8646b8ca6..ae428a595a 100644 --- a/libcurl-http/hsts.md +++ b/libcurl-http/hsts.md @@ -9,7 +9,7 @@ Here is how you use HSTS with libcurl. ## In-memory cache libcurl primarily features an in-memory cache for HSTS hosts, so that -subsequent HTTP-only requests to a host name present in the cache will get +subsequent HTTP-only requests to a host name present in the cache get internally "redirected" to the HTTPS version. Assuming you have this feature enabled. @@ -27,5 +27,5 @@ cache is only to be read from, and not write anything back to. ## Set a HSTS cache file If you want to persist the HSTS cache on disk, then set a filename with the -`CURLOPT_HSTS` option.
libcurl will read from this file at start of a transfer -and write to it (unless it was set read-only) when the easy handle is closed. +`CURLOPT_HSTS` option. libcurl reads from this file at start of a transfer and +writes to it (unless it was set read-only) when the easy handle is closed. diff --git a/libcurl-http/multiplexing.md b/libcurl-http/multiplexing.md index 213c62d376..e5ab1eaf50 100644 --- a/libcurl-http/multiplexing.md +++ b/libcurl-http/multiplexing.md @@ -17,11 +17,11 @@ For all practical purposes and API behaviors, an application does not have to care about if multiplexing is done or not. libcurl enables multiplexing by default, but if you start multiple transfers -at the same time they will prioritize short-term speed to a connection so they -might then rather open new connections than waiting for a connection to get -created by another transfer to be able to multiplex over. To tell libcurl to -prioritize multiplexing, set the `CURLOPT_PIPEWAIT` option for the transfer -with `curl_easy_setopt()`. +at the same time they prioritize short-term speed so they might then open new +connections rather than waiting for a connection to get created by another +transfer to be able to multiplex over. To tell libcurl to prioritize +multiplexing, set the `CURLOPT_PIPEWAIT` option for the transfer with +`curl_easy_setopt()`. With `curl_multi_setopt()`'s option `CURLMOPT_PIPELINING`, you can disable multiplexing for a specific multi handle. diff --git a/libcurl-http/ranges.md b/libcurl-http/ranges.md index a3f19b8ed9..05176d1810 100644 --- a/libcurl-http/ranges.md +++ b/libcurl-http/ranges.md @@ -1,19 +1,19 @@ # Ranges -What if the client only wants the first 200 bytes out of a remote -resource or perhaps 300 bytes somewhere in the middle? The HTTP protocol -allows a client to ask for only a specific data range. The client asks the -server for the specific range with a start offset and an end offset. 
It can even -combine things and ask for several ranges in the same request by just listing a -bunch of pieces next to each other. When a server sends back multiple -independent pieces to answer such a request, you will get them separated with -mime boundary strings and it will be up to the user application to handle that -accordingly. curl will not further separate such a response. +What if the client only wants the first 200 bytes out of a remote resource or +perhaps 300 bytes somewhere in the middle? The HTTP protocol allows a client +to ask for only a specific data range. The client asks the server for the +specific range with a start offset and an end offset. It can even combine +things and ask for several ranges in the same request by just listing a bunch +of pieces next to each other. When a server sends back multiple independent +pieces to answer such a request, you get them separated with mime boundary +strings and it is up to the user application to handle that accordingly. curl +does not further separate such a response. However, a byte range is only a request to the server. It does not have to respect the request and in many cases, like when the server automatically -generates the contents on the fly when it is being asked, it will simply refuse -to do it and it then instead respond with the full contents anyway. +generates the contents on the fly when it is being asked, it simply refuses to +do it and then instead responds with the full contents anyway. You can make libcurl ask for a range with `CURLOPT_RANGE`. Like if you want the first 200 bytes out of something: diff --git a/libcurl-http/requests.md b/libcurl-http/requests.md index cfe7e9fa05..d578f2a23c 100644 --- a/libcurl-http/requests.md +++ b/libcurl-http/requests.md @@ -15,10 +15,10 @@ esoteric ones like DELETE, PATCH and OPTIONS. Usually when you use libcurl to set up and perform a transfer the specific request method is implied by the options you use. If you just ask for a URL,
If you just ask for a URL, -it means the method will be `GET` while if you set for example -`CURLOPT_POSTFIELDS` that will make libcurl use the `POST` method. If you set -`CURLOPT_UPLOAD` to true, libcurl will send a `PUT` method in its HTTP request -and so on. Asking for `CURLOPT_NOBODY` will make libcurl use `HEAD`. +it means the method is `GET` while if you set for example `CURLOPT_POSTFIELDS` +that makes libcurl use the `POST` method. If you set `CURLOPT_UPLOAD` to true, +libcurl sends a `PUT` method in its HTTP request and so on. Asking for +`CURLOPT_NOBODY` makes libcurl use `HEAD`. However, sometimes those default HTTP methods are not good enough or simply not the ones you want your transfer to use. Then you can instruct libcurl to @@ -35,8 +35,8 @@ request headers, see the following section. ## Customize HTTP request headers When libcurl issues HTTP requests as part of performing the data transfers you -have asked it to, it will of course send them off with a set of HTTP headers -that are suitable for fulfilling the task given to it. +have asked it to, it sends them off with a set of HTTP headers that are +suitable for fulfilling the task given to it. If just given the URL `http://localhost/file1.txt`, libcurl sends the following request to the server: @@ -99,13 +99,13 @@ nothing to the right sight of the colon: ### Provide a header without contents As you may then have noticed in the above sections, if you try to add a header -with no contents on the right side of the colon, it will be treated as a -removal instruction and it will instead completely inhibit that header from -being sent. If you instead *truly* want to send a header with zero contents on -the right side, you need to use a special marker. You must provide the header -with a semicolon instead of a proper colon. Like `Header;`. 
If you want to add -a header to the outgoing HTTP request that is just `Moo:` with nothing -following the colon, you could write it like: +with no contents on the right side of the colon, it is treated as a removal +instruction and it instead completely inhibits that header from being sent. If +you instead *truly* want to send a header with zero contents on the right +side, you need to use a special marker. You must provide the header with a +semicolon instead of a proper colon. Like `Header;`. If you want to add a +header to the outgoing HTTP request that is just `Moo:` with nothing following +the colon, you could write it like: struct curl_slist *list = NULL; list = curl_slist_append(list, "Moo;"); diff --git a/libcurl-http/responses.md b/libcurl-http/responses.md index c366942d91..f1363fefcf 100644 --- a/libcurl-http/responses.md +++ b/libcurl-http/responses.md @@ -2,12 +2,11 @@ Every HTTP request includes an HTTP response. An HTTP response is a set of metadata and a response body, where the body can occasionally be zero bytes -and thus nonexistent. An HTTP response will however always have response -headers. +and thus nonexistent. An HTTP response however always has response headers. ## Response body -The response body will be passed to the [write callback](../libcurl/callbacks/write.md) +The response body is passed to the [write callback](../libcurl/callbacks/write.md) and the response headers to the [header callback](../libcurl/callbacks/header.md). Virtually all libcurl-using applications need to set at least one of those @@ -26,9 +25,9 @@ a response *as told by the server headers* can be extracted with curl_easy_getinfo(curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &size); If you can wait until after the transfer is already done, which also is a more -reliable way since not all URLs will provide the size up front (like for -example for servers that generate content on demand) you can instead ask for -the amount of downloaded data in the most recent transfer. 
+reliable way since not all URLs provide the size up front (like for example +for servers that generate content on demand) you can instead ask for the +amount of downloaded data in the most recent transfer. curl_off_t size; curl_easy_getinfo(curl, CURLINFO_SIZE_DOWNLOAD_T, &size); @@ -55,17 +54,17 @@ You can extract the response code after a transfer like this ## About HTTP response code "errors" -While the response code numbers can include numbers (in the 4xx and 5xx ranges) -which the server uses to signal that there was an error processing the request, -it is important to realize that this will not cause libcurl to return an +While the response code numbers can include numbers (in the 4xx and 5xx +ranges) which the server uses to signal that there was an error processing the +request, it is important to realize that this does not make libcurl return an error. -When libcurl is asked to perform an HTTP transfer it will return an error if -that HTTP transfer fails. However, getting an HTTP 404 or the like back is not -a problem for libcurl. It is not an HTTP transfer error. A user might be -writing a client for testing a server's HTTP responses. +When libcurl is asked to perform an HTTP transfer it returns an error if that +HTTP transfer fails. However, getting an HTTP 404 or the like back is not a +problem for libcurl. It is not an HTTP transfer error. A user might be writing +a client for testing a server's HTTP responses. If you insist on curl treating HTTP response codes from 400 and up as errors, libcurl offers the `CURLOPT_FAILONERROR` option that if set instructs curl to -return `CURLE_HTTP_RETURNED_ERROR` in this case. It will then return error as -soon as possible and not deliver the response body. +return `CURLE_HTTP_RETURNED_ERROR` in this case. It then returns error as soon +as possible and does not deliver the response body. 
diff --git a/libcurl-http/upload.md b/libcurl-http/upload.md index 2484678f38..8d2f03abe3 100644 --- a/libcurl-http/upload.md +++ b/libcurl-http/upload.md @@ -23,7 +23,7 @@ get the data by using the regular [read callback](../libcurl/callbacks/read.md): curl_easy_setopt(easy, CURLOPT_POST, 1L); curl_easy_setopt(easy, CURLOPT_READFUNCTION, read_callback); -This "normal" POST will also set the request header `Content-Type: +This "normal" POST also sets the request header `Content-Type: application/x-www-form-urlencoded`. ## HTTP multipart formposts @@ -59,11 +59,10 @@ longer recommend using that) ## HTTP PUT -A PUT with libcurl will assume you pass the data to it using the read -callback, as that is the typical "file upload" pattern libcurl uses and -provides. You set the callback, you ask for PUT (by asking for -`CURLOPT_UPLOAD`), you set the size of the upload and you set the URL to the -destination: +A PUT with libcurl assumes you pass the data to it using the read callback, as +that is the typical "file upload" pattern libcurl uses and provides. You set +the callback, you ask for PUT (by asking for `CURLOPT_UPLOAD`), you set the +size of the upload and you set the URL to the destination: curl_easy_setopt(easy, CURLOPT_UPLOAD, 1L); curl_easy_setopt(easy, CURLOPT_INFILESIZE_LARGE, (curl_off_t) size); @@ -78,7 +77,7 @@ header is needed. ## Expect: headers -When doing HTTP uploads using HTTP 1.1, libcurl will insert an `Expect: +When doing HTTP uploads using HTTP 1.1, libcurl inserts an `Expect: 100-continue` header in some circumstances. This header offers the server a way to reject the transfer early and save the client from having to send a lot of data in vain before the server gets a chance to decline. 
diff --git a/libcurl-http/versions.md b/libcurl-http/versions.md
index 432cb72122..5467edb214 100644
--- a/libcurl-http/versions.md
+++ b/libcurl-http/versions.md
@@ -32,8 +32,8 @@ If the default behavior is not good enough for your transfer, the
 ## Version 2 not mandatory
 
 When asking libcurl to use HTTP/2, it is an ask not a requirement. libcurl
-will then allow the server to select to use HTTP/1.1 or HTTP/2 and that is
-what decides which protocol that is ultimately used.
+then allows the server to select to use HTTP/1.1 or HTTP/2 and that is what
+decides which protocol is ultimately used.
 
 ## Version 3 can be mandatory
 
@@ -43,4 +43,4 @@ so that if the HTTP/3 connection fails, it can still try and use an older HTTP
 version.
 
 Using `CURL_HTTP_VERSION_3ONLY` means that the fallback mechanism is not used
-and a failed QUIC connection will fail the transfer completely.
+and a failed QUIC connection fails the transfer completely.
diff --git a/libcurl.md b/libcurl.md
index 8ee0604ffd..9103be6cdf 100644
--- a/libcurl.md
+++ b/libcurl.md
@@ -22,13 +22,13 @@ information as you can and want, and then you tell libcurl to perform that
 transfer.
 
 That said, networking and protocols are areas with lots of pitfalls and
-special cases so the more you know about these things, the more you will be
-able to understand about libcurl's options and ways of working. Not to
-mention, such knowledge is invaluable when you are debugging and need to
-understand what to do next when things do not go as you intended.
+special cases so the more you know about these things, the more you are able
+to understand about libcurl's options and ways of working. Not to mention,
+such knowledge is invaluable when you are debugging and need to understand
+what to do next when things do not go as you intended.
 
 The most basic libcurl using application can be as small as just a couple of
-lines of code, but most applications will, of course, need more code than that. 
+lines of code, but most applications do, of course, need more code than that. ## Simple by default, more on demand @@ -37,6 +37,6 @@ want to add more advanced features, you add that by setting the correct options. For example, libcurl does not support HTTP cookies by default but it does once you tell it. -This makes libcurl's behaviors easier to guess and depend on, and also it makes -it easier to maintain old behavior and add new features. Only applications -that actually ask for and use the new features will get that behavior. +This makes libcurl's behaviors easier to guess and depend on, and also it +makes it easier to maintain old behavior and add new features. Only +applications that actually ask for and use the new features get that behavior. diff --git a/libcurl/--libcurl.md b/libcurl/--libcurl.md index 1d7b8fef57..58bc52c8a2 100644 --- a/libcurl/--libcurl.md +++ b/libcurl/--libcurl.md @@ -5,15 +5,15 @@ the curl command-line tool, and once it works roughly the way you want it to, you append the `--libcurl [filename]` option to the command line and run it again. -The `--libcurl` command-line option will create a C program in the provided -file name. That C program is an application that uses libcurl to run the -transfer you just had the curl command-line tool do. There are some exceptions -and it is not always a 100% match, but you will find that it can serve as an +The `--libcurl` command-line option creates a C program in the provided file +name. That C program is an application that uses libcurl to run the transfer +you just had the curl command-line tool do. There are some exceptions and it +is not always a 100% match, but you might find that it can serve as an excellent inspiration source for what libcurl options you want or can use and what additional arguments to provide to them. -If you specify the filename as a single dash, as in `--libcurl -` you will get -the program written to stdout instead of a file. 
+If you specify the filename as a single dash, as in `--libcurl -` you get the
+program written to stdout instead of a file.
 
 As an example, we run a command to get `http://example.com`:
diff --git a/libcurl/api.md b/libcurl/api.md
index 4cb86e12f5..603be525e1 100644
--- a/libcurl/api.md
+++ b/libcurl/api.md
@@ -1,7 +1,7 @@
 # API compatibility
 
 libcurl promises API stability and guarantees that your program written today
-will remain working in the future. We do not break compatibility.
+remains working in the future. We do not break compatibility.
 
 Over time, we add features, new options and new functions to the APIs but we
 do not change behavior in a non-compatible way or remove functions.
@@ -24,8 +24,8 @@ The version numbering is always built up using the same system:
 
 ## Bumping numbers
 
-One of these X.Y.Z numbers will get bumped in every new release. The numbers to
-the right of a bumped number will be reset to zero.
+One of these X.Y.Z numbers gets bumped in every new release. The numbers to
+the right of a bumped number are reset to zero.
 
 The main version number X is bumped when *really* big, world colliding changes
 are made. The release number Y is bumped when changes are performed or
@@ -67,8 +67,8 @@ This number is also available as three separate defines:
 `LIBCURL_VERSION_MAJOR`, `LIBCURL_VERSION_MINOR` and `LIBCURL_VERSION_PATCH`.
 
 These defines are, of course, only suitable to figure out the version number
-built *just now* and they will not help you figuring out which libcurl version
-that is used at runtime three years from now.
+built *just now* and do not help you figure out which libcurl version is used
+at runtime three years from now.
## Which libcurl version runs
 
@@ -91,8 +91,8 @@ You call the function like this:
 
     curl_version_info_data *version = curl_version_info( CURLVERSION_NOW );
 
-The data will then be pointing at struct that has or at least can have the
-following layout:
+The data then points to a struct that has or at least can have the following
+layout:
 
     struct {
       CURLversion age;   /* see description below */
diff --git a/libcurl/caches.md b/libcurl/caches.md
index 05e202ec14..593a7636b6 100644
--- a/libcurl/caches.md
+++ b/libcurl/caches.md
@@ -13,8 +13,8 @@ You can instruct libcurl to share some of the caches with the
 ## DNS cache
 
 When libcurl resolves a host name to one or more IP addresses, that is stored
-in the DNS cache so that subsequent transfers in the near term will not have
-to redo the same resolve again. A name resolve can easily take several hundred
+in the DNS cache so that subsequent transfers in the near term do not have to
+redo the same resolve again. A name resolve can easily take several hundred
 milliseconds and sometimes even much longer.
 
 By default, each such host name is stored in the cache for 60 seconds
@@ -37,11 +37,11 @@ A reused connection usually saves having to a DNS lookup, setting up a TCP
 connection, do a TLS handshake and more.
 
 Connections are only reused if the name is identical. Even if two different
-host names resolve to the same IP addresses, they will still always use two
+host names resolve to the same IP addresses, they still always use two
 separate connections with libcurl.
 
 Since the connection reuse is based on the host name and the DNS resolve phase
-is entirely skipped when a connection is reused for a transfer, libcurl will
+is entirely skipped when a connection is reused for a transfer, libcurl does
 not know the current state of the host name in DNS as it can in fact change IP
 over time while the connection might survive and continue to get reused over
 the original IP address. 
diff --git a/libcurl/callbacks.md b/libcurl/callbacks.md index f6fec1e3e9..3dc95f6374 100644 --- a/libcurl/callbacks.md +++ b/libcurl/callbacks.md @@ -6,8 +6,8 @@ at some point to get a particular job done. Each callback has its specific documented purpose and it requires that you write it with the exact function prototype to accept the correct arguments and -return the documented return code and return value so that libcurl will -perform the way you want it to. +return the documented return code and return value so that libcurl performs +the way you want it to. Each callback option also has a companion option that sets the associated "user pointer". This user pointer is a pointer that libcurl does not touch or diff --git a/libcurl/callbacks/debug.md b/libcurl/callbacks/debug.md index b58fc2bcc1..f34741abd2 100644 --- a/libcurl/callbacks/debug.md +++ b/libcurl/callbacks/debug.md @@ -13,10 +13,9 @@ The `debug_callback` function must match this prototype: void *userdata); This callback function replaces the default verbose output function in the -library and will get called for all debug and trace messages to aid -applications to understand what's going on. The *type* argument explains what -sort of data that is provided: header, data or SSL data and in which direction -it flows. +library and gets called for all debug and trace messages to aid applications +to understand what's going on. The *type* argument explains what sort of data +that is provided: header, data or SSL data and in which direction it flows. A common use for this callback is to get a full trace of all data that libcurl sends and receives. The data sent to this callback is always the unencrypted diff --git a/libcurl/callbacks/header.md b/libcurl/callbacks/header.md index f2b146cb3c..d713784916 100644 --- a/libcurl/callbacks/header.md +++ b/libcurl/callbacks/header.md @@ -13,14 +13,14 @@ received. *ptr* points to the delivered data, and the size of that data is *size* multiplied with *nmemb*. 
libcurl buffers headers and delivers only "full" headers, one by one, to this
 callback.
 
-The data passed to this function will not be zero terminated! You cannot, for
+The data passed to this function is not zero terminated! You cannot, for
 example, use printf's `%s` operator to display the contents nor strcpy to copy
 it.
 
 This callback should return the number of bytes actually taken care of. If
 that number differs from the number passed to your callback function, it
-signals an error condition to the library. This will cause the transfer to
-abort and the libcurl function used will return `CURLE_WRITE_ERROR`.
+signals an error condition to the library. This causes the transfer to abort
+and the libcurl function used returns `CURLE_WRITE_ERROR`.
 
 The user pointer passed in to the callback in the *userdata* argument is set
 with `CURLOPT_HEADERDATA`:
diff --git a/libcurl/callbacks/openclosesocket.md b/libcurl/callbacks/openclosesocket.md
index b4c8d11256..b99c480003 100644
--- a/libcurl/callbacks/openclosesocket.md
+++ b/libcurl/callbacks/openclosesocket.md
@@ -1,7 +1,7 @@
 # Opensocket and closesocket
 
 Occasionally you end up in a situation where you want your application to
-control with more precision exactly what socket libcurl will use for its
+control with more precision exactly what socket libcurl uses for its
 operations. libcurl offers this pair of callbacks that replaces libcurl's own
 call to `socket()` and the subsequent `close()` of the same file descriptor.
 
@@ -39,7 +39,7 @@ address in that struct, if you would like to offer some sort of network filter
 or translation layer.
 
 The callback should return a file descriptor or `CURL_SOCKET_BAD`, which then
-will cause an unrecoverable error within libcurl and it will eventually return
+causes an unrecoverable error within libcurl and it eventually returns
 `CURLE_COULDNT_CONNECT` from its perform function. 
If you want to return a file descriptor that is *already connected* to a
diff --git a/libcurl/callbacks/prereq.md b/libcurl/callbacks/prereq.md
index 535a938db9..a7255512c4 100644
--- a/libcurl/callbacks/prereq.md
+++ b/libcurl/callbacks/prereq.md
@@ -3,7 +3,7 @@
 "Prereq" here means immediately before the request is issued. That's the
 moment where this callback is called.
 
-Set the function with `CURLOPT_PREREQFUNCTION` and it will be called and pass
-on IP address and port numbers in the arguments. This allows the application
-to know about the transfer just before it starts and also allows it to cancel
-this particular transfer should it want to.
+Set the function with `CURLOPT_PREREQFUNCTION` and it gets called with the
+used IP address and port numbers in the arguments. This allows the
+application to know about the transfer just before it starts and also allows
+it to cancel this particular transfer should it want to.
diff --git a/libcurl/callbacks/progress.md b/libcurl/callbacks/progress.md
index 95b7bde9c0..d762fb9813 100644
--- a/libcurl/callbacks/progress.md
+++ b/libcurl/callbacks/progress.md
@@ -14,8 +14,8 @@ The `xfer_callback` function must match this prototype:
 
 If this option is set and `CURLOPT_NOPROGRESS` is set to 0 (zero), this
 callback function gets called by libcurl with a frequent interval. While data
-is being transferred it will be called frequently, and during slow periods
-like when nothing is being transferred it can slow down to about one call per
+is being transferred it gets called frequently, and during slow periods like
+when nothing is being transferred it can slow down to about one call per
 second. 
The **clientp** pointer points to the private data set with @@ -23,7 +23,7 @@ The **clientp** pointer points to the private data set with curl_easy_setopt(handle, CURLOPT_XFERINFODATA, custom_pointer); -The callback gets told how much data libcurl will transfer and has +The callback gets told how much data libcurl is about to transfer and has transferred, in number of bytes: - `dltotal` is the total number of bytes libcurl expects to download in @@ -33,17 +33,17 @@ transferred, in number of bytes: transfer. - `ulnow` is the number of bytes uploaded so far. -Unknown/unused argument values passed to the callback will be set to zero -(like if you only download data, the upload size will remain 0). Many times -the callback will be called one or more times first, before it knows the data -sizes, so a program must be made to handle that. +Unknown/unused argument values passed to the callback are set to zero (like if +you only download data, the upload size remains zero). Many times the callback +is called one or more times first, before it knows the data sizes, so a +program must be made to handle that. -Returning a non-zero value from this callback will cause libcurl to abort the +Returning a non-zero value from this callback causes libcurl to abort the transfer and return `CURLE_ABORTED_BY_CALLBACK`. -If you transfer data with the multi interface, this function will not be -called during periods of idleness unless you call the appropriate libcurl -function that performs transfers. +If you transfer data with the multi interface, this function is not called +during periods of idleness unless you call the appropriate libcurl function +that performs transfers. (The deprecated callback `CURLOPT_PROGRESSFUNCTION` worked identically but instead of taking arguments of type `curl_off_t`, it used `double`.) 
diff --git a/libcurl/callbacks/read.md b/libcurl/callbacks/read.md index ac7091aece..9f7b35914e 100644 --- a/libcurl/callbacks/read.md +++ b/libcurl/callbacks/read.md @@ -10,8 +10,8 @@ The `read_callback` function must match this prototype: This callback function gets called by libcurl when it wants to send data to the server. This is a transfer that you have set up to upload data or -otherwise send it off to the server. This callback will be called over and -over until all data has been delivered or the transfer failed. +otherwise send it off to the server. This callback is called over and over +until all data has been delivered or the transfer failed. The **stream** pointer points to the private data set with `CURLOPT_READDATA`: diff --git a/libcurl/callbacks/rtsp.md b/libcurl/callbacks/rtsp.md index 51d73b62ac..b645ccf2dc 100644 --- a/libcurl/callbacks/rtsp.md +++ b/libcurl/callbacks/rtsp.md @@ -9,8 +9,8 @@ packet). libcurl writes the interleaved header as well as the included data for each call. The first byte is always an ASCII dollar sign. The dollar sign is followed by a one byte channel identifier and then a 2 byte integer length in network byte order. See RFC2326 Section 10.12 for more information on how -RTP interleaving behaves. If unset or set to NULL, curl will use the default -write function. +RTP interleaving behaves. If unset or set to NULL, curl uses the default write +function. The `CURLOPT_INTERLEAVEDATA` pointer is passed in the userdata argument in the callback. diff --git a/libcurl/callbacks/sshkey.md b/libcurl/callbacks/sshkey.md index b728365724..af05c41414 100644 --- a/libcurl/callbacks/sshkey.md +++ b/libcurl/callbacks/sshkey.md @@ -3,32 +3,32 @@ This callback is set with `CURLOPT_SSH_KEYFUNCTION`. It gets called when the `known_host` matching has been done, to allow the -application to act and decide for libcurl how to proceed. The callback will -only be called if `CURLOPT_SSH_KNOWNHOSTS` is also set. 
+application to act and decide for libcurl how to proceed. The callback is
+only called if `CURLOPT_SSH_KNOWNHOSTS` is also set.
 
 In the arguments to the callback are the old key and the new key and the
 callback is expected to return a return code that tells libcurl how to act:
 
-`CURLKHSTAT_FINE_REPLACE` - The new host+key is accepted and libcurl will
-replace the old host+key into the known_hosts file before continuing with the
-connection. This will also add the new host+key combo to the known_host pool
-kept in memory if it was not already present there. The adding of data to the
-file is done by completely replacing the file with a new copy, so the
-permissions of the file must allow this.
-
-`CURLKHSTAT_FINE_ADD_TO_FILE` - The host+key is accepted and libcurl will
-append it to the known_hosts file before continuing with the connection. This
-will also add the host+key combo to the known_host pool kept in memory if it
-was not already present there. The adding of data to the file is done by
-completely replacing the file with a new copy, so the permissions of the file
-must allow this.
-
-`CURLKHSTAT_FINE` - The host+key is accepted libcurl will continue with the
-connection. This will also add the host+key combo to the known_host pool kept
-in memory if it was not already present there.
-
-`CURLKHSTAT_REJECT` - The host+key is rejected. libcurl will deny the
-connection to continue and it will be closed.
+`CURLKHSTAT_FINE_REPLACE` - The new host+key is accepted and libcurl replaces
+the old host+key into the known_hosts file before continuing with the
+connection. This also adds the new host+key combo to the known_host pool kept
+in memory if it was not already present there. The adding of data to the file
+is done by completely replacing the file with a new copy, so the permissions
+of the file must allow this.
+
+`CURLKHSTAT_FINE_ADD_TO_FILE` - The host+key is accepted and libcurl appends
+it to the known_hosts file before continuing with the connection. 
This also
+adds the host+key combo to the known_host pool kept in memory if it was not
+already present there. The adding of data to the file is done by completely
+replacing the file with a new copy, so the permissions of the file must allow
+this.
+
+`CURLKHSTAT_FINE` - The host+key is accepted and libcurl continues with the
+connection. This also adds the host+key combo to the known_host pool kept in
+memory if it was not already present there.
+
+`CURLKHSTAT_REJECT` - The host+key is rejected. libcurl denies the connection
+from continuing and it is closed.
 
 `CURLKHSTAT_DEFER` - The host+key is rejected, but the SSH connection is asked
 to be kept alive. This feature could be used when the app wants to somehow
diff --git a/libcurl/callbacks/sslcontext.md b/libcurl/callbacks/sslcontext.md
index 201ab51c98..52396cbc57 100644
--- a/libcurl/callbacks/sslcontext.md
+++ b/libcurl/callbacks/sslcontext.md
@@ -11,10 +11,10 @@ chance to an application to modify the behavior of the TLS initialization. The
 `ssl_ctx parameter` passed to the callback in the second argument is actually
 a pointer to the SSL library's `SSL_CTX` for OpenSSL or wolfSSL, and a pointer
 to `mbedtls_ssl_config` for mbedTLS. If an error is returned from the callback
-no attempt to establish a connection is made and the operation will return the
+no attempt to establish a connection is made and the operation returns the
 callback's error code. Set the `userptr` argument with the
 `CURLOPT_SSL_CTX_DATA` option.
 
 This function gets called on all new connections made to a server, during the
-TLS negotiation. The TLS context will point to a newly initialized object each
+TLS negotiation. The TLS context points to a newly initialized object each
 time.
diff --git a/libcurl/callbacks/trailers.md b/libcurl/callbacks/trailers.md
index d61a79d17e..f06c881b62 100644
--- a/libcurl/callbacks/trailers.md
+++ b/libcurl/callbacks/trailers.md
@@ -5,6 +5,6 @@
 a transfer*. 
This callback is used for when you want to send trailers with curl after an upload has been performed. An upload in the form of a chunked encoded POST. -The callback set with `CURLOPT_TRAILERFUNCTION` will be called and the -function can then append headers to a list. One or many. When done, libcurl -sends off those as trailers to the server. +The callback set with `CURLOPT_TRAILERFUNCTION` is called and the function can +then append headers to a list. One or many. When done, libcurl sends off those +as trailers to the server. diff --git a/libcurl/callbacks/write.md b/libcurl/callbacks/write.md index 7617a7bfcf..eda4f0fb86 100644 --- a/libcurl/callbacks/write.md +++ b/libcurl/callbacks/write.md @@ -14,25 +14,24 @@ size of that data is *size* multiplied with *nmemb*. If this callback is not set, libcurl instead uses 'fwrite' by default. -The write callback will be passed as much data as possible in all invokes, but -it must not make any assumptions. It may be one byte, it may be thousands. -The maximum amount of body data that will be passed to the write callback is -defined in the curl.h header file: `CURL_MAX_WRITE_SIZE` (the usual default is -16KB). If `CURLOPT_HEADER` is enabled for this transfer, which makes header -data get passed to the write callback, you can get up to -`CURL_MAX_HTTP_HEADER` bytes of header data passed into it. This usually means -100KB. +The write callback is passed as much data as possible in all invokes, but it +must not make any assumptions. It may be one byte, it may be thousands. The +maximum amount of body data that is passed to the write callback is defined in +the curl.h header file: `CURL_MAX_WRITE_SIZE` (the usual default is 16KB). If +`CURLOPT_HEADER` is enabled for this transfer, which makes header data get +passed to the write callback, you can get up to `CURL_MAX_HTTP_HEADER` bytes +of header data passed into it. This usually means 100KB. This function may be called with zero bytes data if the transferred file is empty. 
-The data passed to this function will not be zero terminated. You cannot, for
+The data passed to this function is not zero terminated. You cannot, for
 example, use printf's `%s` operator to display the contents nor strcpy to copy
 it.
 
 This callback should return the number of bytes actually taken care of. If
-that number differs from the number passed to your callback function, it will
-signal an error condition to the library. This will cause the transfer to get
-aborted and the libcurl function used will return `CURLE_WRITE_ERROR`.
+that number differs from the number passed to your callback function, it
+signals an error condition to the library. This causes the transfer to get
+aborted and the libcurl function used returns `CURLE_WRITE_ERROR`.
 
 The user pointer passed in to the callback in the *userdata* argument is set
 with `CURLOPT_WRITEDATA`:
diff --git a/libcurl/cleanup.md b/libcurl/cleanup.md
index 2dfe401896..e74b374a0b 100644
--- a/libcurl/cleanup.md
+++ b/libcurl/cleanup.md
@@ -1,8 +1,8 @@
 # Cleanup
 
 In previous sections we have discussed how to setup handles and how to drive
-the transfers. All transfers will, of course, end up at some point, either
-successfully or with a failure.
+the transfers. All transfers end at some point, either successfully or with
+a failure.
 
 ## Multi API
 
diff --git a/libcurl/conn/names.md b/libcurl/conn/names.md
index 2fb878cb34..bcf984e192 100644
--- a/libcurl/conn/names.md
+++ b/libcurl/conn/names.md
@@ -5,12 +5,11 @@ translated to an Internet address. That is "name resolving". Using a numerical
 IP address directly in the URL usually avoids the name resolve phase, but in
 many cases it is not easy to manually replace the name with the IP address.
 
-libcurl tries hard to [re-use an existing connection](reuse.md)
-rather than to create a new one. The function that checks for an existing
-connection to use is based purely on the name and is performed before any name
-resolving is attempted. 
That is one of the reasons the re-use is so much
-faster. A transfer using a reused connection will not resolve the host name
-again.
+libcurl tries hard to [re-use an existing connection](reuse.md) rather than to
+create a new one. The function that checks for an existing connection to use
+is based purely on the name and is performed before any name resolving is
+attempted. That is one of the reasons the re-use is so much faster. A transfer
+using a reused connection does not resolve the host name again.

If no connection can be reused, libcurl resolves the host name to the set of
addresses it resolves to. Typically this means asking for both IPv4 and IPv6
@@ -31,7 +30,7 @@ different feature set and sometimes modified behavior.

1. The default backend is invoking the "normal" libc resolver functions in a
new helper-thread, so that it can still do fine-grained timeouts if wanted and
-there will be no blocking calls involved.
+there are no blocking calls involved.

2. On older systems, libcurl uses the standard synchronous name resolver
functions. They unfortunately make all transfers within a multi handle block
@@ -46,8 +45,8 @@ compatible with the native name resolver functionality.

Independently of what resolver backend that libcurl is built to use, since
7.62.0 it also provides a way for the user to ask a specific DoH (DNS over
-HTTPS) server for the address of a name. This will avoid using the normal,
-native resolver method and server and instead asks a dedicated separate one.
+HTTPS) server for the address of a name. This avoids using the normal, native
+resolver method and server and instead asks a dedicated separate one.

A DoH server is specified as a full URL with the `CURLOPT_DOH_URL` option like
this:

@@ -60,11 +59,10 @@ multiple DoH requests multiplexed over the connection to the DoH server.
## Caching

-When a name has been resolved, the result will be put in libcurl's in-memory
-cache so that subsequent resolves of the same name will be near instant for as
-long the name is kept in the DNS cache. By default, each entry is kept in the
-cache for 60 seconds, but that value can be changed with
-`CURLOPT_DNS_CACHE_TIMEOUT`.
+When a name has been resolved, the result is stored in libcurl's in-memory
+cache so that subsequent resolves of the same name are near instant for as
+long as the name is kept in the DNS cache. By default, each entry is kept in
+the cache for 60 seconds, but that value can be changed with
+`CURLOPT_DNS_CACHE_TIMEOUT`.

The DNS cache is kept within the easy handle when `curl_easy_perform` is used,
or within the multi handle when the multi interface is used. It can also be
@@ -73,8 +71,8 @@ made shared between multiple easy handles using the [share interface](../sharing

## Custom addresses for hosts

Sometimes it is handy to provide "fake" addresses to real host names so that
-libcurl will connect to a different address instead of one an actual name
-resolve would suggest.
+libcurl connects to a different address instead of the one an actual name
+resolve would suggest.

With the help of the
[CURLOPT_RESOLVE](https://curl.se/libcurl/c/CURLOPT_RESOLVE.html) option,
@@ -88,7 +86,7 @@ requested, an application can do:

 dns = curl_slist_append(NULL, "example.com:443:127.0.0.1");
 curl_easy_setopt(curl, CURLOPT_RESOLVE, dns);

-Since this puts the "fake" address into the DNS cache, it will work even when
+Since this puts the "fake" address into the DNS cache, it works even when
following redirects etc.

## Name server options
diff --git a/libcurl/conn/proxies.md b/libcurl/conn/proxies.md
index 3dc382ad6e..afb14932da 100644
--- a/libcurl/conn/proxies.md
+++ b/libcurl/conn/proxies.md
@@ -14,8 +14,8 @@ the proxy is not necessarily the same protocol used to the remote server.
When setting up a transfer with libcurl you need to point out the server name
and port number of the proxy. You may find that your favorite browsers can do
-this in slightly more advanced ways than libcurl can, and we will get into
-such details in later sections.
+this in slightly more advanced ways than libcurl can, and we get into such
+details in later sections.

## Proxy types

@@ -75,9 +75,9 @@ other means but none of those are recognized by libcurl.

### Proxy environment variables

-If no proxy option has been set, libcurl will check for the existence of
-specially named environment variables before it performs its transfer to see
-if a proxy is requested to get used.
+If no proxy option has been set, libcurl checks for the existence of specially
+named environment variables before it performs its transfer to see if a proxy
+should be used.

You can specify the proxy by setting a variable named `[scheme]_proxy` to hold
the proxy host name (the same way you would specify the host with `-x`). If
@@ -92,7 +92,7 @@ these proxy environment variable names except http_proxy can also be specified
in uppercase, like `HTTPS_PROXY`.

To set a single variable that controls *all* protocols, the `ALL_PROXY`
-exists. If a specific protocol variable one exists, such a one will take
+exists. If a protocol-specific variable exists, it takes
precedence.

When using environment variables to set a proxy, you could easily end up in a
@@ -109,11 +109,10 @@ sending the request to the actual remote server, the client (libcurl) instead
asks the proxy for the specific resource. The connection to the HTTP proxy is
made using plain unencrypted HTTP.

-If an HTTPS resource is requested, libcurl will instead issue a `CONNECT`
-request to the proxy. Such a request opens a tunnel through the proxy, where
-it passes data through without understanding it. This way, libcurl can
-establish a secure end-to-end TLS connection even when an HTTP proxy is
-present.
+If an HTTPS resource is requested, libcurl instead issues a `CONNECT` request
+to the proxy. Such a request opens a tunnel through the proxy, where it passes
+data through without understanding it. This way, libcurl can establish a
+secure end-to-end TLS connection even when an HTTP proxy is present.

You *can* proxy non-HTTP protocols over an HTTP proxy, but since this is
mostly done by the CONNECT method to tunnel data through it requires that the
@@ -126,9 +125,9 @@ other port numbers than 80 and 443.

An HTTPS proxy is similar to an HTTP proxy but allows the client to connect to
it using a secure HTTPS connection. Since the proxy connection is separate
from the connection to the remote site even in this situation, as HTTPS to the
-remote site will be tunneled through the HTTPS connection to the proxy,
-libcurl provides a whole set of TLS options for the proxy connection that are
-separate from the connection to the remote host.
+remote site is tunneled through the HTTPS connection to the proxy, libcurl
+provides a whole set of TLS options for the proxy connection that are separate
+from the connection to the remote host.

For example, `CURLOPT_PROXY_CAINFO` is the same
functionality for the HTTPS proxy as `CURLOPT_CAINFO` is for the remote
@@ -150,7 +149,7 @@ use - unless you set it within the `CURLOPT_PROXY` string.

## HTTP Proxy headers

-With an HTTP or HTTP proxy, libcurl will issue a request to the proxy that
+With an HTTP or HTTPS proxy, libcurl issues a request to the proxy that
 includes a set of headers. An application can of course modify the headers,
 just like for requests sent to servers.
diff --git a/libcurl/conn/reuse.md b/libcurl/conn/reuse.md
index 6ed7e3757d..0977104173 100644
--- a/libcurl/conn/reuse.md
+++ b/libcurl/conn/reuse.md
@@ -1,25 +1,25 @@
# Connection reuse

libcurl keeps a pool of old connections alive.
When one transfer has completed -it will keep N connections alive in a "connection pool" (sometimes also called +it keeps N connections alive in a "connection pool" (sometimes also called connection cache) so that a subsequent transfer that happens to be able to reuse one of the existing connections can use it instead of creating a new one. Reusing a connection instead of creating a new one offers significant benefits in speed and required resources. When libcurl is about to make a new connection for the purposes of doing a -transfer, it will first check to see if there is an existing connection in the +transfer, it first checks to see if there is an existing connection in the pool that it can reuse instead. The connection re-use check is done before any DNS or other name resolving mechanism is used, so it is purely host name -based. If there is an existing live connection to the right host name, a lot of -other properties (port number, protocol, etc) are also checked to see that it -can be used. +based. If there is an existing live connection to the right host name, a lot +of other properties (port number, protocol, etc) are also checked to see that +it can be used. ## Easy API pool When you are using the easy API, or, more specifically, `curl_easy_perform()`, -libcurl will keep the pool associated with the specific easy handle. Then -reusing the same easy handle will ensure it can reuse its connection. +libcurl keeps the pool associated with the specific easy handle. Then reusing +the same easy handle ensures libcurl can reuse its connection. ## Multi API pool diff --git a/libcurl/control/stop.md b/libcurl/control/stop.md index 6ce322da37..bdcd1ee033 100644 --- a/libcurl/control/stop.md +++ b/libcurl/control/stop.md @@ -12,7 +12,7 @@ stop. ## easy API As explained elsewhere, the `curl_easy_perform()` function is a synchronous -function call. It will do the entire transfer before it returns. +function call. It does the entire transfer before it returns. 
There are a few different ways to stop a transfer before it would otherwise
end:
diff --git a/libcurl/curlcode.md b/libcurl/curlcode.md
index fcfbc3c80d..78e357a160 100644
--- a/libcurl/curlcode.md
+++ b/libcurl/curlcode.md
@@ -17,8 +17,7 @@ user:

Another way to get a slightly better error text in case of errors is to set
the `CURLOPT_ERRORBUFFER` option to point out a buffer in your program and
-then libcurl will store a related error message there before it returns an
-error:
+then libcurl stores a related error message there before it returns an error:

 char error[CURL_ERROR_SIZE]; /* needs to be at least this big */
 CURLcode ret = curl_easy_setopt(handle, CURLOPT_ERRORBUFFER, error);
diff --git a/libcurl/drive.md b/libcurl/drive.md
index e19a42f87b..315f6d62f9 100644
--- a/libcurl/drive.md
+++ b/libcurl/drive.md
@@ -4,8 +4,8 @@ libcurl provides three different ways to perform the transfer. Which way to use
in your case is entirely up to you and what you need.

1. The 'easy' interface lets you do a single transfer in a synchronous
-fashion. libcurl will do the entire transfer and return control back to
-your application when it is completed—successful or failed.
+fashion. libcurl does the entire transfer and returns control back to your
+application when it is completed—successful or failed.

2. The 'multi' interface is for when you want to do more than one transfer at
the same time, or you just want a non-blocking transfer.
diff --git a/libcurl/drive/multi-socket.md b/libcurl/drive/multi-socket.md
index 56c130122b..ca3b23908f 100644
--- a/libcurl/drive/multi-socket.md
+++ b/libcurl/drive/multi-socket.md
@@ -68,10 +68,10 @@ application needs to implement such a function:

 /* set the callback in the multi handle */
 curl_multi_setopt(multi_handle, CURLMOPT_SOCKETFUNCTION, socket_callback);

-Using this, libcurl will set and remove sockets your application should
+Using this, libcurl sets and removes sockets your application should
Your application tells the underlying event-based system to wait for -the sockets. This callback will be called multiple times if there are multiple -sockets to wait for, and it will be called again when the status changes and +the sockets. This callback is called multiple times if there are multiple +sockets to wait for, and it is called again when the status changes and perhaps you should switch from waiting for a writable socket to instead wait for it to become readable. @@ -89,10 +89,10 @@ registered: ### timer_callback -The application is in control and will wait for socket activity. But even -without socket activity there will be things libcurl needs to do. Timeout -things, calling the progress callback, starting over a retry or failing a transfer that -takes too long, etc. To make that work, the application must also make sure to +The application is in control and waits for socket activity. But even without +socket activity there are things libcurl needs to do. Timeout things, calling +the progress callback, starting over a retry or failing a transfer that takes +too long, etc. To make that work, the application must also make sure to handle a single-shot timeout that libcurl sets. libcurl sets the timeout with the timer_callback @@ -110,18 +110,18 @@ libcurl sets the timeout with the timer_callback There is only one timeout for the application to handle for the entire multi handle, no matter how many individual easy handles that have been added or -transfers that are in progress. The timer callback will be updated with the +transfers that are in progress. The timer callback gets updated with the current nearest-in-time period to wait. If libcurl gets called before the -timeout expiry time because of socket activity, it may update the -timeout value again before it expires. +timeout expiry time because of socket activity, it may update the timeout +value again before it expires. 
When the event system of your choice eventually tells you that the timer has
expired, you need to tell libcurl about it:

 curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);

-…in many cases, this will make libcurl call the timer_callback again and
-set a new timeout for the next expiry period.
+…in many cases, this makes libcurl call the timer_callback again and set a new
+timeout for the next expiry period.

### How to start everything

@@ -130,8 +130,8 @@ socket and timer callbacks in the multi handle, you are ready to start the
transfer.

To kick it all off, you tell libcurl it timed out (because all easy handles
-start out with a short timeout) which will make libcurl call the callbacks to
-set things up and from then on you can just let your event system drive:
+start out with a short timeout) which makes libcurl call the callbacks to set
+things up and from then on you can just let your event system drive:

 /* all easy handles and callbacks are setup */

@@ -150,5 +150,5 @@ The 'running_handles' counter returned by `curl_multi_socket_action` holds the
number of current transfers not completed. When that number reaches zero, we
know there are no transfers going on.

-Each time the 'running_handles' counter changes, `curl_multi_info_read()` will
-return info about the specific transfers that completed.
+Each time the 'running_handles' counter changes, `curl_multi_info_read()`
+returns info about the specific transfers that completed.
diff --git a/libcurl/drive/multi.md b/libcurl/drive/multi.md
index 4ae076611b..310359106e 100644
--- a/libcurl/drive/multi.md
+++ b/libcurl/drive/multi.md
@@ -18,8 +18,8 @@ set there.

To drive a multi interface transfer, you first need to add all the individual
easy handles that should be transferred to the multi handle. You can add them
to the multi handle at any point and you can remove them again whenever you
-like.
Removing an easy handle from a multi handle will, of course, remove the -association and that particular transfer would stop immediately. +like. Removing an easy handle from a multi handle removes the association and +that particular transfer stops immediately. Adding an easy handle to the multi handle is easy: @@ -43,8 +43,8 @@ like this: (*note that a real application would check return codes*) } while (transfers_running); The fourth argument to `curl_multi_wait`, set to 1000 in the example above, is -a timeout in milliseconds. It is the longest time the function will wait for -any activity before it returns anyway. You do not want to lock up for too long +a timeout in milliseconds. It is the longest time the function waits for any +activity before it returns anyway. You do not want to lock up for too long before calling `curl_multi_perform` again as there are timeouts, progress callbacks and more that may lose precision if you do so. @@ -90,17 +90,16 @@ Both these loops let you use one or more file descriptors of your own on which to wait, like if you read from your own sockets or a pipe or similar. And again, you can add and remove easy handles to the multi handle at any -point during the looping. Removing a handle mid-transfer will, of course, abort -that transfer. +point during the looping. Removing a handle mid-transfer aborts that transfer. ## When is a single transfer done? As the examples above show, a program can detect when an individual transfer completes by seeing that the `transfers_running` variable decreases. -It can also call `curl_multi_info_read()`, which will return a pointer to a -struct (a "message") if a transfer has ended and you can then find out the -result of that transfer using that struct. +It can also call `curl_multi_info_read()`, which returns a pointer to a struct +(a "message") if a transfer has ended and you can then find out the result of +that transfer using that struct. 
When you do multiple parallel transfers, more than one transfer can of course
complete in the same `curl_multi_perform` invocation and then you might need
diff --git a/libcurl/easyhandle.md b/libcurl/easyhandle.md
index 4f65cb8409..f9e966524b 100644
--- a/libcurl/easyhandle.md
+++ b/libcurl/easyhandle.md
@@ -51,7 +51,7 @@ again, or call `curl_easy_reset()` on the handle.

## Reset

-By calling `curl_easy_reset()`, all options for the given easy handle will be
+By calling `curl_easy_reset()`, all options for the given easy handle are
reset and restored to their default values. The same values the options had
when the handle was initially created. The caches remain intact.

@@ -60,4 +60,4 @@ when the handle was initially created. The caches remain intact.

An easy handle, with all its currently set options, can be duplicated using
`curl_easy_duphandle()`. It returns a copy of the handle passed in to it.

-The caches and other state information will not be carried over.
+The caches and other state information are not carried over.
diff --git a/libcurl/examples/get.md b/libcurl/examples/get.md
index 2ba6f0fcff..fd7406addb 100644
--- a/libcurl/examples/get.md
+++ b/libcurl/examples/get.md
@@ -3,11 +3,11 @@
This example just fetches the HTML from a given URL and sends it to stdout.
Possibly the simplest libcurl program you can write.

-By replacing the URL this will of course be able to get contents over other
-supported protocols as well.
+By replacing the URL, this example can get contents over other supported
+protocols as well.

Getting the output sent to stdout is a default behavior and usually not what
-you actually want. Most applications will instead install a
+you actually want. Most applications instead install a
[write callback](../callbacks/write.md) to receive the data that arrives.

 #include

@@ -22,7 +22,7 @@
Most applications will instead install a if(curl) { curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/"); - /* Perform the request, res will get the return code */ + /* Perform the request, 'res' holds the return code */ res = curl_easy_perform(curl); /* Check for errors */ if(res != CURLE_OK) diff --git a/libcurl/examples/getinmem.md b/libcurl/examples/getinmem.md index 2cfbffe1de..4cc4640d4a 100644 --- a/libcurl/examples/getinmem.md +++ b/libcurl/examples/getinmem.md @@ -50,7 +50,7 @@ from that instead. struct MemoryStruct chunk; - chunk.memory = malloc(1); /* will be grown as needed by the realloc above */ + chunk.memory = malloc(1); /* grown as needed by the realloc above */ chunk.size = 0; /* no data at this point */ curl_global_init(CURL_GLOBAL_ALL); diff --git a/libcurl/examples/login.md b/libcurl/examples/login.md index 59be07a9cd..4d858289e6 100644 --- a/libcurl/examples/login.md +++ b/libcurl/examples/login.md @@ -7,14 +7,14 @@ Once logged in, the target URL can be fetched if the proper cookies are used. As many login-systems work with HTTP redirects, we ask libcurl to follow such if they arrive. -Some login forms will make it more complicated and require that you got -cookies from the page showing the login form etc, so if you need that you may -want to extend this code a little bit. - -By passing in a non-existing cookie file, this example will enable the cookie -parser so incoming cookies will be stored when the response from the login -response arrives and then the subsequent request for the resource will use -those and prove to the server that we are in fact correctly logged in. +Some login forms makes it more complicated and require that you got cookies +from the page showing the login form etc, so if you need that you may want to +extend this code a little bit. 
+
+By passing in a non-existing cookie file, this example enables the cookie
+parser so incoming cookies are stored when the response to the login request
+arrives and then the subsequent request for the resource uses those and
+proves to the server that we are in fact correctly logged in.

 #include
 #include
diff --git a/libcurl/getinfo.md b/libcurl/getinfo.md
index f3fff23964..a780eefdbf 100644
--- a/libcurl/getinfo.md
+++ b/libcurl/getinfo.md
@@ -6,12 +6,12 @@ is cleaned or reused for another transfer, it can be used to extract
information from the previous operation.

Your friend for doing this is called `curl_easy_getinfo()` and you tell it
-which specific information you are interested in and it will return that to
-you if it can.
+which specific information you are interested in and it returns that if it
+can.

When you use this function, you pass in the easy handle, which information you
want and a pointer to a variable to hold the answer. You must pass in a
-pointer to a variable of the correct type or you risk that things will go
+pointer to a variable of the correct type or you risk that things go
sideways. These information values are designed to be provided *after* the
transfer is completed.
@@ -33,63 +33,63 @@ If you want to extract the local port number that was used in that connection: ## Available information -| Getinfo option | Type | Description | -|-------------------------|--------|-------------| -| CURLINFO_ACTIVESOCKET | curl_socket_t | The session's active socket -| CURLINFO_APPCONNECT_TIME | double | Time from start until SSL/SSH handshake completed -| CURLINFO_APPCONNECT_TIME_T | curl_off_t | Time from start until SSL/SSH handshake completed (in microseconds) -| CURLINFO_CERTINFO | struct curl_slist * | Certificate chain -| CURLINFO_CONDITION_UNMET | long | Whether or not a time conditional was met -| CURLINFO_CONNECT_TIME | double | Time from start until remote host or proxy completed -| CURLINFO_CONNECT_TIME_T | curl_off_t | Time from start until remote host or proxy completed (in microseconds) -| CURLINFO_CONTENT_LENGTH_DOWNLOAD | double | Content length from the Content-Length header -| CURLINFO_CONTENT_LENGTH_UPLOAD | double | Upload size -| CURLINFO_CONTENT_TYPE | char * | Content type from the Content-Type header -| CURLINFO_COOKIELIST | struct curl_slist * | List of all known cookies -| CURLINFO_EFFECTIVE_METHOD | char * | Last used HTTP request method -| CURLINFO_EFFECTIVE_URL | char * | Last used URL -| CURLINFO_FILETIME | long | Remote time of the retrieved document -| CURLINFO_FTP_ENTRY_PATH | char * | The entry path after logging in to an FTP server -| CURLINFO_HEADER_SIZE | long | Number of bytes of all headers received -| CURLINFO_HTTP_CONNECTCODE | long | Last proxy CONNECT response code -| CURLINFO_HTTP_VERSION | long | The HTTP version used in the connection -| CURLINFO_HTTPAUTH_AVAIL | long | Available HTTP authentication methods (bitmask) -| CURLINFO_LASTSOCKET | long | Last socket used -| CURLINFO_LOCAL_IP | char * | Local-end IP address of last connection -| CURLINFO_LOCAL_PORT | long | Local-end port of last connection -| CURLINFO_NAMELOOKUP_TIME | double | Time from start until name resolving completed -| 
CURLINFO_NAMELOOKUP_TIME_T | curl_off_t | Time from start until name resolving completed (in microseconds) -| CURLINFO_NUM_CONNECTS | long | Number of new successful connections used for previous transfer -| CURLINFO_OS_ERRNO | long | The errno from the last failure to connect -| CURLINFO_PRETRANSFER_TIME | double | Time from start until just before the transfer begins -| CURLINFO_PRETRANSFER_TIME_T | curl_off_T | Time from start until just before the transfer begins (in microseconds) -| CURLINFO_PRIMARY_IP | char * | IP address of the last connection -| CURLINFO_PRIMARY_PORT | long | Port of the last connection -| CURLINFO_PRIVATE | char * | User's private data pointer -| CURLINFO_PROTOCOL | long | The protocol used for the connection -| CURLINFO_PROXY_ERROR | long | Detailed (SOCKS) proxy error if `CURLE_PROXY` was returned from the transfer -| CURLINFO_PROXY_SSL_VERIFYRESULT | long | Proxy certificate verification result -| CURLINFO_PROXYAUTH_AVAIL | long | Available HTTP proxy authentication methods -| CURLINFO_REDIRECT_COUNT | long | Total number of redirects that were followed -| CURLINFO_REDIRECT_TIME | double | Time taken for all redirect steps before the final transfer -| CURLINFO_REDIRECT_TIME_T | curl_off_t | Time taken for all redirect steps before the final transfer (in microseconds) -| CURLINFO_REDIRECT_URL | char * | URL a redirect would take you to, had you enabled redirects -| CURLINFO_REQUEST_SIZE | long | Number of bytes sent in the issued HTTP requests -| CURLINFO_RESPONSE_CODE | long | Last received response code -| CURLINFO_RETRY_AFTER | curl_off_t | The value from the response `Retry-After:` header -| CURLINFO_RTSP_CLIENT_CSEQ | long | RTSP CSeq that will next be used -| CURLINFO_RTSP_CSEQ_RECV | long | RTSP CSeq last received -| CURLINFO_RTSP_SERVER_CSEQ | long | RTSP CSeq that will next be expected -| CURLINFO_RTSP_SESSION_ID | char * | RTSP session ID -| CURLINFO_SCHEME | char * | The scheme used for the connection -| 
CURLINFO_SIZE_DOWNLOAD | double | Number of bytes downloaded -| CURLINFO_SIZE_UPLOAD | double | Number of bytes uploaded -| CURLINFO_SPEED_DOWNLOAD | double | Average download speed -| CURLINFO_SPEED_UPLOAD | double | Average upload speed -| CURLINFO_SSL_ENGINES | struct curl_slist * | A list of OpenSSL crypto engines -| CURLINFO_SSL_VERIFYRESULT | long | Certificate verification result -| CURLINFO_STARTTRANSFER_TIME | double | Time from start until just when the first byte is received -| CURLINFO_STARTTRANSFER_TIME_T | curl_off_t | Time from start until just when the first byte is received (in microseconds) -| CURLINFO_TLS_SSL_PTR | struct curl_slist * | TLS session info that can be used for further processing -| CURLINFO_TOTAL_TIME | double | Total time of previous transfer -| CURLINFO_TOTAL_TIME_T | curl_off_t | Total time of previous transfer (in microseconds) +| Getinfo option | Type | Description | +|------------------------------------|-----------------------|-------------------------------------------------------------------------------| +| `CURLINFO_ACTIVESOCKET` | `curl_socket_t` | The session's active socket | +| `CURLINFO_APPCONNECT_TIME` | `double` | Time from start until SSL/SSH handshake completed | +| `CURLINFO_APPCONNECT_TIME_T` | `curl_off_t` | Time from start until SSL/SSH handshake completed (in microseconds) | +| `CURLINFO_CERTINFO` | `struct curl_slist *` | Certificate chain | +| `CURLINFO_CONDITION_UNMET` | `long` | Whether or not a time conditional was met | +| `CURLINFO_CONNECT_TIME` | `double` | Time from start until remote host or proxy completed | +| `CURLINFO_CONNECT_TIME_T` | `curl_off_t` | Time from start until remote host or proxy completed (in microseconds) | +| `CURLINFO_CONTENT_LENGTH_DOWNLOAD` | `double` | Content length from the Content-Length header | +| `CURLINFO_CONTENT_LENGTH_UPLOAD` | `double` | Upload size | +| `CURLINFO_CONTENT_TYPE` | `char *` | Content type from the Content-Type header | +| `CURLINFO_COOKIELIST` | 
`struct curl_slist *` | List of all known cookies | +| `CURLINFO_EFFECTIVE_METHOD` | `char *` | Last used HTTP request method | +| `CURLINFO_EFFECTIVE_URL` | `char *` | Last used URL | +| `CURLINFO_FILETIME` | `long` | Remote time of the retrieved document | +| `CURLINFO_FTP_ENTRY_PATH` | `char *` | The entry path after logging in to an FTP server | +| `CURLINFO_HEADER_SIZE` | `long` | Number of bytes of all headers received | +| `CURLINFO_HTTP_CONNECTCODE` | `long` | Last proxy CONNECT response code | +| `CURLINFO_HTTP_VERSION` | `long` | The HTTP version used in the connection | +| `CURLINFO_HTTPAUTH_AVAIL` | `long` | Available HTTP authentication methods (bitmask) | +| `CURLINFO_LASTSOCKET` | `long` | Last socket used | +| `CURLINFO_LOCAL_IP` | `char *` | Local-end IP address of last connection | +| `CURLINFO_LOCAL_PORT` | `long` | Local-end port of last connection | +| `CURLINFO_NAMELOOKUP_TIME` | `double` | Time from start until name resolving completed | +| `CURLINFO_NAMELOOKUP_TIME_T` | `curl_off_t` | Time from start until name resolving completed (in microseconds) | +| `CURLINFO_NUM_CONNECTS` | `long` | Number of new successful connections used for previous transfer | +| `CURLINFO_OS_ERRNO` | `long` | The errno from the last failure to connect | +| `CURLINFO_PRETRANSFER_TIME` | `double` | Time from start until just before the transfer begins | +| `CURLINFO_PRETRANSFER_TIME_T` | `curl_off_t` | Time from start until just before the transfer begins (in microseconds) | +| `CURLINFO_PRIMARY_IP` | `char *` | IP address of the last connection | +| `CURLINFO_PRIMARY_PORT` | `long` | Port of the last connection | +| `CURLINFO_PRIVATE` | `char *` | User's private data pointer | +| `CURLINFO_PROTOCOL` | `long` | The protocol used for the connection | +| `CURLINFO_PROXY_ERROR` | `long` | Detailed (SOCKS) proxy error if `CURLE_PROXY` was returned from the transfer | +| `CURLINFO_PROXY_SSL_VERIFYRESULT` | `long` | Proxy certificate verification result | +| 
`CURLINFO_PROXYAUTH_AVAIL`          | `long`                | Available HTTP proxy authentication methods                                   |
+| `CURLINFO_REDIRECT_COUNT`          | `long`                | Total number of redirects that were followed                                  |
+| `CURLINFO_REDIRECT_TIME`           | `double`              | Time taken for all redirect steps before the final transfer                   |
+| `CURLINFO_REDIRECT_TIME_T`         | `curl_off_t`          | Time taken for all redirect steps before the final transfer (in microseconds) |
+| `CURLINFO_REDIRECT_URL`            | `char *`              | URL a redirect would take you to, had you enabled redirects                   |
+| `CURLINFO_REQUEST_SIZE`            | `long`                | Number of bytes sent in the issued HTTP requests                              |
+| `CURLINFO_RESPONSE_CODE`           | `long`                | Last received response code                                                   |
+| `CURLINFO_RETRY_AFTER`             | `curl_off_t`          | The value from the response `Retry-After:` header                             |
+| `CURLINFO_RTSP_CLIENT_CSEQ`        | `long`                | RTSP CSeq to be used next                                                     |
+| `CURLINFO_RTSP_CSEQ_RECV`          | `long`                | RTSP CSeq last received                                                       |
+| `CURLINFO_RTSP_SERVER_CSEQ`        | `long`                | RTSP next expected server CSeq                                                |
+| `CURLINFO_RTSP_SESSION_ID`         | `char *`              | RTSP session ID                                                               |
+| `CURLINFO_SCHEME`                  | `char *`              | The scheme used for the connection                                            |
+| `CURLINFO_SIZE_DOWNLOAD`           | `double`              | Number of bytes downloaded                                                    |
+| `CURLINFO_SIZE_UPLOAD`             | `double`              | Number of bytes uploaded                                                      |
+| `CURLINFO_SPEED_DOWNLOAD`          | `double`              | Average download speed                                                        |
+| `CURLINFO_SPEED_UPLOAD`            | `double`              | Average upload speed                                                          |
+| `CURLINFO_SSL_ENGINES`             | `struct curl_slist *` | A list of OpenSSL crypto engines                                              |
+| `CURLINFO_SSL_VERIFYRESULT`        | `long`                | Certificate verification result                                               |
+| `CURLINFO_STARTTRANSFER_TIME`      | `double`              | Time from start until just when the first byte is received                    |
+| `CURLINFO_STARTTRANSFER_TIME_T`    | `curl_off_t`          | Time from start until just when the first byte is received (in microseconds)  |
+| `CURLINFO_TLS_SSL_PTR`             | `struct curl_slist *` | TLS session info that can be used for further processing                      |
+| `CURLINFO_TOTAL_TIME`              | `double`              | Total time of previous transfer                                               |
+| `CURLINFO_TOTAL_TIME_T`            | `curl_off_t`          | Total
time of previous transfer (in microseconds) | diff --git a/libcurl/options.md b/libcurl/options.md index c9eedc4c41..790e53e5ff 100644 --- a/libcurl/options.md +++ b/libcurl/options.md @@ -22,9 +22,9 @@ contents could look like: CURLcode ret = curl_easy_setopt(easy, CURLOPT_URL, "http://example.com"); -Again: this only sets the option in the handle. It will not do the actual -transfer or anything. It will just tell libcurl to copy the string and if that -works it returns OK. +Again: this only sets the option in the handle. It does not do the actual +transfer or anything. It just tells libcurl to copy the given string and if +that works it returns OK. It is, of course, good form to check the return code to see that nothing went wrong. diff --git a/libcurl/options/all.md b/libcurl/options/all.md index 40576af989..6c183d9490 100644 --- a/libcurl/options/all.md +++ b/libcurl/options/all.md @@ -1,8 +1,6 @@ # All options -This is a table of a complete list of all available options for -`curl_easy_setopt()` as of what will be present in the 7.83.0 release, April -2022. +This is a table of available options for `curl_easy_setopt()`. | Option | Purpose | |--------|---------| diff --git a/libcurl/options/info.md b/libcurl/options/info.md index f3e6d297c3..0becfafafc 100644 --- a/libcurl/options/info.md +++ b/libcurl/options/info.md @@ -65,5 +65,5 @@ like this: There is only one bit with a defined meaning in 'flags': if `CURLOT_FLAG_ALIAS` is set, it means that that option is an "alias". A name provided for backwards compatibility that is nowadays rather served by an -option with another name. If you lookup the ID for an alias, you will get the -new canonical name for that option. +option with another name. If you lookup the ID for an alias, you get the new +canonical name for that option. 
diff --git a/libcurl/options/tls.md b/libcurl/options/tls.md
index e3e32482f1..6628ecc730 100644
--- a/libcurl/options/tls.md
+++ b/libcurl/options/tls.md
@@ -5,20 +5,20 @@ options for curl_easy_setopt that are dedicated for controlling how libcurl
 does SSL and TLS.
 
 Transfers done using TLS use safe defaults but since curl is used in many
-different scenarios and setups, chances are you will end up in situations
-where you want to change those behaviors.
+different scenarios and setups, chances are you end up in situations where you
+want to change those behaviors.
 
 ## Protocol version
 
 With `CURLOPT_SSLVERSION' and `CURLOPT_PROXY_SSLVERSION`you can specify which
 SSL or TLS protocol range that is acceptable to you. Traditionally SSL and TLS
 protocol versions have been found detect and unsuitable for use over time and
-even if curl itself will raise its default lower version over time you might
-want to opt for only using the latest and most security protocol versions.
+even if curl itself raises its default lower version over time you might want
+to opt for only using the latest and most secure protocol versions.
 
 These options take a lowest acceptable version and optionally a maximum. If
 the server cannot negotiate a connection with that condition, the transfer
-will fail.
+fails.
 
 Example:
 
@@ -38,7 +38,7 @@ A TLS-using client needs to verify that the server it speaks to is the
 correct and trusted one. This is done by verifying that the server's
 certificate is signed by a Certificate Authority (CA) for which curl has a
 public key for and that the certificate contains the server's name. Failing
 any of these checks
-will cause the transfer to fail.
+causes the transfer to fail.
 
 For development purposes and for experimenting, curl allows an application to
 switch off either or both of these checks for the server or for an HTTPS
@@ -55,7 +55,7 @@ proxy.
 Optionally, you can tell curl to verify the certificate's public key against a
 known hash using `CURLOPT_PINNEDPUBLICKEY` or `CURLOPT_PROXY_PINNEDPUBLICKEY`.
-Here too, a mismatch will cause the transfer to fail.
+Here too, a mismatch causes the transfer to fail.
 
 ## Authentication
diff --git a/libcurl/sharing.md b/libcurl/sharing.md
index 89a61c92d9..8e30e143b6 100644
--- a/libcurl/sharing.md
+++ b/libcurl/sharing.md
@@ -31,9 +31,9 @@ You then set up the corresponding transfer to use this share object:
 
     curl_easy_setopt(curl, CURLOPT_SHARE, share);
 
-Transfers done with this `curl` handle will thus use and store its cookie and
-dns information in the `share` handle. You can set several easy handles to
-share the same share object.
+Transfers done with this `curl` handle use and store their cookie and dns
+information in the `share` handle. You can set several easy handles to share
+the same share object.
 
 ## What to share
@@ -48,10 +48,10 @@ resolved host names for a while to make subsequent lookups faster.
 
 resume information for SSL connections to be able to resume a previous
 connection faster.
 
-`CURL_LOCK_DATA_CONNECT` - when set, this handle will use a shared connection
-cache and thus will probably be more likely to find existing connections to
-re-use etc, which may result in faster performance when doing multiple
-transfers to the same host in a serial manner.
+`CURL_LOCK_DATA_CONNECT` - when set, this handle uses a shared connection
+cache and thus is more likely to find existing connections to re-use etc,
+which may result in faster performance when doing multiple transfers to the
+same host in a serial manner.
 
 ## Locking
@@ -89,7 +89,7 @@ With the corresponding unlock callback could look like:
 
 ## Unshare
 
-A transfer will use the share object during its transfer and share what that
+A transfer uses the share object during its transfer and shares what that
 object has been specified to share with other handles sharing the same
 object.
 In a subsequent transfer, `CURLOPT_SHARE` can be set to NULL to prevent a
@@ -97,9 +97,9 @@ transfer from continuing to share. It that case, the handle may start the next
 transfer with empty caches for the data that was previously shared.
 
 Between two transfers, a share object can also get updated to share a
-different set of properties so that the handles that share that object will
-share a different set of data next time. You remove an item to share from a
-shared object with the curl_share_setopt()'s `CURLSHOPT_UNSHARE` option like
-this when unsharing DNS data:
+different set of properties so that the handles that share that object share
+a different set of data next time. You remove an item to share from a shared
+object with the curl_share_setopt()'s `CURLSHOPT_UNSHARE` option like this
+when unsharing DNS data:
 
     curl_share_setopt(share, CURLSHOPT_UNSHARE, CURL_LOCK_DATA_DNS);
diff --git a/libcurl/url/append-query.md b/libcurl/url/append-query.md
index 78e5663298..4f33447b74 100644
--- a/libcurl/url/append-query.md
+++ b/libcurl/url/append-query.md
@@ -8,14 +8,13 @@ application can then add the string `hat=1` to the query part like this:
 
     rc = curl_url_set(urlp, CURLUPART_QUERY, "hat=1", CURLU_APPENDQUERY);
 
-It will even notice the lack of an ampersand (`&`) separator so it will inject
-one too, and the handle's full URL would then equal
+It even notices the lack of an ampersand (`&`) separator so it injects one
+too, and the handle's full URL would then equal
 `https://example.com/?shoes=2&hat=1`.
 
 The appended string can of course also get URL encoded on add, and if asked,
-the encoding will skip the '=' character. For example, append `candy=M&M` to
-what we already have, and URL encode it to deal with the ampersand in the
-data:
+the encoding skips the `=` character. For example, append `candy=M&M` to what
+we already have, and URL encode it to deal with the ampersand in the data:
 
     rc = curl_url_set(urlp, CURLUPART_QUERY, "candy=M&M",
                       CURLU_APPENDQUERY | CURLU_URLENCODE);
diff --git a/libcurl/url/get-part.md b/libcurl/url/get-part.md
index 8e03daa20b..751d565fbf 100644
--- a/libcurl/url/get-part.md
+++ b/libcurl/url/get-part.md
@@ -48,8 +48,7 @@ looks like this:
 
     http://joe:7Hbz@example.com:8080/images?id=5445#footer
 
-When this URL is parsed by curl, it will store the different components like
-this:
+When this URL is parsed by curl, it stores the different components like this:
 
 | text | part |
 |---------------|----------------------|
@@ -79,4 +78,4 @@ For this URL, curl extracts:
 | `eth0` | `CURLUPART_ZONEID` |
 | `/` | `CURLUPART_PATH` |
 
-Asking for any other component will return non-zero as they are missing.
+Asking for any other component returns non-zero as they are missing.
diff --git a/libcurl/url/get.md b/libcurl/url/get.md
index 78de5a8fe4..7189d17c22 100644
--- a/libcurl/url/get.md
+++ b/libcurl/url/get.md
@@ -21,46 +21,46 @@ function's fourth argument. You can set zero, one or more bits.
 
 ## `CURLU_DEFAULT_PORT`
 
-If the URL handle has no port number stored, this option will make
+If the URL handle has no port number stored, this option makes
 `curl_url_get()` return the default port for the used scheme.
 
 ## `CURLU_DEFAULT_SCHEME`
 
-If the handle has no scheme stored, this option will make `curl_url_get()`
-return the default scheme instead of error.
+If the handle has no scheme stored, this option makes `curl_url_get()` return
+the default scheme instead of an error.
 
 ## `CURLU_NO_DEFAULT_PORT`
 
 Instructs `curl_url_get()` to *not* use a port number in the generated URL if
 that port number matches the default port used for the scheme. For example, if
-port number 443 is set and the scheme is `https`, the extracted URL will not
+port number 443 is set and the scheme is `https`, the extracted URL does not
 include the port number.
 
 ## `CURLU_URLENCODE`
 
-If set, will make `curl_url_get()` URL encode the host name part when a full
-URL is retrieved. If not set (default), libcurl returns the URL with the host
-name "raw" to support IDN names to appear as-is. IDN host names are typically
-using non-ASCII bytes that otherwise will be percent-encoded.
+This flag makes `curl_url_get()` URL encode the host name part when a full URL
+is retrieved. If not set (default), libcurl returns the URL with the host name
+"raw" to support IDN names to appear as-is. IDN host names are typically using
+non-ASCII bytes that otherwise are percent-encoded.
 
-Note that even when not asking for URL encoding, the `%` (byte 37) will be URL
+Note that even when not asking for URL encoding, the `%` (byte 37) is URL
 encoded in host names to make sure the host name remains valid.
 
 ## `CURLU_URLDECODE`
 
-Tells `curl_url_get()` to URL decode the contents before returning it. It will
-not attempt to decode the scheme, the port number or the full URL. The query
-component will also get plus-to-space conversion as a bonus when this bit is
-set. Note that this URL decoding is charset unaware and you will get a zero
+Tells `curl_url_get()` to URL decode the contents before returning it. It does
+not attempt to decode the scheme, the port number or the full URL. The query
+component also gets plus-to-space conversion as a bonus when this bit is
+set. Note that this URL decoding is charset unaware and you get a zero
 terminated string back with data that could be intended for a particular
 encoding. If there are any byte values lower than 32 in the decoded string,
-the get operation will return an error instead.
+the get operation instead returns an error.
 
 ## `CURLU_PUNYCODE`
 
 If set and `CURLU_URLENCODE` is not set, and asked to retrieve the
 `CURLUPART_HOST` or `CURLUPART_URL` parts, libcurl returns the host name in
 its punycode version if it contains any non-ASCII octets (and is an IDN
-name). If libcurl is built without IDN capabilities, using this bit will make
+name). If libcurl is built without IDN capabilities, using this bit makes
 `curl_url_get()` return `CURLUE_LACKS_IDN` if the host name contains anything
 outside the ASCII range.
diff --git a/libcurl/url/parse.md b/libcurl/url/parse.md
index 4e95c2ff83..915422dbf6 100644
--- a/libcurl/url/parse.md
+++ b/libcurl/url/parse.md
@@ -28,7 +28,7 @@ from that: like spaces or "control characters".
 
 If the passed in string does not use a scheme, assume that the default one was
 intended. The default scheme is HTTPS. If this is not set, a URL without a
-scheme part will not be accepted as valid. Overrides the `CURLU_GUESS_SCHEME`
+scheme part is not accepted as valid. Overrides the `CURLU_GUESS_SCHEME`
 option if both are set.
 
 ## `CURLU_GUESS_SCHEME`
@@ -36,7 +36,7 @@ option if both are set.
 
 Makes libcurl allow the URL to be set without a scheme and it instead
 "guesses" which scheme that was intended based on the host name. If the
 outermost sub-domain name matches DICT, FTP, IMAP, LDAP, POP3 or SMTP then
-that scheme will be used, otherwise it picks HTTP. Conflicts with the
+that scheme is used, otherwise it picks HTTP. Conflicts with the
 `CURLU_DEFAULT_SCHEME` option which takes precedence if both are set.
 
 ## `CURLU_NO_AUTHORITY`
@@ -58,8 +58,7 @@ used for transfers is called `CURLOPT_PATH_AS_IS`.
 
 Makes the URL parser allow space (ASCII 32) where possible. The URL syntax
 does normally not allow spaces anywhere, but they should be encoded as `%20`
 or `+`. When spaces are allowed, they are still not allowed in the
-scheme. When space is used and allowed in a URL, it will be stored as-is
-unless `CURLU_URLENCODE` is also set, which then makes libcurl URL-encode the
-space before stored. This affects how the URL will be constructed when
-`curl_url_get()` is subsequently used to extract the full URL or individual
-parts.
+scheme. When space is used and allowed in a URL, it is stored as-is unless
+`CURLU_URLENCODE` is also set, which then makes libcurl URL-encode the space
+before it is stored. This affects how the URL is constructed when
+`curl_url_get()` is subsequently used to extract the full URL or individual parts.
diff --git a/libcurl/url/redirect.md b/libcurl/url/redirect.md
index 0b7886342d..2c0a976e1f 100644
--- a/libcurl/url/redirect.md
+++ b/libcurl/url/redirect.md
@@ -1,7 +1,7 @@
 # Redirect to a relative URL
 
-When the handle already has parsed a URL, setting a second relative URL will
-make it "redirect" to adapt to it.
+When the handle already has parsed a URL, setting a second relative URL makes
+it "redirect" to adapt to it.
 
 Example, first set the original URL then set the one we "redirect" to:
 
diff --git a/libcurl/verbose.md b/libcurl/verbose.md
index 20bdc4d430..4391e7545f 100644
--- a/libcurl/verbose.md
+++ b/libcurl/verbose.md
@@ -12,10 +12,10 @@ applications or debugging libcurl itself, is to enable "verbose mode" with
 
     CURLcode ret = curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);
 
-When libcurl is told to be verbose it will mention transfer-related details
-and information to stderr while the transfer is ongoing. This is awesome to
-figure out why things fail and to learn exactly what libcurl does when you ask
-it different things. You can redirect the output elsewhere by changing stderr
+When libcurl is told to be verbose it outputs transfer-related details and
+information to stderr while the transfer is ongoing. This is awesome to figure
+out why things fail and to learn exactly what libcurl does when you ask it
+different things. You can redirect the output elsewhere by changing stderr
 with `CURLOPT_STDERR` or you can get even more info in a fancier way with the
 debug callback (explained further in a later section).
 
@@ -32,8 +32,8 @@ TLS or SSH based protocols when capturing the data off the network for
 debugging is not practical.
 
 When you set the `CURLOPT_DEBUGFUNCTION` option, you still need to have
-`CURLOPT_VERBOSE` enabled but with the trace callback set libcurl will use
-that callback instead of its internal handling.
+`CURLOPT_VERBOSE` enabled but with the trace callback set libcurl uses that
+callback instead of its internal handling.
 
 The trace callback should match a prototype like this:
 
@@ -45,10 +45,10 @@ data passed to the callback (data in/out, header in/out, TLS data in/out and
 "text"), **data** is a pointer pointing to the data being **size** number of
 bytes. **user** is the custom pointer you set with `CURLOPT_DEBUGDATA`.
 
-The data pointed to by **data** *will not* be zero terminated, but will be
-exactly of the size as told by the **size** argument.
+The data pointed to by **data** is *not* null terminated, but is exactly of
+the size as told by the **size** argument.
 
-The callback must return 0 or libcurl will consider it an error and abort the
+The callback must return 0 or libcurl considers it an error and aborts the
 transfer.
 
 On the curl website, we host an example called
@@ -97,7 +97,7 @@ example, include TLS and HTTP/2 details:
 
     /* log details of HTTP/2 and SSL handling */
    curl_global_trace("http/2,ssl");
 
-The exact set of ares will vary, but here are some ones to try:
+The exact set of areas varies, but here are a few to try:
 
 | area | description |
 |----------|-------------------------------------------------|
diff --git a/libcurl/ws/concept.md b/libcurl/ws/concept.md
index 9f3b04a2a8..73a3d6bd4f 100644
--- a/libcurl/ws/concept.md
+++ b/libcurl/ws/concept.md
@@ -31,6 +31,6 @@ transfer is therefor considered an error when asking for a WebSocket transfer.
 
 ## Automatic `PONG`
 
-If not using [raw mode](options.md), libcurl will automatically respond with
-the appropriate `PONG` response for incoming `PING` frames and not expose them
-in the API.
+If not using [raw mode](options.md), libcurl automatically responds with the
+appropriate `PONG` response for incoming `PING` frames and does not expose
+them in the API.
diff --git a/libcurl/ws/meta.md b/libcurl/ws/meta.md
index eff9c1f530..0c909e2275 100644
--- a/libcurl/ws/meta.md
+++ b/libcurl/ws/meta.md
@@ -25,15 +25,15 @@ The `flags' field is a bitmask describing details of data.
 
 ### `CURLWS_TEXT`
 
 The buffer contains text data. Note that this makes a difference to WebSocket
-but libcurl itself will not make any verification of the content or
-precautions that you actually receive valid UTF-8 content.
+but libcurl itself does not make any verification of the content or take
+precautions that you actually receive valid UTF-8 content.
 
 ### `CURLWS_BINARY`
 
 This is binary data.
 
 ### `CURLWS_FINAL`
 
 This is the final fragment of the message, if this is not set, it implies that
-there will be another fragment coming as part of the same message.
+there is another fragment coming as part of the same message.
 
 ### `CURLWS_CLOSE`
 
 This transfer is now closed.
diff --git a/libcurl/ws/options.md b/libcurl/ws/options.md
index ea149480b3..7970fb15e3 100644
--- a/libcurl/ws/options.md
+++ b/libcurl/ws/options.md
@@ -8,10 +8,10 @@ only a single bit used.
 
 ## Raw mode
 
-By setting the `CURLWS_RAW_MODE` bit in the bitmask, libcurl will deliver all
+By setting the `CURLWS_RAW_MODE` bit in the bitmask, libcurl delivers all
 WebSocket traffic "raw" to the write callback instead of parsing the WebSocket
 traffic itself. This raw mode is intended for applications that maybe
 implemented WebSocket handling already and want to just move over to use
 libcurl for the transfer and maintain its own WebSocket logic.
 
-In raw mode, libcurl will also not handle any PING traffic automatically.
+In raw mode, libcurl also does not handle any PING traffic automatically.
diff --git a/libcurl/ws/read.md b/libcurl/ws/read.md
index 04f234c81a..4abefe5b57 100644
--- a/libcurl/ws/read.md
+++ b/libcurl/ws/read.md
@@ -5,7 +5,7 @@ these two methods:
 
 ## Write callback
 
-When the `CURLOPT_CONNECT_ONLY` option is **not** set, WebSocket data will be
+When the `CURLOPT_CONNECT_ONLY` option is **not** set, WebSocket data is
 delivered to the write callback.
 
 In the default "frame mode" (as opposed to "raw mode"), libcurl delivers parts
diff --git a/libcurl/ws/write.md b/libcurl/ws/write.md
index 056166181b..491890a187 100644
--- a/libcurl/ws/write.md
+++ b/libcurl/ws/write.md
@@ -39,16 +39,15 @@ until all pieces have been sent that constitute the whole fragment.
 
 ### `CURLWS_TEXT`
 
 The buffer contains text data. Note that this makes a difference to WebSocket
-but libcurl itself will not make any verification of the content or
-precautions that you actually send valid UTF-8 content.
+but libcurl itself does not perform any verification of the content or take
+any precautions that you actually send valid UTF-8 content.
 
 ### `CURLWS_BINARY`
 
 This is binary data.
 
 ### `CURLWS_CONT`
 
-This is not the final fragment of the message, which implies that there will
-be another fragment coming as part of the same message where this bit is not
-set.
+This is not the final fragment of the message, which implies that there is
+another fragment coming as part of the same message where this bit is not set.
 
 ### `CURLWS_CLOSE`
 
 Close this transfer.
@@ -61,10 +60,10 @@ This as a pong.
 
 ### `CURLWS_OFFSET`
 
-The provided data is only a partial fragment and there will be more in a
-following call to `curl_ws_send()`. When sending only a piece of the fragment
-like this, the `fragsize` must be provided with the total expected frame size
-in the first call and it needs to be zero in subsequent calls.
+The provided data is only a partial fragment and there is more data coming in
+a following call to `curl_ws_send()`. When sending only a piece of the
+fragment like this, the `fragsize` must be provided with the total expected
+frame size in the first call and it needs to be zero in subsequent calls.
 
 When `CURLWS_OFFSET` is set, no other flag bits should be set as this is a
 continuation of a previous send and the bits describing the fragments were set
diff --git a/project/bugs.md b/project/bugs.md
index 1cd74de404..55626fa478 100644
--- a/project/bugs.md
+++ b/project/bugs.md
@@ -65,8 +65,9 @@ a number of volunteers who also help out by running the test suite
 automatically a few times per day to make sure the latest commits are tested.
 This way we discover the worst flaws not long after their introduction.
 
-We do not test everything, and even when we try to test things there will always
-be subtle bugs that get through, some of which are only discovered years later.
+We do not test everything, and even when we try to test things there are
+always subtle bugs that get through, some of which are only discovered years
+later.
 
 Due to the nature of different systems and funny use cases on the Internet,
 eventually some of the best testing is done by users when they run the code to
diff --git a/project/future.md b/project/future.md
index b5b3d54f9b..700d06cfa3 100644
--- a/project/future.md
+++ b/project/future.md
@@ -8,6 +8,8 @@ We are looking forward to support for more protocols, support for more features
 
 The project casually maintains a [TODO](https://curl.se/docs/todo.html) file
 holding a bunch of ideas that we could work on in the future. It also keeps a
 [KNOWN\_BUGS](https://curl.se/docs/knownbugs.html) document with a list of
 known problems we would like to fix.
 
-There is a [ROADMAP](https://curl.se/dev/roadmap.html) document that describes some plans for the short-term that some of the active developers thought they would work on next. Of course, we can not promise that we will always follow it perfectly.
+There is a [ROADMAP](https://curl.se/dev/roadmap.html) document that describes
+some plans for the short-term that some of the active developers thought they
+would work on next. Of course, we can not promise that we always follow it.
 
 We are highly dependent on developers to join in and work on what they want to
 get done, be it bug fixes or new features.
diff --git a/project/maillists.md b/project/maillists.md
index 08d6e05294..a2ab3d0f5e 100644
--- a/project/maillists.md
+++ b/project/maillists.md
@@ -15,9 +15,8 @@ See [curl-users](https://lists.haxx.se/listinfo/curl-users)
 
 ## curl-library
 
 The main development list, and also for users of libcurl. We discuss how to
-use libcurl in applications as well as development of libcurl itself. You will
-find lots of questions on libcurl behavior, debugging and documentation
-issues.
+use libcurl in applications as well as development of libcurl itself. Expect
+lots of questions on libcurl behavior, debugging and documentation issues.
 
 See [curl-library](https://lists.haxx.se/listinfo/curl-library)
diff --git a/project/name.md b/project/name.md
index 6fe3bbb3f6..685b4e6bdd 100644
--- a/project/name.md
+++ b/project/name.md
@@ -33,9 +33,9 @@ Soon after our curl was created another "curl" appeared that created a
 programming language. That curl still [exists](https://www.curl.com).
 
 Several libcurl bindings for various programming languages use the term "curl"
-or "CURL" in part or completely to describe their bindings. Sometimes
-you will find users talking about curl but referring to neither the command-line tool
-nor the library that is made by this project.
+or "CURL" in part or completely to describe their bindings. Sometimes you find
+users talking about curl but referring to neither the command-line tool nor
+the library that is made by this project.
 
 ## As a verb
diff --git a/project/security.md b/project/security.md
index 95f2e01dda..45edc7ef34 100644
--- a/project/security.md
+++ b/project/security.md
@@ -3,8 +3,8 @@ Security is a primary concern for us in the curl project. We take it seriously
 and we work hard on providing secure and safe implementations of all protocols
 and related code. As soon as we get knowledge about a security related problem
-or just a suspected problem, we deal with it and we will attempt to provide a
-fix and security notice no later than in the next pending release.
+or just a suspected problem, we deal with it and we attempt to provide a fix
+and security notice no later than in the next pending release.
 
 We use a responsible disclosure policy, meaning that we prefer to discuss and
 work on security fixes out of the public eye and we alert the vendors on the
diff --git a/protocols/network.md b/protocols/network.md
index 0f344baa17..2ee47d927f 100644
--- a/protocols/network.md
+++ b/protocols/network.md
@@ -27,7 +27,7 @@ typically embedded in the URL that we work with when we use tools like curl or
 a browser.
 
 We might use a URL like `http://example.com/index.html`, which means the
-client will connect to and communicate with the host named example.com.
+client connects to and communicates with the host named example.com.
 
 ## Host name resolving
 
@@ -41,10 +41,10 @@ addresses—all the names on the Internet, really.
 
 The computer normally already knows the address of a computer that runs the
 DNS server as that is part of setting up the network.
 
-The network client will therefore ask the DNS server, "Hello, please give me
-all the addresses for example.com", and the server responds with a list of
-them. Or in case of spelling errors, it can answer back that the name does not
-exist.
+The network client therefore asks the DNS server, *Hello, please give me all
+the addresses for `example.com`*. The DNS server responds with a list of
+addresses. Or in case of spelling errors, it can answer back that the
+name does not exist.
 
 ## Establish a connection
@@ -56,10 +56,10 @@ Protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)) or
 connecting an invisible string between two computers. Once established, the
 string can be used to send a stream of data in both directions.
 
-If the client has received more than one address for the host, it will
-traverse that list of addresses when connecting, and if one address fails it
-will try to connect to the next one, repeating until either one address works
-or they have all failed.
+If the client has received more than one address for the host, it traverses
+that list of addresses when connecting, and if one address fails it tries to
+connect to the next one, repeating until either one address works or they have
+all failed.
 
 ## Connect to port numbers
 
@@ -74,17 +74,17 @@ specifies a *scheme* called `HTTP` which tells the client that it should try
 TCP port number 80 on the server by default. If the URL uses `HTTPS` instead,
 the default port number is 443.
 
-The URL can include a custom port number. If a port number is not specified,
-the client will use the default port for the scheme used in the URL.
+The URL can include a custom port number. If a port number is not specified,
+the client uses the default port for the scheme used in the URL.
 
 ## Security
 
-After a TCP connection has been established, many transfers will require that
-both sides negotiate a better security level before continuing (if for example
+After a TCP connection has been established, many transfers require that both
+sides negotiate a better security level before continuing (if for example
 `HTTPS` is used), which is done with TLS ([Transport Layer
 Security](https://en.wikipedia.org/wiki/Transport_Layer_Security)). If so, the
-client and server will do a TLS handshake first, and will continue further
-only if that succeeds.
+client and server do a TLS handshake first, and continue further only if that
+succeeds.
 
 If the connection is done using QUIC, the TLS handshake is done automatically
 in the connect phase.
diff --git a/protocols/protocols.md b/protocols/protocols.md
index 029f714836..eda983e7e1 100644
--- a/protocols/protocols.md
+++ b/protocols/protocols.md
@@ -100,11 +100,11 @@ the specs.
 
 In the curl project we use the published specs as rules on how to act until we
 learn anything else. If popular alternative implementations act differently
 than what we think the spec says and that alternative behavior is what works
-widely on the big Internet, then chances are we will change foot and instead
-decide to act like those others. If a server refuses to talk with us when we
-think we follow the spec but works fine when we bend the rules ever so
-slightly, then we probably end up bending them exactly that way—if we can
-still work successfully with other implementations.
+widely on the big Internet, then chances are we change foot and instead decide
+to act like those others. If a server refuses to talk with us when we think we
+follow the spec but works fine when we bend the rules ever so slightly, then
+we probably end up bending them exactly that way—if we can still work
+successfully with other implementations.
 
 Ultimately, it is a personal decision and up for discussion in every case
 where we think a spec and the real world do not align.
diff --git a/source.md b/source.md
index cd78c22550..0cf79e62c0 100644
--- a/source.md
+++ b/source.md
@@ -21,5 +21,5 @@ repository](https://github.com/curl/curl/).
 
     git clone https://github.com/curl/curl.git
 
-This will get the latest curl code downloaded and unpacked in a directory on
-your local system.
+This gets the latest curl code downloaded and unpacked in a directory on your
+local system.
diff --git a/source/contributing.md b/source/contributing.md
index 323e3a9829..d1a2b3393b 100644
--- a/source/contributing.md
+++ b/source/contributing.md
@@ -14,7 +14,7 @@ infrastructure), web content, user support and more.
 
 Send your changes or suggestions to the team and by working together we can
 fix problems, improve functionality, clarify documentation, add features or
-make anything else you help out with land in the proper place. We will make sure
+make anything else you help out with land in the proper place. We make sure
 improved code and docs get merged into the source tree properly and other
 sorts of contributions are suitable received.
 
@@ -48,14 +48,14 @@ Of course, you can also add contents to the project that is not code, like
 documentation, graphics or website contents, but the general rules apply
 equally to that.
 
-If you are fixing a problem you have or a problem that others are reporting, we
-will be thrilled to receive your fix and merge it as soon as possible,
+If you are fixing a problem you have or a problem that others are reporting,
+we are thrilled to receive your fixes and merge them as soon as possible.
 
 ## What not to add
 
 There are no good rules that say what features you can or cannot add or that
-we will never accept, but let me instead try to mention a few things you
-should avoid to get less friction and to be successful, faster:
+we never accept, but let me instead try to mention a few things you should
+avoid to get less friction and to be successful, faster:
 
 - Do not write up a huge patch first and then send it to the list for
   discussion. Always start out by discussing on the list, and send your
@@ -66,7 +66,7 @@ should avoid to get less friction and to be successful, faster:
 
 - When introducing things in the code, you need to follow the style and
   architecture that already exists. When you add code to the ordinary
   transfer code path, it must, for example, work asynchronously in a non-blocking
-  manner. We will not accept new code that introduces blocking behaviors—we
+  manner. We do not accept new code that introduces blocking behaviors—we
   already have too many of those that we have not managed to remove yet.
 
 - Quick hacks or dirty solutions that have a high risk of not working on
@@ -88,7 +88,7 @@ While git is sometimes not the easiest tool to learn and master, all the basic
 steps a casual developer and contributor needs to know are straight-forward
 and do not take much time or effort to learn.
 
-This book will not help you learn git. All software developers in this day and
+This book does not help you learn git. All software developers in this day and
 age should learn git anyway.
 
 The curl git tree can be browsed with a web browser on our GitHub page at
diff --git a/source/layout.md b/source/layout.md
index ba7bd20ba6..1d46192520 100644
--- a/source/layout.md
+++ b/source/layout.md
@@ -7,7 +7,7 @@ component of the curl command-line tool.
 
 ## root
 
 We try to keep the number of files in the source tree root to a minimum. You
-will see a slight difference in files if you check a release archive compared
+might see a slight difference in files if you check a release archive compared
 to what is stored in the git repository as several files are generated by the
 release scripts.
 
diff --git a/source/opensource/license.md b/source/opensource/license.md
index fff82c4ca9..4e21f342a4 100644
--- a/source/opensource/license.md
+++ b/source/opensource/license.md
@@ -34,4 +34,4 @@ project—but you may not claim that you wrote it.
 
 Early on in the project we iterated over a few different other licenses before
 we settled on this. We started out GPL, then tried MPL and landed on this MIT
-derivative. We will never change the license again.
+derivative. We do not intend to ever change the license again.
diff --git a/source/reportvuln.md b/source/reportvuln.md
index 7c48046c46..ea860dea9a 100644
--- a/source/reportvuln.md
+++ b/source/reportvuln.md
@@ -13,10 +13,10 @@ The typical process for handling a new security vulnerability is as follows.
 
 No information should be made public about a vulnerability until it is
 formally announced at the end of this process. That means, for example, that a
-bug tracker entry must NOT be created to track the issue since that will make
-the issue public and it should not be discussed on any of the project's public
-mailing lists. Also messages associated with any commits should not make
-any reference to the security nature of the commit if done prior to the public
+bug tracker entry must NOT be created to track the issue since that makes the
+issue public and it should not be discussed on any of the project's public
+mailing lists. Also messages associated with any commits should not make any
+reference to the security nature of the commit if done prior to the public
 announcement.
 
 - The person discovering the issue, the reporter, reports the vulnerability on
@@ -91,4 +91,4 @@ working. You must have been around for a good while and you should have no
 plans on vanishing in the near future.
 
 We do not make the list of participants public mostly because it tends to vary
-somewhat over time and a list somewhere will only risk getting outdated.
+somewhat over time and a list somewhere only risks getting outdated.
diff --git a/source/style.md b/source/style.md
index b202853977..6f6db56212 100644
--- a/source/style.md
+++ b/source/style.md
@@ -18,7 +18,7 @@ particularly unusual rules in our set of rules.
 
 We also work hard on writing code that is warning-free on all the major
 platforms and in general on as many platforms as possible. Code that obviously
-will cause warnings will not be accepted as-is.
+causes warnings is not accepted as-is.
## Naming @@ -213,7 +213,7 @@ int size = sizeof(int); Some statements cannot be completed on a single line because the line would be too long, the statement too hard to read, or due to other style guidelines -above. In such a case the statement will span multiple lines. +above. In such a case the statement spans multiple lines. If a continuation line is part of an expression or sub-expression then you should align on the appropriate column so that it is easy to tell what part of diff --git a/usingcurl/connections/local-port.md b/usingcurl/connections/local-port.md index 3fb30963b5..6ea6a7afee 100644 --- a/usingcurl/connections/local-port.md +++ b/usingcurl/connections/local-port.md @@ -15,7 +15,7 @@ For situations like this, you can specify which local ports curl should bind the connection to. You can specify a single port number to use, or a range of ports. We recommend using a range because ports are scarce resources and the exact one you want may already be in use. If you ask for a local port number -(or range) that curl cannot obtain for you, it will exit with a failure. +(or range) that curl cannot obtain for you, it exits with a failure. Also, on most operating systems you cannot bind to port numbers below 1024 without having a higher privilege level (root) and we generally advise diff --git a/usingcurl/connections/name.md b/usingcurl/connections/name.md index 903954918b..e827606a97 100644 --- a/usingcurl/connections/name.md +++ b/usingcurl/connections/name.md @@ -28,14 +28,14 @@ your local machine and you want to have curl ask for the index html: curl -H "Host: www.example.com" http://localhost/ -When setting a custom `Host:` header and using cookies, curl will extract the -custom name and use that as host when matching cookies to send off. +When setting a custom `Host:` header and using cookies, curl extracts the +custom name and uses that as host when matching cookies to send off. The `Host:` header is not enough when communicating with an HTTPS server. 
With HTTPS there is a separate extension field in the TLS protocol called SNI (Server Name Indication) that lets the client tell the server the name of the
-server it wants to talk to. curl will only extract the SNI name to send from
-the given URL.
+server it wants to talk to. curl only extracts the SNI name to send from the
+given URL.
## Provide a custom IP address for a name
@@ -50,16 +50,16 @@ redirects of this sort, which can be handy if the URL you work with uses HTTP redirects or if you just want to have your command line work with multiple URLs.
-`--resolve` inserts the address into curl's DNS cache, so it will effectively
-make curl believe that is the address it got when it resolved the name.
+`--resolve` inserts the address into curl's DNS cache, so it effectively makes
+curl believe that is the address it got when it resolved the name.
-When talking HTTPS, this will send SNI for the name in the URL and curl will
-verify the server's response to make sure it serves for the name in the URL.
+When talking HTTPS, this sends SNI for the name in the URL and curl verifies
+the server's response to make sure it serves for the name in the URL.
The pattern you specify in the option needs to be a host name and its
-corresponding port number and only if that exact pair is used in the URL will
-the address be substituted. For example, if you want to replace a host name in
-an HTTPS URL on its default port number, you need to tell curl it is for port
+corresponding port number and only if that exact pair is used in the URL is
+the address substituted. For example, if you want to replace a host name in an
+HTTPS URL on its default port number, you need to tell curl it is for port
443, like:

 curl --resolve example.com:443:192.168.0.1 https://example.com/

@@ -86,8 +86,8 @@ separately, you can tell curl:

 curl --connect-to www.example.com:80:load1.example.com:80 http://www.example.com

It redirects from a SOURCE NAME + SOURCE PORT to a DESTINATION NAME +
curl will then resolve the `load1.example.com` name and -connect, but in all other ways still assume it is talking to +DESTINATION PORT. curl then resolves the `load1.example.com` name and +connects, but in all other ways still assumes it is talking to `www.example.com`. ## Name resolve tricks with c-ares diff --git a/usingcurl/connections/timeout.md b/usingcurl/connections/timeout.md index c725ef02a8..485cb06f96 100644 --- a/usingcurl/connections/timeout.md +++ b/usingcurl/connections/timeout.md @@ -1,11 +1,11 @@ # Connection timeout -curl will typically make a TCP connection to the host as an initial part of -its network transfer. This TCP connection can fail or be slow, if there are -shaky network conditions or faulty remote servers. +curl typically makes a TCP connection to the host as an initial part of its +network transfer. This TCP connection can fail or be slow, if there are shaky +network conditions or faulty remote servers. -To reduce the impact on your scripts or other use, you can set the maximum time -in seconds which curl will allow for the connection attempt. With +To reduce the impact on your scripts or other use, you can set the maximum +time in seconds which curl allows for the connection attempt. With `--connect-timeout` you tell curl the maximum time to allow for connecting, and if curl has not connected in that time it returns a failure. diff --git a/usingcurl/copyas.md b/usingcurl/copyas.md index 1e052a3f3a..d6100248aa 100644 --- a/usingcurl/copyas.md +++ b/usingcurl/copyas.md @@ -12,7 +12,16 @@ You get the site shown with Firefox's network tools. You then right-click on the ## From Chrome and Edge -When you pop up the More tools->Developer mode in Chrome or Edge, and you select the Network tab you see the HTTP traffic used to get the resources of the site. On the line of the specific resource you are interested in, you right-click with the mouse and you select "Copy as cURL" and it will generate a command line for you in your clipboard. 
Paste that in a shell to get a curl command line that makes the transfer. This feature is available by default in all Chrome and Chromium installations. _(Note: Chromium browsers in Windows may generate an incorrect command line that is misquoted due to a_ [_bug_](https://bugs.chromium.org/p/chromium/issues/detail?id=1242803) _in Chromium)._ +When you pop up the More tools->Developer mode in Chrome or Edge, and you +select the Network tab you see the HTTP traffic used to get the resources of +the site. On the line of the specific resource you are interested in, you +right-click with the mouse and you select "Copy as cURL" and it generates a +command line for you in your clipboard. Paste that in a shell to get a curl +command line that makes the transfer. This feature is available by default in +all Chrome and Chromium installations. _(Note: Chromium browsers in Windows +may generate an incorrect command line that is misquoted due to a_ +[_bug_](https://bugs.chromium.org/p/chromium/issues/detail?id=1242803) _in +Chromium)._ ![copy as curl with Chrome](chrome-copy-as-curl.png) @@ -32,8 +41,18 @@ If this is something you would like to get done more often, you probably find us ## Not perfect -These methods all give you a command line to reproduce their HTTP transfers. You will also learn they are still often not the perfect solution to your problems. Why? Well mostly because these tools are written to rerun the _exact_ same request that you copied, while you often want to rerun the same logic but not sending an exact copy of the same cookies and file contents etc. - -These tools will give you command lines with static and fixed cookie contents to send in the request, because that is the contents of the cookies that were sent in the browser's requests. You will most likely want to rewrite the command line to dynamically adapt to whatever the content is in the cookie that the server told you in a previous response. Etc. 
- -The copy as curl functionality is also often notoriously bad at using `-F` and instead they provide handcrafted `--data-binary` solutions including the mime separator strings etc. +These methods all give you a command line to reproduce their HTTP transfers. +They are often not the perfect solution to your problems. Why? Well mostly +because these tools are written to rerun the _exact_ same request that you +copied, while you often want to rerun the same logic but not sending an exact +copy of the same cookies and file contents etc. + +These tools give you command lines with static and fixed cookie contents to +send in the request, because that is the contents of the cookies that were +sent in the browser's requests. You most likely want to rewrite the command +line to dynamically adapt to whatever the content is in the cookie that the +server told you in a previous response. Etc. + +The copy as curl functionality is also often notoriously bad at using `-F` and +instead they provide handcrafted `--data-binary` solutions including the mime +separator strings etc. diff --git a/usingcurl/downloads/browsers.md b/usingcurl/downloads/browsers.md index 3db97012ea..8cfffe7534 100644 --- a/usingcurl/downloads/browsers.md +++ b/usingcurl/downloads/browsers.md @@ -12,18 +12,18 @@ all what you see in your browser window. Curl only gets exactly what you ask it to get and it never parses the actual content—the data—that the server delivers. A browser gets data and it activates different parsers depending on what kind of content it thinks it -gets. For example, if the data is HTML it will parse it to display a web page -and possibly download other sub resources such as images, JavaScript and CSS -files. When curl downloads HTML it will just get that single HTML resource, -even if it, when parsed by a browser, would trigger a whole busload of more +gets. 
For example, if the data is HTML it parses it to display a web page and +possibly download other sub resources such as images, JavaScript and CSS +files. When curl downloads HTML it just gets that single HTML resource, even +if it, when parsed by a browser, would trigger a whole busload of more downloads. If you want curl to download any sub-resources as well, you need to pass those URLs to curl and ask it to get those, just like any other URLs. Clients also differ in how they send their requests, and some aspects of a request for a resource include, for example, format preferences, asking for compressed data, or just telling the server from which previous page we are -"coming from". curl's requests will differ a little or a lot from how your -browser sends its requests. +"coming from". curl's requests differ a little or a lot from how your browser +sends its requests. ## Server differences @@ -47,10 +47,10 @@ this with permission from the server owners or admins. ## Intermediaries' fiddlings -Intermediaries are proxies, explicit or implicit ones. Some environments will -force you to use one or you may choose to use one for various reasons, but -there are also the transparent ones that will intercept your network traffic -silently and proxy it for you no matter what you want. +Intermediaries are proxies, explicit or implicit ones. Some environments force +you to use one or you may choose to use one for various reasons, but there are +also the transparent ones that intercept your network traffic silently and +proxy it for you no matter what you want. Proxies are "middle men" that terminate the traffic and then act on your behalf to the remote server. 
This can introduce all sorts of explicit diff --git a/usingcurl/downloads/charsets.md b/usingcurl/downloads/charsets.md index 0f5f3e8256..e1d70490b1 100644 --- a/usingcurl/downloads/charsets.md +++ b/usingcurl/downloads/charsets.md @@ -1,9 +1,9 @@ # HTML and charsets -curl will download the exact binary data that the server sends. This might be -of importance to you in case, for example, you download an HTML page or -other text data that uses a certain character encoding that your browser then -displays as expected. curl will then not translate the arriving data. +curl downloads the exact binary data that the server sends. This might be of +importance to you in case, for example, you download an HTML page or other +text data that uses a certain character encoding that your browser then +displays as expected. curl does not translate the arriving data. A common example where this causes some surprising results is when a user downloads a web page with something like: diff --git a/usingcurl/downloads/compression.md b/usingcurl/downloads/compression.md index e583e5cda3..364aadec8f 100644 --- a/usingcurl/downloads/compression.md +++ b/usingcurl/downloads/compression.md @@ -2,7 +2,7 @@ curl allows you to ask HTTP and HTTPS servers to provide compressed versions of the data and then perform automatic decompression of it on arrival. In -situations where bandwidth is more limited than CPU this will help you receive +situations where bandwidth is more limited than CPU this helps you receive more data in a shorter amount of time. HTTP compression can be done using two different mechanisms, one which might @@ -14,9 +14,9 @@ curl to use this with the `--compressed` option: curl --compressed http://example.com/ With this option enabled (and if the server supports it) it delivers the data -in a compressed way and curl will decompress it before saving it or sending it -to stdout. 
This usually means that as a user you do not really see or
-experience the compression other than possibly noticing a faster transfer.
+in a compressed way and curl decompresses it before saving it or sending it to
+stdout. This usually means that as a user you do not really see or experience
+the compression other than possibly noticing a faster transfer.
The `--compressed` option asks for Content-Encoding compression using one of the supported compression algorithms. There is also the rare
@@ -38,9 +38,8 @@ traffic in either direction.
HTTP/1.x headers cannot be compressed. HTTP/2 and HTTP/3 headers on the other hand are always compressed and cannot be sent uncompressed. However, as a
-convenience to users, curl will always show the headers uncompressed in a
-style similar to how they look for HTTP/1.x to make the output and look
-consistent.
+convenience to users, curl always shows the headers uncompressed in a style
+similar to how they look for HTTP/1.x to make the output look consistent.
## Uploads
diff --git a/usingcurl/downloads/content-disp.md b/usingcurl/downloads/content-disp.md index 182291c9b5..4f2fae5e20 100644 --- a/usingcurl/downloads/content-disp.md +++ b/usingcurl/downloads/content-disp.md @@ -11,12 +11,12 @@ saving using that name.
`-J` has some problems and risks associated with it that users need to be aware of:
-1. It will only use the rightmost part of the suggested filename, so any path
-or directories the server suggests will be stripped out.
+1. It only uses the rightmost part of the suggested filename, so any path or
+directories the server suggests are stripped out.
-2. Since the filename is entirely selected by the server, curl will, of
-course, overwrite any preexisting local file in your current directory if the
-server happens to provide such a filename.
+2. 
Since the filename is entirely selected by the server, curl might overwrite
+any preexisting local file in your current directory if the server happens to
+provide such a filename (unless you use `--no-clobber`).
3. filename encoding and character sets issues. curl does not decode the name in any way, so you may end up with a URL-encoded filename where a browser
diff --git a/usingcurl/downloads/max-filesize.md b/usingcurl/downloads/max-filesize.md index 5f9bb1c35d..136ad79b91 100644 --- a/usingcurl/downloads/max-filesize.md +++ b/usingcurl/downloads/max-filesize.md @@ -1,16 +1,16 @@ # Maximum filesize
-When you want to make sure your curl command line will not try to download a
-too-large file, you can instruct curl to stop before doing that, if it knows the
-size before the transfer starts! Maybe that would use too much bandwidth,
-take too long time or you do not have enough space on your hard drive:
+When you want to make sure your curl command line does not download a
+too-large file, instruct curl to stop before doing that, if it knows the size
+before the transfer starts! Maybe that would use too much bandwidth, take too
+long, or you do not have enough space on your hard drive:

 curl --max-filesize 100000 https://example.com/

-Give curl the largest download you will accept in number of bytes and if curl
-can figure out the size before the transfer starts it will abort before trying
-to download something larger.
+Give curl the largest download you can accept in number of bytes and if curl
+can figure out the size before the transfer starts it aborts before trying to
+download something larger.
There are many situations in which curl cannot figure out the size at the time
-the transfer starts and this option will not affect those transfers, even if
-they may end up larger than the specified amount.
+the transfer starts. Such transfers are then aborted only when they
+actually reach that limit.
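The abort-at-the-limit behavior the hunk above describes can be tried offline with a `file://` URL, where curl knows the size before the transfer starts. A sketch with made-up `/tmp` paths; per the curl manual, an exceeded `--max-filesize` makes curl exit with code 63:

```shell
# Create a 5000 byte local file, then ask curl to copy it while only
# accepting up to 1000 bytes. The size of a file:// resource is known
# before the transfer, so curl refuses it and exits with a failure.
head -c 5000 /dev/zero > /tmp/big.dat
curl --max-filesize 1000 -o /tmp/copy.dat file:///tmp/big.dat
echo "curl exit code: $?"
```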
diff --git a/usingcurl/downloads/multiple.md b/usingcurl/downloads/multiple.md index 9ace9f4af9..f4d535ddc7 100644 --- a/usingcurl/downloads/multiple.md +++ b/usingcurl/downloads/multiple.md @@ -5,9 +5,9 @@ of course, times when you want to store these downloads in nicely named local files. The key to understanding this is that each download URL needs its own "storage -instruction". Without said "storage instruction", curl will default to sending -the data to stdout. If you ask for two URLs and only tell curl where to save -the first URL, the second one is sent to stdout. Like this: +instruction". Without said "storage instruction", curl defaults to sending the +data to stdout. If you ask for two URLs and only tell curl where to save the +first URL, the second one is sent to stdout. Like this: curl -o one.html http://example.com/1 http://example.com/2 @@ -30,6 +30,6 @@ download multiple URLs, use more of them: ## Parallel -Unless told otherwise, curl will download all given URLs in a serial fashion, -one by one. By using `-Z` (or `--parallel`) curl can instead do the transfers +Unless told otherwise, curl downloads all given URLs in a serial fashion, one +by one. By using `-Z` (or `--parallel`) curl can instead do the transfers [in parallel](../../cmdline/urls/parallel.md): several ones at once. diff --git a/usingcurl/downloads/redirects.md b/usingcurl/downloads/redirects.md index 6441166e9d..c31d0a5225 100644 --- a/usingcurl/downloads/redirects.md +++ b/usingcurl/downloads/redirects.md @@ -9,8 +9,8 @@ use of -o or -O superfluous. curl http://example.com/ > example.html Redirecting output to a file redirects all output from curl to that file, so -even if you ask to transfer more than one URL to stdout, redirecting the output -will get all the URLs' output stored in that single file. +even if you ask to transfer more than one URL to stdout, redirecting the +output gets all the URLs' output stored in that single file. 
 curl http://example.com/1 http://example.com/2 > files

diff --git a/usingcurl/downloads/resume.md b/usingcurl/downloads/resume.md index e69e1b8f38..9e809f34c8 100644 --- a/usingcurl/downloads/resume.md +++ b/usingcurl/downloads/resume.md @@ -8,8 +8,8 @@ actually having anything already locally present.
curl supports resumed downloads on several protocols. Tell it where to start the transfer with the `-C, --continue-at` option that takes either a plain numerical byte counter offset where to start or the string `-` that asks curl
-to figure it out itself based on what it knows. When using `-`, curl will use
-the destination filename to figure out how much data that is already present
+to figure it out itself based on what it knows. When using `-`, curl uses the
+destination filename to figure out how much data is already present
locally and use that as an offset when asking for more data from the server.
diff --git a/usingcurl/downloads/retry.md b/usingcurl/downloads/retry.md index 15b7dde811..72eb43d207 100644 --- a/usingcurl/downloads/retry.md +++ b/usingcurl/downloads/retry.md @@ -1,23 +1,23 @@ # Retry
-Normally curl will only make a single attempt to perform a transfer and return
-an error if not successful. Using the `--retry` option you can tell curl to
-retry certain failed transfers.
+Normally curl only makes a single attempt to perform a transfer and returns an
+error if not successful. Using the `--retry` option you can tell curl to retry
+certain failed transfers.
If a transient error is returned when curl tries to perform a transfer, it
-will retry this number of times before giving up. Setting the number to 0
-makes curl do no retries (which is the default). Transient error means
-either: a timeout, an FTP 4xx response code or an HTTP 5xx response code.
+retries this number of times before giving up. Setting the number to 0 makes
+curl do no retries (which is the default). 
Transient error means either: a
+timeout, an FTP 4xx response code or an HTTP 5xx response code.
## Tweak your retries
-When curl is about to retry a transfer, it will first wait one second and then
-for all forthcoming retries it will double the waiting time until it reaches
-10 minutes which then will be the delay between the rest of the retries. Using
+When curl is about to retry a transfer, it first waits one second and then for
+all forthcoming retries it doubles the waiting time until it reaches 10
+minutes which then is the delay between the rest of the retries. Using
`--retry-delay` you can disable this exponential backoff algorithm and set your own delay between the attempts. With `--retry-max-time` you cap the total
-time allowed for retries. The `--max-time` option will still specify the
-longest time a single of these transfers is allowed to spend.
+time allowed for retries. The `--max-time` option still specifies the longest
+time a single one of these transfers is allowed to spend.
Make curl retry up to 5 times, but no more than two minutes:
@@ -44,7 +44,7 @@ retry:
## Retry on any and all errors
The most aggressive form of retry is for the cases where you **know** that the
-URL is supposed to work and you will not tolerate any failures. Using
+URL is supposed to work and you do not tolerate any failures. Using
`--retry-all-errors` makes curl treat all transfer failures as reason for retry.
diff --git a/usingcurl/downloads/storing.md b/usingcurl/downloads/storing.md index 8ab53d56a0..b5ca9de327 100644 --- a/usingcurl/downloads/storing.md +++ b/usingcurl/downloads/storing.md @@ -1,10 +1,10 @@ # Storing downloads
-If you try the example download as in the previous section, you will notice
-that curl will output the downloaded data to stdout unless told to do
-something else. Outputting data to stdout is really useful when you want to
-pipe it into another program or similar, but it is not always the optimal way
-to deal with your downloads. 
+If you try the example download as in the previous section, you might notice +that curl outputs the downloaded data to stdout unless told to do something +else. Outputting data to stdout is really useful when you want to pipe it into +another program or similar, but it is not always the optimal way to deal with +your downloads. Give curl a specific filename to save the download in with `-o [filename]` (with `--output` as the long version of the option), where filename is either @@ -34,20 +34,19 @@ follow. ## Overwriting When curl downloads a remote resource into a local filename as described -above, it will overwrite that file in case it already existed. It will -*clobber* it. +above, it overwrites that file in case it already existed. It *clobbers* it. curl offers a way to avoid this clobbering: `--no-clobber`. When using this option, and curl finds that there already exists a file with -the given name, curl instead appends a period plus a number to the filename -in an attempt to find a name that is not already used. It will start with `1` -and then continue trying until it reaches `100` and pick the first available -one. +the given name, curl instead appends a period plus a number to the filename in +an attempt to find a name that is not already used. It starts with `1` and +then continues trying numbers until it reaches `100` and picks the first +available one. For example, if you ask curl to download a URL to `picture.png`, and in that directory there already are two files called `picture.png` and -`picture.png.1`, the following will create save the file as `picture.png.2`: +`picture.png.1`, the following saves the file as `picture.png.2`: curl --no-clobber https://example.com/image -o picture.png @@ -58,7 +57,7 @@ used. ## Leftovers on errors By default, if curl runs into a problem during a download and exits with an -error, the partially transferred file will be left as-is. It could be a small +error, the partially transferred file is left as-is. 
It could be a small fraction of the intended file, or it could be almost the entire thing. It is up to the user to decide what to do with the leftovers. diff --git a/usingcurl/downloads/url-named.md b/usingcurl/downloads/url-named.md index 2f92e2c245..d2c5f3a0f1 100644 --- a/usingcurl/downloads/url-named.md +++ b/usingcurl/downloads/url-named.md @@ -15,7 +15,7 @@ name version. The -O option selects the local filename to use by picking the filename part of the URL that you provide. This is important. You specify the URL and curl picks the name from this data. If the site redirects curl further (and if you tell curl to follow redirects), it does not change the filename -curl will use for storing this. +curl uses for storing this. ## Use the URL's filename part for all URLs diff --git a/usingcurl/downloads/whatis.md b/usingcurl/downloads/whatis.md index 53256f2b90..fbcb60c5be 100644 --- a/usingcurl/downloads/whatis.md +++ b/usingcurl/downloads/whatis.md @@ -16,10 +16,9 @@ A request for a resource is protocol-specific so an `FTP://` URL works differently than an `HTTP://` URL or an `SFTP://` URL. A URL without a path part, that is a URL that has a host name part only (like -the `http://example.com` example above) will get a slash ('/') appended to it -internally and then that is the resource curl will ask for from the server. +the `http://example.com` example above) gets a slash ('/') appended to it +internally and then that is the resource curl asks for from the server. -If you specify multiple URLs on the command line, curl will download each URL -one by one. It will not start the second transfer until the first one is +If you specify multiple URLs on the command line, curl downloads each URL one +by one. It does not start the second transfer until the previous one is complete, etc. 
-
diff --git a/usingcurl/ipfs.md b/usingcurl/ipfs.md index 6c49b40c70..a1f3b9cc5e 100644 --- a/usingcurl/ipfs.md +++ b/usingcurl/ipfs.md @@ -15,9 +15,9 @@ If you opt to go for a remote gateway you should be aware that you completely trust the gateway. This is fine in local gateways as you host it yourself. With remote gateways there could potentially be a malicious actor returning you data that does not match the request you made, inspecting or even interfering
-with the request. You will not notice this when getting IPFS using curl.
+with the request. You might not notice this when getting IPFS using curl.
-If the `--ipfs-gateway` option is not used, curl will check the `IPFS_GATEWAY`
+If the `--ipfs-gateway` option is not used, curl checks the `IPFS_GATEWAY`
environment variable for guidance and if not set, the `~/.ipfs/gateway` file that can be used to identify the gateway.
diff --git a/usingcurl/netrc.md b/usingcurl/netrc.md index af2ede64f6..c74414d41c 100644 --- a/usingcurl/netrc.md +++ b/usingcurl/netrc.md @@ -1,6 +1,11 @@ # .netrc
-Unix systems have for a long time offered a way for users to store their user name and password for remote FTP servers. ftp clients have supported this for decades and this way allowed users to quickly login to known servers without manually having to reenter the credentials each time. The `.netrc` file is typically stored in a user's home directory. (On Windows, curl will look for it with the name `_netrc`).
+Unix systems have for a long time offered a way for users to store their user
+name and password for remote FTP servers. ftp clients have supported this for
+decades and this way allowed users to quickly log in to known servers without
+manually having to reenter the credentials each time. The `.netrc` file is
+typically stored in a user's home directory. (On Windows, curl looks for it
+with the name `_netrc`).
This being a widespread and well used concept, curl also supports it—if you ask it to. 
curl does not, however, limit this feature to FTP, but can get credentials for machines for any protocol with this. See further below for how. @@ -28,11 +33,17 @@ The user name string for the remote machine. You cannot use a space in the name. **password string** -Supply a password. If this token is present, curl will supply the specified string if the remote server requires a password as part of the login process. Note that if this token is present in the .netrc file you really **should** make sure the file is not readable by anyone besides the user. You cannot use a space when you enter the password. +Supply a password. If this token is present, curl supplies the specified +string if the remote server requires a password as part of the login +process. Note that if this token is present in the .netrc file you really +**should** make sure the file is not readable by anyone besides the user. You +cannot use a space when you enter the password. **macdef name** -Define a macro. This is **not supported by curl**. In order for the rest of the `.netrc` to still work fine, curl will properly skip every definition done with `macdef` that it finds. +Define a macro. This is **not supported by curl**. In order for the rest of +the `.netrc` to still work fine, curl properly skips every definition done +with `macdef` that it finds. ## Example @@ -52,7 +63,8 @@ machine example.com login daniel password qwerty ## User name matching -When a URL is provided with a user name and .netrc is used, then curl will try to find the matching password for that machine and login combination. +When a URL is provided with a user name and .netrc is used, then curl tries to +find the matching password for that machine and login combination. 
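The machine/login/password matching described above can be sketched in shell. This is a hedged illustration only, not curl's actual `.netrc` parser, and it handles only single-line entries like the example in this chapter; the file path and credentials are made up:

```shell
# Sketch (not curl's real parser): look up the password for a given
# machine + login pair in a .netrc-style file with one entry per line.
netrc_password() {
  # $1 = file, $2 = machine, $3 = login
  awk -v m="$2" -v l="$3" '
    $1 == "machine" && $2 == m && $3 == "login" && $4 == l && $5 == "password" {
      print $6
    }' "$1"
}

printf 'machine example.com login daniel password qwerty\n' > /tmp/netrc-demo
netrc_password /tmp/netrc-demo example.com daniel   # → qwerty
```

curl itself does this matching internally; the sketch only shows why both the machine and the login have to match before a password is picked.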
## Enable netrc diff --git a/usingcurl/persist.md b/usingcurl/persist.md index ec0ce806ce..ff5cbf2388 100644 --- a/usingcurl/persist.md +++ b/usingcurl/persist.md @@ -3,8 +3,8 @@ When setting up connections to sites, curl keeps old connections around for a while so that if the next transfer is done using the same host as a previous transfer, it can reuse the same connection again and thus save a lot of -time. We call this persistent connections. curl will always try to keep -connections alive and reuse existing connections as far as it can. +time. We call this persistent connections. curl always tries to keep +connections alive and reuses existing connections as far as it can. Connections are kept in the *connection pool*, sometimes also called the *connection cache*. diff --git a/usingcurl/proxies.md b/usingcurl/proxies.md index 78a2ac28ab..c96c987119 100644 --- a/usingcurl/proxies.md +++ b/usingcurl/proxies.md @@ -6,7 +6,7 @@ client. You can also see it as a middle man that sits between you and the server you want to work with, a middle man that you connect to instead of the actual remote server. You ask the proxy to perform your desired operation for you and -then it will run off and do that and then return the data to you. +then it runs off and does that and then returns the data to you. There are several different types of proxies and we shall list and discuss them in subsections below. diff --git a/usingcurl/proxies/auth.md b/usingcurl/proxies/auth.md index 19aa7c9d41..c393229f65 100644 --- a/usingcurl/proxies/auth.md +++ b/usingcurl/proxies/auth.md @@ -1,8 +1,9 @@ # Proxy authentication HTTP and SOCKS proxies can require authentication, so curl then needs to -provide the proper credentials to the proxy to be allowed to use it, and -failing to do will only make the proxy return HTTP responses using code 407. +provide the proper credentials to the proxy to be allowed to use it.
Failing +to do so (or providing the wrong credentials) makes the proxy return HTTP +responses using code 407. Authentication for proxies is similar to "normal" HTTP authentication. It is separate from the server authentication to allow clients to independently use @@ -13,12 +14,11 @@ with the `-U user:password` or `--proxy-user user:password` option: curl -U daniel:secr3t -x myproxy:80 http://example.com -This example will default to using the Basic authentication scheme. Some -proxies will require another authentication scheme (and the headers that are -returned when you get a 407 response will tell you which) and then you can ask -for a specific method with `--proxy-digest`, `--proxy-negotiate`, -`--proxy-ntlm`. The above example command again, but asking for NTLM auth with -the proxy: +This example defaults to using the Basic authentication scheme. Some proxies +require other authentication schemes (and the headers that are returned when +you get a 407 response tell you which) and then you can ask for a specific +method with `--proxy-digest`, `--proxy-negotiate`, `--proxy-ntlm`. The above +example command again, but asking for NTLM auth with the proxy: curl -U daniel:secr3t -x myproxy:80 http://example.com --proxy-ntlm diff --git a/usingcurl/proxies/captive.md b/usingcurl/proxies/captive.md index f32e6cbb32..beba8af5ed 100644 --- a/usingcurl/proxies/captive.md +++ b/usingcurl/proxies/captive.md @@ -5,12 +5,12 @@ you want to access. A "captive portal" is one of these systems that are popular to use in hotels, airports and for other sorts of network access to a larger audience. The -portal will "capture" all network traffic and redirect you to a login web page +portal "captures" all network traffic and redirects you to a login web page until you have either clicked OK and verified that you have read their conditions or perhaps even made sure that you have paid plenty of money for the right to use the network.
-curl's traffic will of course also captured by such portals and often the best +curl's traffic is of course also captured by such portals and often the best way is to use a browser to accept the conditions and "get rid of" the portal since from then on they often allow all other traffic originating from that same machine (MAC address) for a period of time. diff --git a/usingcurl/proxies/discover.md b/usingcurl/proxies/discover.md index a0e149754f..6c8a43d88f 100644 --- a/usingcurl/proxies/discover.md +++ b/usingcurl/proxies/discover.md @@ -8,8 +8,8 @@ your network for policy or technical reasons. In the networking space there are a few methods for the automatic detection of proxies and how to connect to them, but none of those methods are truly universal and curl supports none of them. Furthermore, when you communicate to -the outside world through a proxy that often means that you have to put a lot of -trust on the proxy as it will be able to see and modify all the non-secure +the outside world through a proxy that often means that you have to put a lot +of trust on the proxy as it is able to see and modify all the non-secure network traffic you send or get through it. That trust is not easy to assume automatically. diff --git a/usingcurl/proxies/env.md b/usingcurl/proxies/env.md index c8accb9cfa..4da24ce2fe 100644 --- a/usingcurl/proxies/env.md +++ b/usingcurl/proxies/env.md @@ -16,7 +16,7 @@ While the above example shows HTTP, you can, of course, also set `ftp_proxy`, `http_proxy` can also be specified in uppercase, like `HTTPS_PROXY`. To set a single variable that controls *all* protocols, the `ALL_PROXY` -exists. If a specific protocol variable one exists, such a one will take +exists. If a protocol-specific variable exists, it takes precedence. ## No proxy @@ -27,9 +27,9 @@ then done with the `NO_PROXY` variable. Set that to a comma- separated list of host names that should not use a proxy when being accessed.
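As a sketch of how these two variables combine in a shell session (the proxy host and domain names are placeholders, not real servers):

```shell
# Route every protocol through one proxy, except hosts excluded via NO_PROXY.
# proxy.example.com is a made-up placeholder.
export ALL_PROXY="http://proxy.example.com:8080"
export NO_PROXY="localhost,.example.com"

# curl http://www.example.com/  <- would bypass the proxy (matches .example.com)
# curl http://curl.se/          <- would go through the proxy
```

Environment variables like these affect every curl invocation in the session, so they are best suited for setups where the proxy choice rarely changes.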
You can set `NO_PROXY` to be a single asterisk ('\*') to match all hosts. -If a name in the exclusion list starts with a dot (`.`), then the name will -match that entire domain. For example `.example.com` will match both -`www.example.com` and `home.example.com` but not `nonexample.com`. +If a name in the exclusion list starts with a dot (`.`), then the name matches +that entire domain. For example `.example.com` matches both `www.example.com` +and `home.example.com` but not `nonexample.com`. As an alternative to the `NO_PROXY` variable, there is also a `--noproxy` command line option that serves the same purpose and works the same way. @@ -49,9 +49,9 @@ environment variables for the script based on the incoming headers in the request. Those environment variables are prefixed with uppercase `HTTP_`. An incoming request to an HTTP server using a request header like `Proxy: -yada` will therefore create the environment variable `HTTP_PROXY` set to -contain `yada` before the CGI script is started. If such a CGI script runs -curl, it is important that curl does not treat that as a proxy to use. +yada` therefore creates the environment variable `HTTP_PROXY` set to contain +`yada` before the CGI script is started. If such a CGI script runs curl, it is +important that curl does not treat that as a proxy to use. Accepting the upper case version of this environment variable has been the source for many security problems in lots of software through times. diff --git a/usingcurl/proxies/headers.md b/usingcurl/proxies/headers.md index 9dd3565830..53fdf46fec 100644 --- a/usingcurl/proxies/headers.md +++ b/usingcurl/proxies/headers.md @@ -3,9 +3,9 @@ When you want to add HTTP headers meant specifically for an HTTP or HTTPS proxy, and not for the remote server, the `--header` option falls short. 
-For example, if you issue an HTTPS request through an HTTP proxy, it will be -done by first issuing a `CONNECT` to the proxy that establishes a tunnel to -the remote server and then it sends the request to that server. That first +For example, if you issue an HTTPS request through an HTTP proxy, it is done +by first issuing a `CONNECT` to the proxy that establishes a tunnel to the +remote server and then it sends the request to that server. That first `CONNECT` is only issued to the proxy and you may want to make sure only that receives your special header, and send another set of custom headers to the remote server. diff --git a/usingcurl/proxies/http.md b/usingcurl/proxies/http.md index b2e6bc3f29..b11355ad66 100644 --- a/usingcurl/proxies/http.md +++ b/usingcurl/proxies/http.md @@ -1,10 +1,10 @@ # HTTP proxy An HTTP proxy is a proxy that the client speaks HTTP with to get the transfer -done. curl will, by default, assume that a host you point out with `-x` or -`--proxy` is an HTTP proxy, and unless you also specify a port number it will -default to port 1080 (and the reason for that particular port number is purely -historical). +done. curl, by default, assumes that a host you point out with `-x` or +`--proxy` is an HTTP proxy, and unless you also specify a port number it +defaults to port 1080 (and the reason for that particular port number is +purely historical). If you want to request the example.com web page using a proxy on 192.168.0.1 port 8080, a command line could look like: @@ -15,9 +15,9 @@ Recall that the proxy receives your request, forwards it to the real server, then reads the response from the server and then hands that back to the client. -If you enable verbose mode with `-v` when talking to a proxy, you will see -that curl connects to the proxy instead of the remote server, and you will see -that it uses a slightly different request line.
+If you enable verbose mode with `-v` when talking to a proxy, it shows that +curl connects to the proxy instead of the remote server, and you might see +that it uses a slightly different request line. ## HTTPS with HTTP proxy @@ -43,7 +43,7 @@ When talking FTP "over" an HTTP proxy, it is usually done by more or less pretending the other protocol works like HTTP and asking the proxy to "get this URL" even if the URL is not using HTTP. This distinction is important because it means that when sent over an HTTP proxy like this, curl does not -really speak FTP even though given an FTP URL; thus FTP-specific features will +really speak FTP even though given an FTP URL; thus FTP-specific features do not work: curl -x http://proxy.example.com:80 ftp://ftp.example.com/file.txt @@ -59,10 +59,10 @@ proxy. You tunnel through an HTTP proxy with curl using `-p` or `--proxytunnel`. When you do HTTPS through a proxy you normally connect through to the default -HTTPS remote TCP port number 443, so therefore you will find that most HTTP -proxies white list and allow connections only to hosts on that port number and -perhaps a few others. Most proxies will deny clients from connecting to just -any random port (for reasons only the proxy administrators know). +HTTPS remote TCP port number 443. Most HTTP proxies white list and allow +connections only to hosts on that port number and perhaps a few others. Most +proxies deny clients from connecting to just any random port (for reasons only +the proxy administrators know).
Still, assuming that the HTTP proxy allows it, you can ask it to tunnel through to a remote server on any port number so you can do other protocols diff --git a/usingcurl/proxies/https.md b/usingcurl/proxies/https.md index b7efc1dddd..7a9783fe20 100644 --- a/usingcurl/proxies/https.md +++ b/usingcurl/proxies/https.md @@ -10,8 +10,7 @@ One solution for that is to use an HTTPS proxy, speaking HTTPS to the proxy, which then establishes a secure and encrypted connection that is safe from easy surveillance. -When an HTTPS proxy is specified, the default port used on that host will be -443. +When an HTTPS proxy is specified, the default port used on that host is 443. In most other ways, HTTPS proxies work like [HTTP proxies](http.md). diff --git a/usingcurl/reademail.md b/usingcurl/reademail.md index 417aaab7e1..02ba42e72e 100644 --- a/usingcurl/reademail.md +++ b/usingcurl/reademail.md @@ -55,8 +55,8 @@ or "Implicit" SSL means that the connection gets secured already at first connect, which you make curl attempt by specifying a scheme in the URL that uses SSL. In this case either `pop3s://` or `imaps://`. For such connections, -curl will insist on connecting and negotiating a TLS connection already from -the start, or it will fail its operation. +curl insists on connecting and negotiating a TLS connection already from the +start, or it fails its operation. The previous explicit examples done with implicit SSL: diff --git a/usingcurl/returns.md b/usingcurl/returns.md index f36df90b95..08780521b0 100644 --- a/usingcurl/returns.md +++ b/usingcurl/returns.md @@ -1,14 +1,14 @@ # Exit status A lot of effort has gone into the project to make curl return a usable exit -code when something goes wrong and it will always return 0 (zero) when the +code when something goes wrong and it always returns 0 (zero) when the operation went as planned. If you write a shell script or batch file that invokes curl, you can always check the return code to detect problems in the invoked command. 
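One way a script can act on the exit status, as a minimal sketch: the mapping below covers only a few of the codes from the list in this chapter, with paraphrased messages, and the usage line shows the intended (network-requiring) invocation as a comment:

```shell
# Sketch: map a few curl exit statuses to human-readable messages.
# Typical use: curl --silent -o /dev/null "$url"; describe $?
describe() {
  case $1 in
    0)  echo "success" ;;
    6)  echo "could not resolve host" ;;
    7)  echo "failed to connect" ;;
    28) echo "operation timed out" ;;
    *)  echo "failed with exit status $1" ;;
  esac
}

describe 28   # → operation timed out
```

A real script would usually branch on the status (retry on timeouts, give up on resolve failures, and so on) rather than just print a message.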
Below, you -will find a list of return codes as of the time of this writing. Over time we -tend to slowly add new ones so if you get a code back not listed here, please -refer to more updated curl documentation for aid. +find a list of return codes as of the time of this writing. Over time we tend +to slowly add new ones so if you get a code back not listed here, please refer +to more updated curl documentation for aid. A basic Unix shell script could look like something like this: @@ -64,9 +64,9 @@ A basic Unix shell script could look like something like this: 8. Unknown FTP server response. The server sent data curl could not parse. This is either because of a bug in curl, a bug in the server or because the server is using an FTP protocol extension that curl does not - support. The only real work-around for this is to tweak curl options to - try it to use other FTP commands that perhaps will not get this unknown - server response back. + support. The only real work-around for this is to tweak curl options to + make it try other FTP commands that perhaps do not get this unknown server + response back. 9. FTP access denied. The server denied login or denied access to the particular resource or directory you wanted to reach. Most often you tried @@ -115,11 +115,11 @@ A basic Unix shell script could look like something like this: before it is started as otherwise the transfer cannot work. 18. Partial file. Only a part of the file was transferred. When the transfer - is considered complete, curl will verify that it actually received the - same amount of data that it was told before-hand that it was going to - get. If the two numbers do not match, this is the error code. It could mean - that curl got fewer bytes than advertised or that it got more. curl itself - cannot know which number that is wrong or which is correct. If any.
+ is considered complete, curl verifies that it actually received the same + amount of data that it was told before-hand that it was going to get. If the + two numbers do not match, this is the error code. It could mean that curl + got fewer bytes than advertised or that it got more. curl itself cannot know + which number is wrong or which is correct, if any. 19. FTP could not download/access the given file. The RETR (or similar) command failed. curl got an error from the server when trying to download @@ -151,9 +151,9 @@ A basic Unix shell script could look like something like this: constraints. This error can happen for many protocols. 26. Read error. Various reading problems. The inverse to exit status 23. When - curl sends data to a server, it reads data chunk by chunk from a local - file or stdin or similar, and if that reading fails in some way this is - the exit status curl will return. + curl sends data to a server, it reads data chunk by chunk from a local file + or stdin or similar, and if that reading fails in some way this is the exit + status curl returns. 27. Out of memory. A memory allocation request failed. curl needed to allocate more memory than what the system was willing to give it and curl @@ -228,9 +228,9 @@ A basic Unix shell script could look like something like this: 44. **Not used** 45. Interface error. A specified outgoing network interface could not be - used. curl will typically decide outgoing network and IP addresses by itself - but when explicitly asked to use a specific one that curl cannot use, this - error can occur. + used. curl typically decides outgoing network and IP addresses by itself but + when explicitly asked to use a specific one that curl cannot use, this error + can occur. 46. **Not used** @@ -260,8 +260,8 @@ A basic Unix shell script could look like something like this: details. 52. The server did not reply anything, which in this context is considered an - error.
When an HTTP(S) server responds to an HTTP(S) request, it will always - return *something* as long as it is alive and sound. All valid HTTP + error. When an HTTP(S) server responds to an HTTP(S) request, it always + returns *something* as long as it is alive and sound. All valid HTTP responses have a status line and responses header. Not getting anything at all back is an indication the server is faulty or perhaps that something prevented curl from reaching the right server or that you are trying to @@ -373,7 +373,7 @@ A basic Unix shell script could look like something like this: 88. FTP chunk callback reported error - 89. No connection available, the session will be queued + 89. No connection available, the session is queued 90. SSL public key does not matched pinned public key. Either you provided a bad public key, or the server has changed. @@ -405,10 +405,10 @@ A basic Unix shell script could look like something like this: ## Error message -When curl exits with a non-zero code, it will also output an error message -(unless `--silent` is used). That error message may add some additional -information or circumstances to the exit status number itself so the same error -number can get different error messages. +When curl exits with a non-zero code, it also outputs an error message (unless +`--silent` is used). That error message may add some additional information or +circumstances to the exit status number itself so the same error number can +get different error messages. ## "Not used" @@ -417,6 +417,5 @@ used'. Those are exit status codes that are not used in modern versions of curl but that have been used or were intended to be used in the past. They may be used in a future version of curl. -Additionally, the highest used error status in this list is 92, but there is -no guarantee that a future curl version will not add more exit codes after -that number. 
+Additionally, the highest used error status in this list is 99, but future +curl versions might add more exit codes after that number. diff --git a/usingcurl/scpsftp.md b/usingcurl/scpsftp.md index 66f3799d60..804950c239 100644 --- a/usingcurl/scpsftp.md +++ b/usingcurl/scpsftp.md @@ -31,8 +31,8 @@ a trailing slash: curl sftp://example.com/ -u user Note that both these protocols work with "users" and you do not ask for a file -anonymously or with a standard generic name. Most systems will require that -users authenticate, as outlined below. +anonymously or with a standard generic name. Most systems require that users +authenticate, as outlined below. When requesting a file from an SFTP or SCP URL, the file path given is considered to be the absolute path on the remote server unless you @@ -55,17 +55,17 @@ SFTP URL) is done like this: 2. curl then tries the offered methods one by one until one works or they all failed -curl will attempt to use your public key as found in the `.ssh` subdirectory -in your home directory if the server offers public key authentication. When -doing do, you still need to tell curl which user name to use on the -server. For example, the user 'john' lists the entries in his home directory -on the remote SFTP server called 'sftp.example.com': +curl attempts to use your public key as found in the `.ssh` subdirectory in +your home directory if the server offers public key authentication. When doing +so, you still need to tell curl which user name to use on the server. For +example, the user 'john' lists the entries in his home directory on the remote +SFTP server called 'sftp.example.com': curl -u john: sftp://sftp.example.com/ -If curl cannot authenticate with the public key for any reason, it will -instead attempt to use the user name + password if the server allows it and -the credentials are passed on the command line.
+If curl cannot authenticate with the public key for any reason, it instead +attempts to use the user name + password if the server allows it and the +credentials are passed on the command line. For example, the same user from above has the password `RHvxC6wUA` on a remote system and can download a file via SCP like this: @@ -88,8 +88,8 @@ that the client stores the hashes for known servers is often called `known_hosts` and is put in a dedicated SSH directory. On Linux systems that is usually called `~/.ssh`. -When curl connects to a SFTP and SCP host, it will make sure that the host's -key hash is already present in the known hosts file or it will deny continued +When curl connects to an SFTP or SCP host, it makes sure that the host's key +hash is already present in the known hosts file or it denies continued operation because it cannot trust that the server is the right one. Once the correct hash exists in `known_hosts` curl can perform transfers. diff --git a/usingcurl/smtp.md b/usingcurl/smtp.md index 9326911f1a..9deb6a2f23 100644 --- a/usingcurl/smtp.md +++ b/usingcurl/smtp.md @@ -12,8 +12,8 @@ When sending SMTP with curl, there are two necessary command line options that **must** be used. - You need to tell the server at least one recipient with `--mail-rcpt`. You - can use this option several times and then curl will tell the server that - all those email addresses should receive the email. + can use this option several times and then curl tells the server that all + those email addresses should receive the email. - You need to tell the server which email address that is the sender of the email with `--mail-from`. It is important to realize that this email @@ -74,12 +74,12 @@ You can tell curl to _require_ upgrading to using secure transfers by adding ## The SMTP URL The path part of a SMTP request specifies the host name to present during -communication with the mail server.
If the path is omitted then curl will -attempt to figure out the local computer's host name and use that. However, -this may not return the fully qualified domain name that is required by some -mail servers and specifying this path allows you to set an alternative name, -such as your machine's fully qualified domain name, which you might have -obtained from an external function such as gethostname or getaddrinfo. +communication with the mail server. If the path is omitted then curl attempts +to figure out the local computer's host name and use that. However, this may +not return the fully qualified domain name that is required by some mail +servers and specifying this path allows you to set an alternative name, such +as your machine's fully qualified domain name, which you might have obtained +from an external function such as gethostname or getaddrinfo. To connect to the mail server at `mail.example.com` and send your local computer's host name in the HELO / EHLO command: @@ -96,11 +96,10 @@ to the mail server at `mail.example.com`, use: ## No MX lookup! -When you send email with an ordinary mail client, it will first check for an -MX record for the particular domain you want to send email to. If you send an -email to `joe@example.com`, the client will get the MX records for -`example.com` to learn which mail server(s) to use when sending email to -example.com users. +When you send email with an ordinary mail client, it first checks for an MX +record for the particular domain you want to send email to. If you send an +email to `joe@example.com`, the client gets the MX records for `example.com` +to learn which mail server(s) to use when sending email to example.com users. curl does no MX lookups by itself. 
If you want to figure out which server to send an email to for a particular domain, we recommend you figure that out diff --git a/usingcurl/telnet.md b/usingcurl/telnet.md index 7d2c768cf7..dc277ecc27 100644 --- a/usingcurl/telnet.md +++ b/usingcurl/telnet.md @@ -32,7 +32,7 @@ request to it by manually entering `GET /` and press return twice: curl telnet://localhost:80 -Your web server will most probably return something like this back: +Your web server most probably returns something like this back: HTTP/1.1 400 Bad Request Date: Tue, 07 Dec 2021 07:41:16 GMT diff --git a/usingcurl/timeouts.md b/usingcurl/timeouts.md index 6a4e83383c..52a041861e 100644 --- a/usingcurl/timeouts.md +++ b/usingcurl/timeouts.md @@ -16,9 +16,9 @@ time. Further, most operations in curl have no time-out by default! Tell curl with `-m / --max-time` the maximum time, in seconds, that you allow the command line to spend before curl exits with a timeout error code -(28). When the set time has elapsed, curl will exit no matter what is going -on at that moment—including if it is transferring data. It really is the -maximum time allowed. +(28). When the set time has elapsed, curl exits no matter what is going on at +that moment—including if it is transferring data. It really is the maximum +time allowed. The given maximum time can be specified with a decimal precision; `0.5` means 500 milliseconds and `2.37` equals 2370 milliseconds. @@ -32,10 +32,10 @@ fractions.) ## Never spend more than this to connect -`--connect-timeout` limits the time curl will spend trying to connect to the +`--connect-timeout` limits the time curl spends trying to connect to the host. All the necessary steps done before the connection is considered complete have to be completed within the given time frame. Failing to connect -within the given time will cause curl to exit with a timeout exit code (28). +within the given time causes curl to exit with a timeout exit code (28). 
The steps done before a connect is considered successful include DNS lookup and subsequent TCP, TLS or QUIC handshakes. diff --git a/usingcurl/tls/backends.md b/usingcurl/tls/backends.md index 8ea2e1dc01..083c8f2253 100644 --- a/usingcurl/tls/backends.md +++ b/usingcurl/tls/backends.md @@ -10,8 +10,8 @@ Sometimes features and behaviors differ slightly when curl is built with different TLS backends, but the developers work hard on making those differences as small and unnoticeable as possible. -Showing the curl version information with [curl --version](../version.md) will -always include the TLS library and version in the first line of output. +Showing the curl version information with [curl --version](../version.md) +includes the TLS library and version in the first line of output. ## Multiple TLS backends @@ -19,10 +19,9 @@ When curl is built with *multiple* TLS backends, it can be told which one to use each time it is started. It is always built to use a specific one by default unless one is asked for. -If you invoke `curl --version` for a curl with multiple backends it will -mention `MultiSSL` as a feature in the last line. The first line will then -include all the supported TLS backends with the non-default ones within -parentheses. +If you invoke `curl --version` for a curl with multiple backends it mentions +`MultiSSL` as a feature in the last line. The first line includes all the +supported TLS backends with the non-default ones within parentheses. To set a specific one to get used, set the environment variable `CURL_SSL_BACKEND` to its name. diff --git a/usingcurl/tls/enable.md b/usingcurl/tls/enable.md index 81065d21cb..ef53b343ff 100644 --- a/usingcurl/tls/enable.md +++ b/usingcurl/tls/enable.md @@ -13,7 +13,7 @@ speak TLS already from the first connection handshake while the other is to instructions. 
With curl, if you explicitly specify the TLS version of the protocol (the one -that has a name that ends with an 'S' character) in the URL, curl will try to +that has a name that ends with an 'S' character) in the URL, curl tries to connect with TLS from start, while if you specify the non-TLS version in the URL you can _usually_ upgrade the connection to TLS-based with the `--ssl` option. @@ -30,11 +30,11 @@ The support table looks like this: | SMTP | SMTPS | **yes** | The protocols that _can_ do `--ssl` all favor that method. Using `--ssl` means -that curl will *attempt* to upgrade the connection to TLS but if that fails, -it will still continue with the transfer using the plain-text version of the +that curl *attempts* to upgrade the connection to TLS but if that fails, it +still continues with the transfer using the plain-text version of the protocol. To make the `--ssl` option **require** TLS to continue, there is -instead the `--ssl-reqd` option which will make the transfer fail if curl -cannot successfully negotiate TLS. +instead the `--ssl-reqd` option which makes the transfer fail if curl cannot +successfully negotiate TLS. Require TLS security for your FTP transfer: @@ -44,8 +44,8 @@ Suggest TLS to be used for your FTP transfer: curl --ssl ftp://ftp.example.com/file.txt -Connecting directly with TLS (to HTTPS://, LDAPS://, FTPS:// etc) means that -TLS is mandatory and curl will return an error if TLS is not negotiated. +Connecting directly with TLS (to `HTTPS://`, `LDAPS://`, `FTPS://` etc) means that +TLS is mandatory and curl returns an error if TLS is not negotiated. Get a file over HTTPS: diff --git a/usingcurl/tls/pinning.md b/usingcurl/tls/pinning.md index 9b533f3a7f..f32501dc13 100644 --- a/usingcurl/tls/pinning.md +++ b/usingcurl/tls/pinning.md @@ -5,8 +5,8 @@ the servers certificate has not changed. It is "pinned". When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. 
A public key is extracted from this certificate and -if it does not exactly match the public key provided to this option, curl will -abort the connection before sending or receiving any data. +if it does not exactly match the public key provided to this option, curl +aborts the connection before sending or receiving any data. You tell curl a filename to read the sha256 value from, or you specify the base64 encoded hash directly in the command line with a `sha256//` prefix. You diff --git a/usingcurl/tls/stapling.md b/usingcurl/tls/stapling.md index bea98547e8..781cc22708 100644 --- a/usingcurl/tls/stapling.md +++ b/usingcurl/tls/stapling.md @@ -5,8 +5,8 @@ server to provide a fresh "proof" from the CA in the handshake, that the certificate that it returns is still valid. This is a way to make really sure the server's certificate has not been revoked. -If the server does not support this extension, the test will fail and curl -returns an error. It is still common that servers do not support this. +If the server does not support this extension, the test fails and curl returns +an error. It is still common that servers do not support this. Ask for the handshake to use the status request like this: diff --git a/usingcurl/tls/verify.md b/usingcurl/tls/verify.md index e26b73e3c0..8e5e1baa67 100644 --- a/usingcurl/tls/verify.md +++ b/usingcurl/tls/verify.md @@ -27,9 +27,9 @@ curl needs a "CA store", a collection of CA certificates, to verify the TLS server it talks to. If curl is built to use a TLS library that is "native" to your platform, -chances are that library will use the native CA store as well. If not, curl -has to either have been built to know where the local CA store is, or users -need to provide a path to the CA store when curl is invoked. +chances are that library uses the native CA store as well. If not, curl has to +either have been built to know where the local CA store is, or users need to +provide a path to the CA store when curl is invoked. 
You can point out a specific CA bundle to use in the TLS handshake with the `--cacert` command line option. That bundle needs to be in PEM format. You can @@ -40,7 +40,7 @@ also set the environment variable `CURL_CA_BUNDLE` to the full path. curl built on windows that is not using the native TLS library (Schannel), have an extra sequence for how the CA store can be found and used. -curl will search for a CA cert file named "curl-ca-bundle.crt" in these +curl searches for a CA cert file named `curl-ca-bundle.crt` in these directories and in this order: 1. application's directory diff --git a/usingcurl/tls/versions.md b/usingcurl/tls/versions.md index 2f52d21661..719450c07f 100644 --- a/usingcurl/tls/versions.md +++ b/usingcurl/tls/versions.md @@ -15,7 +15,7 @@ however so new that a lot of software, tools and libraries do not yet support it. curl is designed to use a "safe version" of SSL/TLS by default. It means that -it will not negotiate SSLv2 or SSLv3 unless specifically told to, and in fact +it does not negotiate SSLv2 or SSLv3 unless specifically told to, and in fact several TLS libraries no longer provide support for those protocols so in many cases curl is not even able to speak those protocol versions unless you make a serious effort. diff --git a/usingcurl/transfers/rate-limiting.md b/usingcurl/transfers/rate-limiting.md index c99872e9d0..cb63692e55 100644 --- a/usingcurl/transfers/rate-limiting.md +++ b/usingcurl/transfers/rate-limiting.md @@ -1,30 +1,29 @@ # Rate limiting -When curl transfers data, it will attempt to do that as fast as possible. It -goes for both uploads and downloads. Exactly how fast that will be depends on -several factors, including your computer's ability, your own network -connection's bandwidth, the load on the remote server you are transferring -to/from and the latency to that server. 
Your curl transfers are also likely to -compete with other transfers on the networks the data travels over, from other -users or just other apps by the same user. - -In many setups, however, you will find that you can more or less saturate your -own network connection with a single curl command line. If you have a 10 -megabit per second connection to the Internet, chances are curl can use all of -those 10 megabits to transfer data. +When curl transfers data, it attempts to do that as fast as possible. It goes +for both uploads and downloads. Exactly how fast that goes depends on several +factors, including your computer's ability, your own network connection's +bandwidth, the load on the remote server you are transferring to/from and the +latency to that server. Your curl transfers are also likely to compete with +other transfers on the networks the data travels over, from other users or +just other apps by the same user. + +In many setups, however, you can more or less saturate your own network +connection with a single curl command line. If you have a 10 megabit per +second connection to the Internet, chances are curl can use all of those 10 +megabits to transfer data. For most use cases, using as much bandwidth as possible is a good thing. It makes the transfer faster, it makes the curl command complete sooner and it -will make the transfer use resources from the server for a shorter period -of time. +makes the transfer use resources from the server for a shorter period of time. -Sometimes you will, however, find that having curl starve out other -network functions on your local network connection is inconvenient. In these -situations you may want to tell curl to slow down so that other network users -get a better chance to get their data through as well. With `--limit-rate -[speed]` you can tell curl to not go faster than the given number of bytes per -second. 
The rate limit value can be given with a letter suffix using one of K, -M and G for kilobytes, megabytes and gigabytes. +Sometimes, having curl starve out other network functions on your local +network connection is inconvenient. In these situations you may want to tell +curl to slow down so that other network users get a better chance to get their +data through as well. With `--limit-rate [speed]` you can tell curl to not go +faster than the given number of bytes per second. The rate limit value can be +given with a letter suffix using one of K, M and G for kilobytes, megabytes +and gigabytes. To make curl not download data any faster than 200 kilobytes per second: @@ -34,6 +33,6 @@ The given limit is the maximum *average speed* allowed during a period of several seconds. It means that curl might use higher transfer speeds in short bursts, but over time it averages to no more than the given rate. -curl does not know what the maximum possible speed is — it will simply go as -fast as it can and is allowed. You might know your connection's maximum speed, -curl does not. +curl does not know what the maximum possible speed is — it simply goes as fast +as it can and is allowed. You might know your connection's maximum speed, curl +does not. diff --git a/usingcurl/transfers/request-rate.md b/usingcurl/transfers/request-rate.md index 4625261bcf..0d686210a5 100644 --- a/usingcurl/transfers/request-rate.md +++ b/usingcurl/transfers/request-rate.md @@ -6,12 +6,12 @@ slower than as fast as possible. We call that *request rate limiting*. With the `--rate` option, you specify the maximum transfer frequency you allow curl to use - in number of transfer starts per time unit (sometimes called -request rate). Without this option, curl will start the next transfer as fast -as possible. +request rate). Without this option, curl starts the next transfer as fast as +possible. 
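As a quick sanity check of the `--limit-rate` suffix arithmetic mentioned above, the K, M and G suffixes are 1024-based multipliers. A small shell sketch; the `limit_to_bytes` helper is invented for illustration:

```shell
# Hypothetical helper: convert a --limit-rate style value to bytes/second,
# using 1024-based K/M/G multipliers
limit_to_bytes() {
  value=${1%[KkMmGg]}
  case $1 in
    *[Kk]) echo $((value * 1024)) ;;
    *[Mm]) echo $((value * 1024 * 1024)) ;;
    *[Gg]) echo $((value * 1024 * 1024 * 1024)) ;;
    *)     echo "$value" ;;
  esac
}

limit_to_bytes 200K   # 204800 bytes/second
limit_to_bytes 1M     # 1048576 bytes/second
```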
If given several URLs and a transfer completes faster than the allowed rate, -curl will delay starting the next transfer to maintain the requested -rate. This option is for serial transfers and has no effect when +curl delays starting the next transfer to maintain the requested rate. This +option is for serial transfers and has no effect when [--parallel](../../cmdline/urls/parallel.md) is used. The request rate is provided as **N/U** where N is an integer number and U is @@ -19,16 +19,16 @@ a time unit. Supported units are `s` (second), `m` (minute), `h` (hour) and `d` (day, as in a 24 hour unit). The default time unit, if no **/U** is provided, is number of transfers per hour. -If curl is told to allow 10 requests per minute, it will not start the next +If curl is told to allow 10 requests per minute, it does not start the next request until 6 seconds have elapsed since the previous transfer was started. This function uses millisecond resolution. If the allowed frequency is set -more than 1000 per second, it will instead run unrestricted. +to more than 1000 per second, it instead runs unrestricted. When retrying transfers, enabled with [--retry](../downloads/retry.md), the separate retry delay logic is used and not this setting. -If this option is used several times, the last one will be used. +If this option is used several times, the last one is used. ## Examples diff --git a/usingcurl/uploads.md b/usingcurl/uploads.md index a56e9d4c45..df75be3019 100644 --- a/usingcurl/uploads.md +++ b/usingcurl/uploads.md @@ -59,11 +59,11 @@ You send off an HTTP upload using the -T option with the file to upload: ## FTP uploads -Working with FTP, you get to see the remote file system you will be accessing. +Working with FTP, you get to see the remote file system you are accessing. You tell the server exactly in which directory you want the upload to be placed and which filename to use.
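The 10-per-minute `--rate` example above works out to one transfer start every 6 seconds. A shell sketch of that spacing arithmetic; the `rate_delay` helper is invented for illustration and only handles the explicit N/U form:

```shell
# Hypothetical helper: minimum seconds between transfer starts for a
# --rate value given as N/U (s=second, m=minute, h=hour, d=day)
rate_delay() {
  n=${1%/*}
  case ${1#*/} in
    s) unit=1 ;;
    m) unit=60 ;;
    h) unit=3600 ;;
    d) unit=86400 ;;
  esac
  echo $((unit / n))
}

rate_delay 10/m   # 10 per minute: one start every 6 seconds
rate_delay 2/h    # 2 per hour: one start every 1800 seconds
```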
If you specify the upload URL with a -trailing slash, curl will append the locally used filename to the URL and -then that will be the filename used when stored remotely: +trailing slash, curl appends the locally used filename to the URL and then +that becomes the filename used when stored remotely: curl -T uploadthis ftp://example.com/this/directory/ @@ -78,8 +78,8 @@ Learn much more about FTPing in the [FTP with curl](../ftp.md) section. You may not consider sending an email to be "uploading", but to curl it is. You upload the mail body to the SMTP server. With SMTP, you also need to -include all the mail headers you need (To:, From:, Date:, etc.) in the mail -body as curl will not add any at all. +include all the mail headers you need (`To:`, `From:`, `Date:`, etc.) in the mail +body as curl does not add any at all. curl -T mail smtp://mail.example.com/ --mail-from user@example.com diff --git a/usingcurl/verbose.md b/usingcurl/verbose.md index 22edd2bee6..aeba14a43e 100644 --- a/usingcurl/verbose.md +++ b/usingcurl/verbose.md @@ -4,9 +4,9 @@ If your curl command does not execute or return what you expected it to, your first gut reaction should always be to run the command with the `-v / --verbose` option to get more information. -When verbose mode is enabled, curl gets more talkative and will explain and -show a lot more of its doings. It will add informational tests and prefix them -with '\*'. For example, let's see what curl might say when trying a simple HTTP +When verbose mode is enabled, curl gets more talkative and explains and shows +a lot more of its doings. It adds informational text and prefixes each line +with '\*'. For example, let's see what curl might say when trying a simple HTTP example (saving the downloaded data in the file called 'saved'): $ curl -v http://example.com -o saved @@ -19,7 +19,7 @@ and it adds a trailing slash before it moves on. This tells us curl now tries to connect to this IP address.
It means the name 'example.com' has been resolved to one or more addresses and this is the first -(and possibly only) address curl will try to connect to. +(and possibly only) address curl tries to connect to. * Connected to example.com (93.184.216.34) port 80 (#0) @@ -30,18 +30,18 @@ in the same command line you can see it use more connections or reuse connections, so the connection counter may increase or not increase depending on what curl decides it needs to do. -If we use an HTTPS:// URL instead of an HTTP one, there will also be a whole +If we use an `HTTPS://` URL instead of an HTTP one, there are also a whole bunch of lines explaining how curl uses CA certs to verify the server's certificate and some details from the server's certificate, etc. Including which ciphers were selected and more TLS details. -In addition to the added information given from curl internals, the -v verbose -mode will also make curl show all headers it sends and receives. For protocols -without headers (like FTP, SMTP, POP3 and so on), we can consider commands and -responses as headers and they will thus also be shown with -v. +In addition to the added information given from curl internals, the `-v` +verbose mode also makes curl show all headers it sends and receives. For +protocols without headers (like FTP, SMTP, POP3 and so on), we can consider +commands and responses as headers and they are thus also shown with `-v`. If we then continue the output seen from the command above (but ignore the -actual HTML response), curl will show: +actual HTML response), curl shows: > GET / HTTP/1.1 > Host: example.com @@ -49,9 +49,9 @@ actual HTML response), curl will show: > Accept: */* > -This is the full HTTP request to the site. This request is how it looks -in a default curl 7.45.0 installation and it may, of course, differ slightly -between different releases and in particular it will change if you add command +This is the full HTTP request to the site.
This request is how it looks in a +default curl 7.45.0 installation and it may, of course, differ slightly +between different releases and in particular it changes if you add command line options. The last line of the HTTP request headers looks empty, and it is. It signals the separation between the headers and the body, and in this request there is no "body" to send. Moving on and assuming everything goes according to plan, the sent request -will get a corresponding response from the server and that HTTP response will -start with a set of headers before the response body: +gets a corresponding response from the server and that HTTP response starts +with a set of headers before the response body: < HTTP/1.1 200 OK < Accept-Ranges: bytes @@ -91,22 +91,22 @@ regular -v verbose mode does not show that data but only displays That 1270 bytes should then be in the 'saved' file. You can also see that there was a header named Content-Length: in the response that contained the -exact file length (it will not always be present in responses). +exact file length (it is not always present in responses). ## HTTP/2 and HTTP/3 When doing file transfers using version two or three of the HTTP protocol, curl sends and receives **compressed** headers. To display outgoing and incoming HTTP/2 and HTTP/3 headers in a readable and understandable way, curl -will actually show the uncompressed versions in a style similar to how they -appear with HTTP/1.1. +shows the uncompressed versions in a style similar to how they appear with +HTTP/1.1. ## Silence The opposite of verbose is, of course, to make curl more silent. With the `-s` (or `--silent`) option you make curl switch off the progress meter and not -output any error messages for when errors occur. It gets mute. It will still -output the downloaded data you ask it to. +output any error messages for when errors occur. It gets mute. It still +outputs the downloaded data you ask it to.
With silence activated, you can ask for it to still output the error message on failures by adding `-S` or `--show-error`. diff --git a/usingcurl/verbose/trace.md b/usingcurl/verbose/trace.md index d5f59dbff5..c73b3315d2 100644 --- a/usingcurl/verbose/trace.md +++ b/usingcurl/verbose/trace.md @@ -5,13 +5,13 @@ the complete stream including the actual transferred data. For situations when curl does encrypted file transfers with protocols such as HTTPS, FTPS or SFTP, other network monitoring tools (like Wireshark or -tcpdump) will not be able to do this job as easily for you. +tcpdump) are not able to do this job as easily for you. For this, curl offers two other options that you use instead of `-v`. -`--trace [filename]` will save a full trace in the given filename. You can -also use '-' (a single minus) instead of a filename to get it passed to -stdout. You would use it like this: +`--trace [filename]` saves a full trace in the given filename. You can also +use '-' (a single minus) instead of a filename to get it passed to stdout. You +would use it like this: $ curl --trace dump http://example.com @@ -35,7 +35,7 @@ this case, the 15 first lines of the dump file looks like: 0010: 79 74 65 73 0d 0a ytes.. Every single sent and received byte get displayed individually in hexadecimal -numbers. Received headers will be output line by line. +numbers. Received headers are output line by line. If you think the hexadecimals are not helping, you can try `--trace-ascii [filename]` instead, also this accepting '-' for stdout and that makes the 15 @@ -91,7 +91,7 @@ use a large number of separate connections and different transfers, there are times when you want to see to which specific transfers or connections the various information below to. To better understand the trace output. 
-You can then add `--trace-ids` to the line and you will see how curl adds two +You can then add `--trace-ids` to the line and you see how curl adds two numbers to all tracing: the connection number and the transfer number. They are two separate identifiers because connections can be reused and multiple transfers can use the same connection. @@ -113,7 +113,7 @@ identifiers and show me HTTP/2 details: curl --trace-config ids,http/2 https://example.com -The exact set of ares will vary, but here are some ones to try: +The exact set of options varies, but here are some to try: | area | description | |----------|-------------------------------------------------| diff --git a/usingcurl/verbose/writeout.md b/usingcurl/verbose/writeout.md index 450413288f..fe2850da50 100644 --- a/usingcurl/verbose/writeout.md +++ b/usingcurl/verbose/writeout.md @@ -22,7 +22,10 @@ curl -w @- http://example.com/ ## Variables -The variables that are available are accessed by writing `%{variable_name}` in the string and that variable will then be substituted by the correct value. To output a plain `%` you write it as `%%`. You can also output a newline by using `\n`, a carriage return with `\r` and a tab space with `\t`. +The variables that are available are accessed by writing `%{variable_name}` in +the string and that variable is substituted by the correct value. To output a +plain `%` you write it as `%%`. You can also output a newline by using `\n`, a +carriage return with `\r` and a tab space with `\t`. As an example, we can output the Content-Type and the response code from an HTTP transfer, separated with newlines and some extra text like this: @@ -46,7 +49,10 @@ curl -w "Server: %header{server}\n" http://example.com By default, this option makes the selected data get output on stdout. If that is not good enough, the pseudo-variable `%{stderr}` can be used to direct (the following) part to stderr and `%{stdout}` brings it back to stdout.
-From curl 8.3.0, there is a feature that lets users send the write-out output to a file: `%output{filename}`. The data following will then be written to that file. If you rather have curl append to that file instead of creating it from scratch, prefix the file name with `>>`. Like this: `%output{>>filename}`. +From curl 8.3.0, there is a feature that lets users send the write-out output +to a file: `%output{filename}`. The data following is then written to that +file. If you would rather have curl append to that file instead of creating it +from scratch, prefix the filename with `>>`. Like this: `%output{>>filename}`. A write-out argument can include output to stderr, stdout and files as the user sees fit. @@ -99,7 +105,7 @@ Some of these variables are not available in really old curl versions. | `time_pretransfer` | The time in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved. | | `time_redirect` | The time in seconds, it took for all redirection steps including name lookup, connect, pre-transfer and transfer before the final transaction was started. time\_redirect the complete execution time for multiple redirections. | | `time_starttransfer` | The time in seconds, it took from the start until the first byte was just about to be transferred. This includes time\_pretransfer and also the time the server needed to calculate the result. | -| `time_total` | The total time in seconds, that the full operation lasted. The time will be displayed with millisecond resolution. | +| `time_total` | The total time in seconds, that the full operation lasted. The time is displayed with millisecond resolution. | | `url` | The URL used in the transfer. (Introduced in 7.75.0) | | `url_effective` | The URL that was fetched last. This is particularly meaningful if you have told curl to follow Location: headers (with `-L`).
| `urlnum` | 0-based numerical index of the URL used in the transfer. (Introduced in 7.75.0) | diff --git a/usingcurl/version.md b/usingcurl/version.md index 1637157ad8..7b45840a6c 100644 --- a/usingcurl/version.md +++ b/usingcurl/version.md @@ -9,7 +9,7 @@ or use the shorthand version: curl -V The output from that command line is typically four lines, out of which some -will be rather long and might wrap in your terminal window. +are rather long and might wrap in your terminal window. An example output from a Debian Linux in June 2020: @@ -50,9 +50,9 @@ more than one TLS library which then makes curl - at start-up - select which particular backend to use for this invoke. If curl supports more than one TLS library like this, the ones that are *not* -selected by default will be listed within parentheses. Thus, if you do not -specify which backend to use use (with the `CURL_SSL_BACKEND` environment -variable) the one listed without parentheses will be used. +selected by default are listed within parentheses. Thus, if you do not specify +which backend to use (with the `CURL_SSL_BACKEND` environment variable) +the one listed without parentheses is used. ## Line 2: Release-Date @@ -105,7 +105,7 @@ Features that can be present there: - **Metalink** - This curl supports Metalink. In modern curl versions this option is never available. - **MultiSSL** - This curl supports multiple TLS backends. The first line - will detail exactly which TLS libraries. + details exactly which TLS libraries. - **NTLM** - NTLM authentication is supported. - **NTLM_WB** - NTLM authentication is supported. - **PSL** - Public Suffix List (PSL) is available and means that this curl has