From e6c6548a66f44f184144d3433a5cb43d739c5121 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg
Date: Sat, 13 Jan 2024 18:30:35 +0100
Subject: [PATCH] reduce use of double quotes

- mostly by just removing them
- in some instances use italics instead
- in some cases rephrase

---
 build/autotools.md            |  6 +++---
 build/tls.md                  |  2 +-
 cmdline/urls/auth.md          |  2 +-
 cmdline/urls/fragment.md      |  4 ++--
 ftp.md                        | 10 +++++-----
 ftp/cmds.md                   |  9 ++++-----
 ftp/twoconnections.md         |  7 ++++---
 get/linux.md                  |  6 +++---
 get/win-vcpkg.md              |  2 +-
 http/altsvc.md                |  2 +-
 http/auth.md                  |  2 +-
 http/conditionals.md          |  8 ++++----
 http/cookies.md               | 13 +++++++------
 http/https.md                 |  2 +-
 http/modify/fragment.md       |  4 ++--
 http/multipart.md             | 22 +++++++++++-----------
 http/postvspost.md            |  8 ++++----
 http/response.md              |  6 +++---
 http/versions/http2.md        |  6 +++---
 internals/backends.md         |  4 ++--
 internals/caches.md           |  2 +-
 internals/handler.md          | 12 ++++++------
 internals/memory-debugging.md |  4 ++--
 internals/multi.md            |  5 ++---
 internals/structs.md          | 16 ++++++++--------
 internals/tests/ci.md         |  2 +-
 libcurl-http/alt-svc.md       |  4 ++--
 libcurl-http/cookies.md       | 17 +++++++++--------
 libcurl-http/download.md      |  2 +-
 libcurl-http/headerapi.md     |  6 +++---
 libcurl.md                    |  2 +-
 libcurl/api.md                |  6 +++---
 libcurl/callbacks.md          |  8 ++++----
 libcurl/conn/how.md           |  2 +-
 libcurl/conn/names.md         | 14 +++++++-------
 libcurl/conn/reuse.md         |  4 ++--
 libcurl/cplusplus.md          |  2 +-
 libcurl/drive/multi-socket.md |  4 ++--
 libcurl/verbose.md            |  6 +++---
 libcurl/ws.md                 |  4 ++--
 libcurl/ws/options.md         |  4 ++--
 libcurl/ws/read.md            |  2 +-
 project.md                    | 10 +++++-----
 project/bugs.md               |  6 +++---
 project/etiquette.md          |  8 ++++----
 project/name.md               | 19 +++++++++----------
 project/releases.md           |  8 ++++----
 protocols/network.md          | 26 +++++++++++++-------------
 protocols/protocols.md        |  8 ++++----
 source/reportvuln.md          |  4 ++--
 usingcurl/mqtt.md             |  8 ++++----
 usingcurl/persist.md          |  2 +-
 usingcurl/proxies/captive.md  |  6 +++---
 usingcurl/proxies/http.md     | 20 ++++++++++----------
 usingcurl/tls/enable.md       |  7 +++----
 usingcurl/tls/pinning.md      |  2 +-
 usingcurl/tls/verify.md       |  4 ++--
 usingcurl/tls/versions.md     |  4 ++--
 usingcurl/uploads.md          | 12 ++++++------
 usingcurl/version.md          |  9 ++++-----
 60 files changed, 207 insertions(+), 209 deletions(-)

diff --git a/build/autotools.md b/build/autotools.md
index 5b398ac002..2d28fdfc54 100644
--- a/build/autotools.md
+++ b/build/autotools.md
@@ -1,6 +1,6 @@
 # Autotools
 
-The "Autotools" are a collection of different tools that used together generate
+The Autotools are a collection of different tools that used together generate
 the `configure` script. The configure script is run by the user who wants to
 build curl and it does a whole bunch of things:
 
@@ -15,7 +15,7 @@ build curl and it does a whole bunch of things:
   built to use.
 
 - It specifies on which file path the generated installation should be placed
-  when ultimately the build is made and "make install" is invoked.
+  when ultimately the build is made and `make install` is invoked.
 
 In the most basic usage, just running `./configure` in the source directory is
 enough. When the script completes, it outputs a summary of what options it has
@@ -42,7 +42,7 @@ setup for the particular target system for which you want to build. How to get
 and install that system is not covered in this book.
 
 Once you have a cross compiler, you can instruct configure to use that
-compiler instead of the "native" compiler when it builds curl so that the end
+compiler instead of the native compiler when it builds curl so that the end
 result then can be moved over and used on the other machine.
 
 ## Static linking
diff --git a/build/tls.md b/build/tls.md
index 3b5f078c6e..146fdbb936 100644
--- a/build/tls.md
+++ b/build/tls.md
@@ -102,7 +102,7 @@ point configure to a custom install path prefix where it can find BearSSL:
 
     ./configure --with-rustls
 
-When told to use "rustls", curl is actually trying to find and use the
+When told to use rustls, curl is actually trying to find and use the
 rustls-ffi library - the C API for the rustls library. configure detects
 rustls-ffi in its default path by default. You can optionally point configure
 to a custom install path prefix where it can find rustls-ffi:
diff --git a/cmdline/urls/auth.md b/cmdline/urls/auth.md
index c8a63835f8..0a61d68e20 100644
--- a/cmdline/urls/auth.md
+++ b/cmdline/urls/auth.md
@@ -12,5 +12,5 @@ also allows that information to be provided with normal command-line options,
 outside of the URL.
 
 If you want a non-ASCII letter or maybe a `:` or `@` as part of the user name
-and/or password, remember to "URL-encode" that letter: write it as `%HH` where
+and/or password, remember to URL encode that letter: write it as `%HH` where
 `HH` is the hexadecimal byte value. `:` is `%3a` and `@` is `%40`.
diff --git a/cmdline/urls/fragment.md b/cmdline/urls/fragment.md
index c883011e4d..fce0f506e1 100644
--- a/cmdline/urls/fragment.md
+++ b/cmdline/urls/fragment.md
@@ -1,6 +1,6 @@
 # Fragment
 
-URLs offer a fragment part. That is usually seen as a hash symbol (#) and a
+URLs offer a fragment part. That is usually seen as a hash symbol (`#`) and a
 name for a specific name within a webpage in browsers. An example of such a
 URL might look like:
 
@@ -15,7 +15,7 @@ the fragment, make sure to pass it URL-encoded, as `%23`:
 
     curl https://www.example.com/info.html%23the-plot
 
-## The "fragment trick"
+## A fragment trick
 
 The fact that the fragment part is not actually used over the network can be
 taken advantage of when you craft command lines.
diff --git a/ftp.md b/ftp.md
index ae123ddc11..5cbfc2b058 100644
--- a/ftp.md
+++ b/ftp.md
@@ -26,7 +26,7 @@ actual data transfer.
 ## Transfer mode
 
 When an FTP client is about to transfer data, it specifies to the server which
-"transfer mode" it would like the upcoming transfer to use. The two transfer
+transfer mode it would like the upcoming transfer to use. The two transfer
 modes curl supports are 'ASCII' and 'BINARY'. Ascii is for text and usually
 means that the server sends the files with converted newlines while binary
 means sending the data unaltered and assuming the file is not text.
@@ -37,10 +37,10 @@ instead with `-B, --use-ascii` or by making sure the URL ends with `;type=A`.
 ## Authentication
 
 FTP is one of the protocols you normally do not access without a user name and
-password. It just happens that for systems that allow "anonymous" FTP access
-you can login with pretty much any name and password you like. When curl is
-used on an FTP URL to do transfer without any given user name or password, it
-uses the name `anonymous` with the password `ftp@example.com`.
+password. It just happens that for systems that allow anonymous FTP access you
+can login with pretty much any name and password you like. When curl is used
+on an FTP URL to do transfer without any given user name or password, it uses
+the name `anonymous` with the password `ftp@example.com`.
 
 If you want to provide another user name and password, you can pass them on to
 curl either with the `-u, --user` option or embed the info in the URL:
diff --git a/ftp/cmds.md b/ftp/cmds.md
index ad369119c9..948c77e641 100644
--- a/ftp/cmds.md
+++ b/ftp/cmds.md
@@ -29,10 +29,9 @@ the FTP command with a dash:
 
     curl -Q -NOOP ftp://example.com/file
 
-The third "position in time" that curl offers to send the commands, is after
-curl has changed the working directory, just **before the commands** that kick
-off the transfer are sent. To send command then, prefix the command with a '+'
-(plus).
+curl also offers to send commands after it changes the working directory, just
+**before the commands** that kick off the transfer are sent. To send a command
+then, prefix the command with a '+' (plus).
 
 ## A series of commands
 
@@ -53,7 +52,7 @@ Example, rename a file then do a transfer:
 
 You can opt to send individual quote commands that are allowed to fail, to get
 an error returned from the server without causing everything to stop.
 
-You make the command "fallible" by prefixing it with an asterisk (`*`). For
+You make the command fallible by prefixing it with an asterisk (`*`). For
 example, send a delete (`DELE`) after a transfer and allow it to fail:
 
     curl -Q "-*DELE file" ftp://example.com/moo
diff --git a/ftp/twoconnections.md b/ftp/twoconnections.md
index bcd296c7bf..e8128c827c 100644
--- a/ftp/twoconnections.md
+++ b/ftp/twoconnections.md
@@ -13,7 +13,7 @@ several reasons.
 ## Active connections
 
 The client can opt to ask the server to connect to the client to set it up, a
-so-called "active" connection. This is done with the PORT or EPRT
+so-called *active* connection. This is done with the PORT or EPRT
 commands. Allowing a remote host to connect back to a client on a port that
 the client opens up requires that there is no firewall or other network
 appliance in between that refuses that to go through and that is far from
@@ -30,10 +30,11 @@ command than PORT) with the `--no-eprt` command-line option.
 
 ## Passive connections
 
-Curl defaults to asking for a "passive" connection, which means it sends a
+Curl defaults to asking for a *passive* connection, which means it sends a
 PASV or EPSV command to the server and then the server opens up a new port for
 the second connection that then curl connects to. Outgoing connections to a
-new port are generally easier and less restricted for end users and clients, but it then requires that the network in the server's end allows it.
+new port are generally easier and less restricted for end users and clients
+but requires that the network in the server's end allows it.
 
 Passive connections are enabled by default, but if you have switched on active
 before, you can switch back to passive with `--ftp-pasv`.
diff --git a/get/linux.md b/get/linux.md
index 3ae5c283dc..eced1e6a43 100644
--- a/get/linux.md
+++ b/get/linux.md
@@ -1,8 +1,8 @@
 # Linux
 
-Linux distributions come with "packager managers" that let you install
-software that they offer. Most Linux distributions offer curl and libcurl to
-be installed if they are not installed by default.
+Linux distributions come with package managers that let you install software
+that they offer. Most Linux distributions offer curl and libcurl to be
+installed if they are not installed by default.
 
 ## Ubuntu and Debian
diff --git a/get/win-vcpkg.md b/get/win-vcpkg.md
index f807cf32b3..32e38d1b20 100644
--- a/get/win-vcpkg.md
+++ b/get/win-vcpkg.md
@@ -3,7 +3,7 @@
 [Vcpkg](https://github.com/microsoft/vcpkg/) helps you manage C and C++
 libraries on Windows, Linux and MacOS.
 
-There is no "curl" package on vcpkg, only libcurl.
+There is no curl package on vcpkg, only libcurl.
 
 ## Install libcurl
 
diff --git a/http/altsvc.md b/http/altsvc.md
index 0909fe8813..9a9afde583 100644
--- a/http/altsvc.md
+++ b/http/altsvc.md
@@ -2,7 +2,7 @@
 [RFC 7838](https://www.rfc-editor.org/rfc/rfc7838.txt) defines an HTTP header
 which lets a server tell a client that there is one or more *alternatives* for
-that server at "another place" with the use of the `Alt-Svc:` response header.
+that server at another place with the use of the `Alt-Svc:` response header.
 
 The *alternatives* the server suggests can include a server running on another
 port on the same host, on another completely different hostname and it can
diff --git a/http/auth.md b/http/auth.md
index 4d155871f9..197be12b67 100644
--- a/http/auth.md
+++ b/http/auth.md
@@ -24,7 +24,7 @@ option to provide user name and password (separated with a colon). Like this:
 
     curl --user daniel:secret http://example.com/
 
-This makes curl use the default "Basic" HTTP authentication method. Yes, it is
+This makes curl use the default *Basic* HTTP authentication method. Yes, it is
 actually called Basic and it is truly basic. To explicitly ask for the basic
 method, use `--basic`.
diff --git a/http/conditionals.md b/http/conditionals.md
index 37ae1595d6..90f3ccaaf4 100644
--- a/http/conditionals.md
+++ b/http/conditionals.md
@@ -2,7 +2,7 @@
 
 Sometimes users want to avoid downloading a file again if the same file maybe
 already has been downloaded the day before. This can be done by making the
-HTTP transfer "conditioned" on something. curl supports two different
+HTTP transfer conditioned on something. curl supports two different
 conditions: the file timestamp and etag.
 
 ## Check by modification date
@@ -36,7 +36,7 @@ remote file had:
 
 ## Check by modification of content
 
-HTTP servers can return a specific "ETag" for a given resource version. If the
+HTTP servers can return a specific *ETag* for a given resource version. If the
 resource at a given URL changes, a new Etag value must be generated, so a
 client knows that as long as the ETag remains the same, the content has not
 changed.
@@ -46,8 +46,8 @@ or file dates. It also then makes the check able to detect sub-second changes,
 which the timestamp based checks cannot.
 
 Using curl you can download a remote file and save its ETag (if it provides
-any) in a separate "cache" by using the `--etag-save` command line
-option. Like this:
+any) in a separate cache by using the `--etag-save` command line option. Like
+this:
 
     curl --etag-save etags.txt https://example.com/file -o output
diff --git a/http/cookies.md b/http/cookies.md
index 16313c686c..b11361b1b5 100644
--- a/http/cookies.md
+++ b/http/cookies.md
@@ -13,16 +13,17 @@ important for how long the cookie should live on.
 
 The expiry of a cookie is either set to a fixed time in the future (or to live
 a number of seconds) or it gets no expiry at all. A cookie without an expire
-time is called a "session cookie" and is meant to live during the "session" but not longer. A session in this aspect is typically thought to be the life
-time of the browser used to view a site. When you close the browser, you end
-your session. Doing HTTP operations with a command-line client that supports
+time is called a session cookie and is meant to live during the *session* but
+not longer. A session in this aspect is typically thought to be the life time
+of the browser used to view a site. When you close the browser, you end your
+session. Doing HTTP operations with a command-line client that supports
 cookies begs the question of when a session really ends…
 
 ## Cookie engine
 
 The general concept of curl only doing the bare minimum unless you tell it
 differently makes it not acknowledge cookies by default. You need to switch on
-"the cookie engine" to make curl keep track of cookies it receives and then
+the cookie engine to make curl keep track of cookies it receives and then
 subsequently send them out on requests that have matching cookies.
 
 You enable the cookie engine by asking curl to read or write cookies. If you
@@ -60,8 +61,8 @@ of the same input file would use the original cookie contents again.
 
 ## Writing cookies to file
 
-The place where cookies are stored is sometimes referred to as the "cookie
-jar". When you enable the cookie engine in curl and it has received cookies,
+The place where cookies are stored is sometimes referred to as the cookie
+jar. When you enable the cookie engine in curl and it has received cookies,
 you can instruct curl to write down all its known cookies to a file, the
 cookie jar, before it exits. It is important to remember that curl only
 updates the output cookie jar on exit and not during its lifetime, no matter
diff --git a/http/https.md b/http/https.md
index 7f3cfd580a..06d18641af 100644
--- a/http/https.md
+++ b/http/https.md
@@ -1,6 +1,6 @@
 # HTTPS
 
-HTTPS is in effect Secure HTTP. The "secure" part means that the TCP transport
+HTTPS is in effect Secure HTTP. The secure part means that the TCP transport
 layer is enhanced to provide authentication, privacy (encryption) and data
 integrity by the use of TLS.
 
diff --git a/http/modify/fragment.md b/http/modify/fragment.md
index 2592a542db..7c1cd64eea 100644
--- a/http/modify/fragment.md
+++ b/http/modify/fragment.md
@@ -1,7 +1,7 @@
 # Fragment
 
-A URL may contain an "anchor", also known as a fragment, which is written with
-a pound sign and string at the end of the URL. Like for example
+A URL may contain an anchor, also known as a fragment, which is written with a
+pound sign and string at the end of the URL. Like for example
 `http://example.com/foo.html#here-it-is`. That fragment part, everything from
 the pound/hash sign to the end of the URL, is only intended for local use and
 is not sent over the network. curl simply strips that data off and discards
diff --git a/http/multipart.md b/http/multipart.md
index 0d7953d78b..f57a3b303b 100644
--- a/http/multipart.md
+++ b/http/multipart.md
@@ -1,8 +1,8 @@
 # Multipart formposts
 
 A multipart formpost is what an HTTP client sends when an HTML form is
-submitted with *enctype* set to "multipart/form-data". It is an HTTP POST
-request sent with the request body specially formatted as a series of "parts",
+submitted with *enctype* set to `multipart/form-data`. It is an HTTP POST
+request sent with the request body specially formatted as a series of parts,
 separated with MIME boundaries.
 
 An example piece of HTML would look like this:
 
@@ -56,7 +56,7 @@ The **Expect** header is explained in the [Expect 100 continue](post/expect100.md)
 chapter.
 
 The **Content-Type** header is a bit special. It tells that this is a
-multipart formpost and then it sets the "boundary" string. The boundary string
+multipart formpost and then it sets the boundary string. The boundary string
 is a line of characters with a bunch of random digits somewhere in it, that
 serves as a separator between the different parts of the form that is
 submitted. The particular boundary you see in this example has the random part
@@ -108,7 +108,7 @@ to submit a multipart form as seen in HTML.
    instance. Submit the form and watch how `nc` shows it. Then translate into
    a curl command line.
 
-2. Use the "development tools" in your favorite browser and inspect the POST
+2. Use the development tools in your favorite browser and inspect the POST
    request in the network tab after you have submitted it. Then convert that
   HTTP data to a curl command line. Unfortunately, the
   [copy as curl](../usingcurl/copyas.md) feature in the browsers usually do
@@ -129,20 +129,20 @@ An example action looks like this:
 
 If the form is found in a webpage hosted on a URL like for example
 `https://example.com/user/login` the `action=submit.cgi` is a relative path
-within the same "directory" as the form itself. The full URL to submit this
-form thus becomes `https://example.com/user/submit.cgi`. That is the URL to
-use in the curl command line.
+within the same directory as the form itself. The full URL to submit this form
+thus becomes `https://example.com/user/submit.cgi`. That is the URL to use in
+the curl command line.
 
 Next, you must identify every `<input>` tag used within the form, including
-the ones that are marked as "hidden". Hidden just means that they are not
-shown in the webpage, but they should still be sent in the POST.
+the ones that are marked as hidden. Hidden just means that they are not shown
+in the webpage, but they should still be sent in the POST.
 
 For every `<input>` in the form there should be a corresponding `-F` in the
 command line.
 
 ### text input
 
-A regular tag using type "text" in the style like
+A regular tag using type `text` in the style like
 
 should then set the field name with content like this:
 
@@ -152,7 +152,7 @@ should then set the field name with content like this:
 
 ### file input
 
-When the input type is set to a "file", like in:
+When the input type is set to `file`, like in:
 
diff --git a/http/postvspost.md b/http/postvspost.md
index 237c34bee8..9f5b4f6ac1 100644
--- a/http/postvspost.md
+++ b/http/postvspost.md
@@ -25,9 +25,9 @@ type=file>` tag, for file uploads.
 
 The default `enctype` used by forms, which is rarely spelled out in HTML since
 it is default, is `application/x-www-form-urlencoded`. It makes the browser
-"URL encode" the input as name=value pairs with the data encoded to avoid
-unsafe characters. We often refer to that as a [regular POST](post.md),
-and you perform one with curl's `-d` and friends.
+URL encode the input as name=value pairs with the data encoded to avoid unsafe
+characters. We often refer to that as a [regular POST](post.md), and you
+perform one with curl's `-d` and friends.
 
 ## POST outside of HTML
 
@@ -35,7 +35,7 @@ POST is a regular HTTP method and there is no requirement that it be triggered
 by HTML or involve a browser. Lots of services, APIs and other systems allow
 you to pass in data these days in order to get things done.
 
-If these services expect plain "raw" data or perhaps data formatted as JSON or
+If these services expect plain raw data or perhaps data formatted as JSON or
 similar, you want the [regular POST](post.md) approach. curl's `-d` option
 does not alter or encode the data at all but just sends exactly what you tell
 it to.
 Just pay attention that `-d` sets a default `Content-Type:` that might
diff --git a/http/response.md b/http/response.md
index 2f88637ecd..63053de151 100644
--- a/http/response.md
+++ b/http/response.md
@@ -29,7 +29,7 @@ response code would indicate that the requested document could not be
 delivered (or similar). curl considers a successful sending and receiving of
 HTTP to be good.
 
-The first digit of the HTTP response code is a kind of "error class":
+The first digit of the HTTP response code is a kind of error class:
 
 - 1xx: transient response, more is coming
 - 2xx: success
@@ -55,13 +55,13 @@ numeric range and you can use `--write-out` to extract that code as well.
 
 ## Chunked transfer encoding
 
-An HTTP 1.1 server can decide to respond with a "chunked" encoded response, a
+An HTTP 1.1 server can decide to respond with a chunked encoded response, a
 feature that was not present in HTTP 1.0.
 
 When receiving a chunked response, there is no Content-Length: for the
 response to indicate its size. Instead, there is a `Transfer-Encoding:
 chunked` header that tells curl there is chunked data coming and then in the
-response body, the data comes in a series of "chunks". Every individual chunk
+response body, the data comes in a series of chunks. Every individual chunk
 starts with the size of that particular chunk (in hexadecimal), then a newline
 and then the contents of the chunk. This is repeated over and over until the
 end of the response, which is signaled with a zero sized chunk. The point of
diff --git a/http/versions/http2.md b/http/versions/http2.md
index 78bb265ed4..953eba2d70 100644
--- a/http/versions/http2.md
+++ b/http/versions/http2.md
@@ -20,9 +20,9 @@ To ask a server to use HTTP/2, just:
 If your curl does not support HTTP/2, that command line tool returns an error
 saying so. Running `curl -V` shows if your version of curl supports it.
 
-If you by some chance already know that your server speaks HTTP/2 (for example,
-within your own controlled environment where you know exactly what runs in
-your machines) you can shortcut the HTTP/2 "negotiation" with
+If you by some chance already know that your server speaks HTTP/2 (for
+example, within your own controlled environment where you know exactly what
+runs in your machines) you can shortcut the HTTP/2 negotiation with
 `--http2-prior-knowledge`.
 
 ## Multiplexing
diff --git a/internals/backends.md b/internals/backends.md
index 7765863c64..5186855cb7 100644
--- a/internals/backends.md
+++ b/internals/backends.md
@@ -33,10 +33,10 @@ functionality. In these different areas there are multiple different providers:
 Applications (in the upper yellow cloud) access libcurl through the public
 API. The API is fixed and stable.
 
-Internally, the "core" of libcurl uses internal APIs to perform the different
+Internally, the core of libcurl uses internal APIs to perform the different
 duties it needs to do. Each of these internal APIs are powered by alternative
 implementations, in many times powered by different third party libraries.
 
 The image above shows the different third party libraries powering different
-internal APIs. The purple boxes are "one or more" and the dark gray ones are
+internal APIs. The purple boxes are *one or more* and the dark gray ones are
 "one of these".
diff --git a/internals/caches.md b/internals/caches.md
index 6da13babec..a514b34d5c 100644
--- a/internals/caches.md
+++ b/internals/caches.md
@@ -15,7 +15,7 @@ slow) resolve operation again. This cache exists in memory only.
 ## connection cache
 
-Also known as the connection pool. This is where curl puts "live connections"
+Also known as the connection pool. This is where curl puts live connections
 after a transfer is complete so that a subsequent transfer might be able to
 use an already existing connection instead of having to set a new one up.
 
 When a connection is reused, curl avoids name lookups, TLS handshakes and more.
diff --git a/internals/handler.md b/internals/handler.md
index 1b1ae29936..f4862757f3 100644
--- a/internals/handler.md
+++ b/internals/handler.md
@@ -8,7 +8,7 @@ use of all states for all transfers.
 However, each different protocol libcurl speaks also has its unique
 particularities and specialties. In order to not have the code littered with
-conditions in the style "if the protocol is XYZ, then do…", we instead have
+conditions in the style *if the protocol is XYZ, then do…*, we instead have
 the concept of `Curl_handler`. Each supported protocol defines one of those
 in `lib/url.c` there is an array of pointers to such handlers called
 `protocols[]`.
@@ -28,9 +28,9 @@ protocol to work for a transfer. Things that not all other protocols need.
 The handler struct also sets up the name of the protocol and describes its
 feature set with a bitmask.
 
-A libcurl transfer is built around a set of different "actions" and the
-handler can extend each of them. Here are some example function pointers in
-this struct and how they are used:
+A libcurl transfer is built around a set of different actions and the handler
+can extend each of them. Here are some example function pointers in this
+struct and how they are used:
 
 ## Setup connection
 
@@ -50,7 +50,7 @@ After a connection has been established, this function gets called
 
 ## Do
 
-"Do" is simply the action that issues a request for the particular resource
+*Do* is simply the action that issues a request for the particular resource
 the URL identifies.
 All protocols have a do action so this function must be
 provided:
 
@@ -58,7 +58,7 @@ provided:
 
 ## Done
 
-When a transfer is completed, the "done" action is taken:
+When a transfer is completed, the *done* action is taken:
 
     result = conn->handler->done(data, status, premature);
 
diff --git a/internals/memory-debugging.md b/internals/memory-debugging.md
index b9f466a2e3..889fabad36 100644
--- a/internals/memory-debugging.md
+++ b/internals/memory-debugging.md
@@ -2,8 +2,8 @@
 
 The file `lib/memdebug.c` contains debug-versions of a few functions.
 Functions such as `malloc()`, `free()`, `fopen()`, `fclose()`, etc that
-somehow deal with resources that might give us problems if we "leak" them.
-The functions in the memdebug system do nothing fancy, they do their normal
+somehow deal with resources that might give us problems if we leak them. The
+functions in the memdebug system do nothing fancy, they do their normal
 function and then log information about what they just did.
 
 The logged data can then be analyzed after a complete session,
diff --git a/internals/multi.md b/internals/multi.md
index 54aa9bf1eb..e424d6d604 100644
--- a/internals/multi.md
+++ b/internals/multi.md
@@ -3,14 +3,13 @@
 libcurl offers a few different APIs to do transfers; where the primary
 differences are the synchronous easy interface versus the non-blocking multi
 interface. The multi interface itself can then be further used either by using
-the event-driven socket interface or the "normal" perform interface.
+the event-driven socket interface or the normal perform interface.
 
 Internally however, everything is written for the event-driven interface.
 Everything needs to be written in non-blocking fashion so that functions are
-never waiting for data in loop or similar. Unless they are the "surface"
+never waiting for data in a loop or similar. Unless they are the surface
 functions that have that expressed functionality.
 
 The function `curl_easy_perform()` which performs a single transfer
 synchronously, is itself just a wrapper function that internally sets up and
 uses the multi interface.
-
diff --git a/internals/structs.md b/internals/structs.md
index e1cde5967b..05fe0c38f5 100644
--- a/internals/structs.md
+++ b/internals/structs.md
@@ -40,9 +40,9 @@ times.
 ## connectdata
 
   A general idea in libcurl is to keep connections around in a connection
-  "cache" after they have been used in case they are used again and then
-  re-use an existing one instead of creating a new one as it creates a
-  significant performance boost.
+  cache after they have been used in case they are used again and then re-use
+  an existing one instead of creating a new one as it creates a significant
+  performance boost.
 
   Each `connectdata` struct identifies a single physical connection to a
   server. If the connection cannot be kept alive, the connection is closed
@@ -120,7 +120,7 @@ times.
   The concrete function pointer prototypes can be found in `lib/urldata.h`.
 
 - `->scheme` is the URL scheme name, usually spelled out in uppercase. That
-  is "HTTP" or "FTP" etc. SSL versions of the protocol need their own
+  is HTTP or FTP etc. SSL versions of the protocol need their own
   `Curl_handler` setup so HTTPS separate from HTTP.
 
 - `->setup_connection` is called to allow the protocol code to allocate
@@ -172,8 +172,8 @@ times.
 - `->defport` is the default report TCP or UDP port this protocol uses
 
 - `->protocol` is one or more bits in the `CURLPROTO_*` set. The SSL
-  versions have their "base" protocol set and then the SSL variation. Like
-  "HTTP|HTTPS".
+  versions have their base protocol set and then the SSL variation. Like
+  `HTTP|HTTPS`.
 
 - `->flags` is a bitmask with additional information about the protocol
   that makes it get treated differently by the generic engine:
 
@@ -182,8 +182,8 @@
 - `PROTOPT_CLOSEACTION` - this protocol has actions to do before closing
   the connection.
   This flag is no longer used by code, yet still set for a bunch of
   protocol handlers.
 
-  - `PROTOPT_DIRLOCK` - "direction lock". The SSH protocols set this bit to
-    limit which "direction" of socket actions that the main engine concerns
+  - `PROTOPT_DIRLOCK` - direction lock. The SSH protocols set this bit to
+    limit which direction of socket actions that the main engine concerns
     itself with.
 
 - `PROTOPT_NONETWORK` - a protocol that does not use the network (read `file:`)
diff --git a/internals/tests/ci.md b/internals/tests/ci.md
index bad9e43b7c..f29a3870ad 100644
--- a/internals/tests/ci.md
+++ b/internals/tests/ci.md
@@ -25,4 +25,4 @@ or two CI jobs that seemingly are stuck "permafailing", that seems to be
 failing the jobs on a permanent basis.
 
 We work hard to make them not, but it is a tough job and we often see red
-builds even for changes that should otherwise be "all green".
+builds even for changes that should otherwise be all green.
diff --git a/libcurl-http/alt-svc.md b/libcurl-http/alt-svc.md
index a226f8ea47..b3f2b02ea4 100644
--- a/libcurl-http/alt-svc.md
+++ b/libcurl-http/alt-svc.md
@@ -2,7 +2,7 @@
 
 Alternative Services, aka alt-svc, is an HTTP header that lets a server tell
 the client that there is one or more *alternatives* for that server at
-"another place" with the use of the `Alt-Svc:` response header.
+another place with the use of the `Alt-Svc:` response header.
 
 The *alternatives* the server suggests can include a server running on another
 port on the same host, on another completely different hostname and it can
@@ -13,7 +13,7 @@ also offer the service *over another protocol*.
 
 To make libcurl consider any offered alternatives by servers, you must first
 enable it in the handle. You do this by setting the correct bitmask to the
 `CURLOPT_ALTSVC_CTRL` option.
 The bitmask allows the application to limit what
-HTTP versions to allow, and if the "cache" file on disk should only be used to
+HTTP versions to allow, and if the cache file on disk should only be used to
 read from (not write).
 
 Enable alt-svc and allow it to switch to either HTTP/1 or HTTP/2:
diff --git a/libcurl-http/cookies.md b/libcurl-http/cookies.md
index 0e70496105..99b0b6787d 100644
--- a/libcurl-http/cookies.md
+++ b/libcurl-http/cookies.md
@@ -2,7 +2,7 @@
 
 By default and by design, libcurl makes transfers as basic as possible and
 features need to be enabled to get used. One such feature is HTTP cookies,
-more known as just plain and simply "cookies".
+more known as just plain and simply cookies.
 
 Cookies are name/value pairs sent by the server (using a `Set-Cookie:` header)
 to be stored in the client, and are then supposed to get sent back again in
@@ -13,10 +13,10 @@ of cookies.
 
 ## Cookie engine
 
-When you enable the "cookie engine" for a specific easy handle, it means that
-it records incoming cookies, stores them in the in-memory "cookie store" that
-is associated with the easy handle and subsequently sends the proper ones back
-if an HTTP request is made that matches.
+When you enable the cookie engine for a specific easy handle, it means that it
+records incoming cookies, stores them in the in-memory cookie store that is
+associated with the easy handle and subsequently sends the proper ones back if
+an HTTP request is made that matches.
 
 There are two ways to switch on the cookie engine:
 
@@ -27,8 +27,9 @@ the `CURLOPT_COOKIEFILE` option:
 
     curl_easy_setopt(easy, CURLOPT_COOKIEFILE, "cookies.txt");
 
-A common trick is to just specify a non-existing filename or plain "" to have
-it just activate the cookie engine with a blank cookie store to start with.
+A common trick is to just specify a non-existing filename or plain `""` to
+have it just activate the cookie engine with a blank cookie store to start
+with.
This option can be set multiple times and then each of the given files is read. @@ -42,7 +43,7 @@ option: when the easy handle is closed later with `curl_easy_cleanup()`, all known cookies are stored in the given file. The file format is the well-known -"Netscape cookie file" format that browsers also once used. +Netscape cookie file format that browsers also once used. ## Setting custom cookies diff --git a/libcurl-http/download.md index 41473935c1..ba20bd7aa7 100644 --- a/libcurl-http/download.md +++ b/libcurl-http/download.md @@ -22,7 +22,7 @@ metadata associated with the actual payload, called the response body. All downloads get a set of headers too, but when using libcurl you can select whether you want to have them downloaded (seen) or not. -You can ask libcurl to pass on the headers to the same "stream" as the regular +You can ask libcurl to pass on the headers to the same stream as the regular body is, by using `CURLOPT_HEADER`: easy = curl_easy_init(); diff --git a/libcurl-http/headerapi.md index d0f7ccede3..d828d6e7af 100644 --- a/libcurl-http/headerapi.md +++ b/libcurl-http/headerapi.md @@ -37,10 +37,10 @@ request in the series, independently of the actual amount of requests used. ## Header folding -HTTP/1 headers supports a deprecated format called "folding", which means that -there is a continuation line after a header, making the line "folded". +HTTP/1 headers support a deprecated format called *folding*, which means that +there is a continuation line after a header, making the line folded. -The headers API supports folded headers and returns such contents "unfolded" - +The headers API supports folded headers and returns such contents unfolded - where the different parts are separated by a single whitespace character. ## When diff --git a/libcurl.md index 9103be6cdf..dc3d53bb40 100644 --- a/libcurl.md +++ b/libcurl.md @@ -9,7 +9,7 @@ performing their Internet data transfers.
libcurl is a library of functions that are provided with a C API, for applications written in C. You can easily use it from C++ too, with only a few considerations (see [libcurl for C++ programmers](libcurl/cplusplus.md)). For -other languages, there exist "bindings" that work as intermediate layers +other languages, there exist *bindings* that work as intermediate layers between libcurl the library and corresponding functions for the particular language you like. diff --git a/libcurl/api.md index d2d94b9b2c..bbe7af7483 100644 --- a/libcurl/api.md +++ b/libcurl/api.md @@ -57,8 +57,8 @@ defined as: Where `XX` , `YY` and `ZZ` are the main version, release and patch numbers in hexadecimal. All three number fields are always represented using two digits -(eight bits each). 1.2.0 would appear as "0x010200" while version 9.11.7 -appears as "0x090b07". +(eight bits each). 1.2.0 would appear as `0x010200` while version 9.11.7 +appears as `0x090b07`. This 6-digit hexadecimal number is always a greater number in a more recent release. It makes comparisons with greater than and less than work. @@ -82,7 +82,7 @@ changed independent of applications. curl_version_info() returns a pointer to a struct with information about version numbers and various features in the running version of libcurl. You call it by giving it a special age counter so that libcurl knows -the "age" of the libcurl that calls it. The age is a define called +the age of the libcurl that calls it. The age is a define called `CURLVERSION_NOW` and is a counter that is increased at irregular intervals throughout the curl development. The age number tells libcurl what struct set it can return.
diff --git a/libcurl/callbacks.md index 3dc95f6374..b8bc6ff416 100644 --- a/libcurl/callbacks.md +++ b/libcurl/callbacks.md @@ -9,10 +9,10 @@ write it with the exact function prototype to accept the correct arguments and return the documented return code and return value so that libcurl performs the way you want it to. -Each callback option also has a companion option that sets the associated -"user pointer". This user pointer is a pointer that libcurl does not touch or -care about, but just passes on as an argument to the callback. This allows you -to, for example, pass in pointers to local data all the way through to your +Each callback option also has a companion option that sets the associated user +pointer. This user pointer is a pointer that libcurl does not touch or care +about, but just passes on as an argument to the callback. This allows you to, +for example, pass in pointers to local data all the way through to your callback function. Unless explicitly stated in a libcurl function documentation, it is not diff --git a/libcurl/conn/how.md index 02cc21e761..a1b9b90f2c 100644 --- a/libcurl/conn/how.md +++ b/libcurl/conn/how.md @@ -37,7 +37,7 @@ time. When libcurl has multiple addresses left to try to connect to, and there is more than 600 milliseconds left, it will at most allow half the remaining time -for this attempt. This is to avoid a single "sink-hole address" make libcurl +for this attempt. This is to avoid a single sink-hole address making libcurl spend its entire timeout on that bad entry. For example: if there are 1000 milliseconds left of the timeout and there are diff --git a/libcurl/conn/names.md index c8f85120fe..d5396467b6 100644 --- a/libcurl/conn/names.md +++ b/libcurl/conn/names.md @@ -1,7 +1,7 @@ # Name resolving Most transfers libcurl can do involve a name that first needs to be
Using a numerical +translated to an Internet address. That is name resolving. Using a numerical IP address directly in the URL usually avoids the name resolve phase, but in many cases it is not easy to manually replace the name with the IP address. @@ -28,8 +28,8 @@ libcurl can be built to do name resolves in one out of these three different ways and depending on which backend way that is used, it gets a slightly different feature set and sometimes modified behavior. -1. The default backend is invoking the "normal" libc resolver functions in a -new helper-thread, so that it can still do fine-grained timeouts if wanted and +1. The default backend is invoking the normal libc resolver functions in a new +helper-thread, so that it can still do fine-grained timeouts if wanted and there is no blocking calls involved. 2. On older systems, libcurl uses the standard synchronous name resolver @@ -70,9 +70,9 @@ made shared between multiple easy handles using the [share interface](../sharing ## Custom addresses for hosts -Sometimes it is handy to provide "fake" addresses to real host names so that -libcurl connects to a different address instead of one an actual name resolve -would suggest. +Sometimes it is handy to provide fake, custom addresses for real host names so +that libcurl connects to a different address instead of one an actual name +resolve would suggest. With the help of the [CURLOPT_RESOLVE](https://curl.se/libcurl/c/CURLOPT_RESOLVE.html) option, @@ -86,7 +86,7 @@ requested, an application can do: dns = curl_slist_append(NULL, "example.com:443:127.0.0.1"); curl_easy_setopt(curl, CURLOPT_RESOLVE, dns); -Since this puts the "fake" address into the DNS cache, it works even when +Since this puts the fake address into the DNS cache, it works even when following redirects etc. 
## Name server options diff --git a/libcurl/conn/reuse.md b/libcurl/conn/reuse.md index ecef63fa58..bb2edd8cfe 100644 --- a/libcurl/conn/reuse.md +++ b/libcurl/conn/reuse.md @@ -1,7 +1,7 @@ # Connection reuse libcurl keeps a pool of old connections alive. When one transfer has completed -it keeps N connections alive in a "connection pool" (sometimes also called +it keeps N connections alive in a connection pool (sometimes also called connection cache) so that a subsequent transfer that happens to be able to reuse one of the existing connections can use it instead of creating a new one. Reusing a connection instead of creating a new one offers significant @@ -29,7 +29,7 @@ easy handles freely without risking losing the connection pool, and it allows the connection used by one easy handle to get reused by a separate one in a later transfer. Just reuse the multi handle. -## Sharing the "connection cache" +## Sharing the connection cache Since libcurl 7.57.0, applications can use the [share interface](../sharing.md) to have otherwise independent transfers share the same connection pool. diff --git a/libcurl/cplusplus.md b/libcurl/cplusplus.md index 879186d9eb..cd2d789d1c 100644 --- a/libcurl/cplusplus.md +++ b/libcurl/cplusplus.md @@ -17,7 +17,7 @@ a URL: ## Callback considerations Since libcurl is a C library, it does not know anything about C++ member -functions or objects. You can overcome this "limitation" with relative ease +functions or objects. You can overcome this limitation with relative ease using for a static member function that is passed a pointer to the class. Here's an example of a write callback using a C++ method as callback: diff --git a/libcurl/drive/multi-socket.md b/libcurl/drive/multi-socket.md index 3414aa883b..9745ec6570 100644 --- a/libcurl/drive/multi-socket.md +++ b/libcurl/drive/multi-socket.md @@ -10,13 +10,13 @@ transfers in a single application. 
It is usually the API that makes the most sense if you do a large number (>100 or so) of parallel transfers. Event-driven in this case means that your application uses a system level -library or setup that "subscribes" to a number of sockets and it lets your +library or setup that subscribes to a number of sockets and it lets your application know when one of those sockets are readable or writable and it tells you exactly which one. This setup allows clients to scale up the number of simultaneous transfers much higher than with other systems, and still maintain good performance. The -"regular" APIs otherwise waste far too much time scanning through lists of all +regular APIs otherwise waste far too much time scanning through lists of all the sockets. ## Pick one diff --git a/libcurl/verbose.md b/libcurl/verbose.md index 4391e7545f..dd52c1dbfb 100644 --- a/libcurl/verbose.md +++ b/libcurl/verbose.md @@ -7,7 +7,7 @@ moment. The next lifesaver when writing libcurl applications that everyone needs to know about and needs to use extensively, at least while developing libcurl -applications or debugging libcurl itself, is to enable "verbose mode" with +applications or debugging libcurl itself, is to enable verbose mode with `CURLOPT_VERBOSE`: CURLcode ret = curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L); @@ -42,7 +42,7 @@ The trace callback should match a prototype like this: **handle** is the easy handle it concerns, **type** describes the particular data passed to the callback (data in/out, header in/out, TLS data in/out and -"text"), **data** is a pointer pointing to the data being **size** number of +text), **data** is a pointer pointing to the data being **size** number of bytes. **user** is the custom pointer you set with `CURLOPT_DEBUGDATA`. 
The data pointed to by **data** is *not* null terminated, but is exactly of @@ -66,7 +66,7 @@ separate connections and different transfers, there are times when you want to see to which specific transfers or connections the various information belong to. To better understand the trace output. -You can then get the transfer and connection "identifiers" from within the +You can then get the transfer and connection identifiers from within the callback: curl_off_t conn_id; diff --git a/libcurl/ws.md b/libcurl/ws.md index 366e7d6478..b33f54b471 100644 --- a/libcurl/ws.md +++ b/libcurl/ws.md @@ -2,8 +2,8 @@ WebSocket is a transfer protocol done *on top* of HTTP that offers a general purpose bidirectional byte-stream. The protocol was created for more than just -plain uploads and downloads and is more similar to something like "TCP over -HTTP". +plain uploads and downloads and is more similar to something like TCP over +HTTP. A WebSocket client application sets up a connection with an HTTP request that *upgrades* into WebSocket - and once upgraded, the involved parties speak diff --git a/libcurl/ws/options.md b/libcurl/ws/options.md index 7970fb15e3..5f281057bd 100644 --- a/libcurl/ws/options.md +++ b/libcurl/ws/options.md @@ -3,13 +3,13 @@ There is a dedicated setopt option for the application to control a WebSocket communication: `CURLOPT_WS_OPTIONS`. -This option sets a bitmask of "flags" to libcurl, but at the moment, there is +This option sets a bitmask of flags to libcurl, but at the moment, there is only a single bit used. ## Raw mode By setting the `CURLWS_RAW_MODE` bit in the bitmask, libcurl delivers all -WebSocket traffic "raw" to the write callback instead of parsing the WebSocket +WebSocket traffic raw to the write callback instead of parsing the WebSocket traffic itself. This raw mode is intended for applications that maybe implemented WebSocket handling already and want to just move over to use libcurl for the transfer and maintain its own WebSocket logic. 
diff --git a/libcurl/ws/read.md b/libcurl/ws/read.md index 4abefe5b57..f3c9b01aa2 100644 --- a/libcurl/ws/read.md +++ b/libcurl/ws/read.md @@ -8,7 +8,7 @@ these two methods: When the `CURLOPT_CONNECT_ONLY` option is **not** set, WebSocket data is delivered to the write callback. -In the default "frame mode" (as opposed to "raw mode"), libcurl delivers parts +In the default frame mode (as opposed to raw mode), libcurl delivers parts of WebSocket fragments to the callback as data arrives. The application can then call `curl_ws_meta()` to get information about the specific frame that was passed to the callback. diff --git a/project.md b/project.md index 320120a965..02b10a5250 100644 --- a/project.md +++ b/project.md @@ -2,8 +2,8 @@ ![curl logo](curl-logo.jpg) -A funny detail about Open Source projects is that they are called "projects", -as if they were somehow limited in time or ever can get done. The cURL -"project" is a number of loosely coupled individual volunteers working on -writing software together with a common mission: to do reliable data transfers -with Internet protocols, as Open Source. +A funny detail about Open Source projects is that they are called *projects*, +as if they were somehow limited in time or ever can get done. The cURL project +is a number of loosely coupled individual volunteers working on writing +software together with a common mission: to do reliable data transfers with +Internet protocols, as Open Source. diff --git a/project/bugs.md b/project/bugs.md index 7546ad45af..d9f086de25 100644 --- a/project/bugs.md +++ b/project/bugs.md @@ -38,9 +38,9 @@ A good report explains what happened and what you thought was going to happen. Tell us exactly what versions of the different components you used and take us step by step through what you did to arrive at the problem. 
-After you submit a bug report, you can expect there to be follow-up -questions or perhaps requests that you try out various things so the developer -can narrow down the "suspects" and make sure your problem is properly located. +After you submit a bug report, you can expect there to be follow-up questions +or perhaps requests that you try out various things so the developer can +narrow down the suspects and make sure your problem is properly located. A bug report that is submitted then abandoned by the submitter risks getting closed if the developer fails to understand it, fails to reproduce it or faces diff --git a/project/etiquette.md b/project/etiquette.md index b0f0619087..7a153ad17c 100644 --- a/project/etiquette.md +++ b/project/etiquette.md @@ -24,14 +24,14 @@ Please do not reply to an existing message as a shortcut to post a message to the lists. Many mail programs and web archivers use information within mails to keep them -together as "threads", as collections of posts that discuss a certain +together as threads, as collections of posts that discuss a certain subject. If you do not intend to reply on the same or similar subject, do not just hit reply on an existing mail and change subject; create a new mail. ## Reply to the list -When replying to a message from the list, make sure that you do "group reply" -or "reply to all", and not just reply to the author of the single mail you +When replying to a message from the list, make sure that you do group reply +or reply to all, and not just reply to the author of the single mail you reply to. We are actively discouraging replying back to just a single person privately. Keep follow-ups on discussions on the list. @@ -83,7 +83,7 @@ leave out. A lengthy description can be found ## Digest -We allow subscribers to subscribe to the "digest" version of the mailing +We allow subscribers to subscribe to the digest version of the mailing lists. A digest is a collection of mails lumped together in one single mail. 
Should you decide to reply to a mail sent out as a digest, there are two diff --git a/project/name.md b/project/name.md index 685b4e6bdd..5fff371ee4 100644 --- a/project/name.md +++ b/project/name.md @@ -5,13 +5,13 @@ Naming things is hard. The tool was about uploading and downloading data specified with a URL. It was a client-side program (the 'c'), a URL client, and would show the data (by default). 'c' stands for Client and URL: **cURL**. The fact that it could also -be read as "see URL" helped. +be read as see URL helped. Nothing more was needed so the name was selected and we never looked back again. -Later on, someone suggested that curl could actually be a clever "recursive -acronym" (where the first letter in the acronym refers back to the same word): +Later on, someone suggested that curl could actually be a clever recursive +acronym (where the first letter in the acronym refers back to the same word): "Curl URL Request Library". While that is awesome, it was actually not the original thought. We wish we @@ -22,18 +22,17 @@ were not aware of them by the time our curl came to be. ## Pronunciation -Most of us pronounce "curl" with an initial k sound, just like the English -word curl. It rhymes with words like girl and earl. Merriam Webster has a -[short WAV file](https://media.merriam-webster.com/soundc11/c/curl0001.wav) to -help. +Most of us pronounce curl with an initial k sound, just like the English word +curl. It rhymes with words like girl and earl. Merriam Webster has a [short +WAV file](https://media.merriam-webster.com/soundc11/c/curl0001.wav) to help. ## Confusions and mix-ups -Soon after our curl was created another "curl" appeared that created a +Soon after our curl was created another curl appeared that created a programming language. That curl still [exists](https://www.curl.com). -Several libcurl bindings for various programming languages use the term "curl" -or "CURL" in part or completely to describe their bindings. 
Sometimes you find +Several libcurl bindings for various programming languages use the term curl +or CURL in part or completely to describe their bindings. Sometimes you find users talking about curl but referring to neither the command-line tool nor the library that is made by this project. diff --git a/project/releases.md b/project/releases.md index 5a2c7cd3eb..cd318c51e7 100644 --- a/project/releases.md +++ b/project/releases.md @@ -18,11 +18,11 @@ week cycle and unless some really serious and urgent problem shows up we stick to this schedule. We release on a Wednesday, and then again a Wednesday eight weeks later and so it continues. Non-stop. -For every release we tag the source code in the repository with "curl-release -version" and we update the [changelog](https://curl.se/changes.html). +For every release we tag the source code in the repository with the curl +version number and we update the [changelog](https://curl.se/changes.html). -We had done 210 curl releases by August 2022. The entire release history and -changelog is available in our [curl release +We had done a total of 253 releases by January 2024. The entire release +history and changelog is available in our [curl release log](https://curl.se/docs/releases.html). ## Release cycle diff --git a/protocols/network.md b/protocols/network.md index e2289aa9e6..bc53f18a68 100644 --- a/protocols/network.md +++ b/protocols/network.md @@ -9,11 +9,11 @@ addresses. These computers are also called hosts. ## Client and server -The computer, tablet or phone you sit in front of is usually called "the -client" and the machine out there somewhere that you want to exchange data -with is called "the server". The main difference between the client and the -server is in the roles they play. There is nothing that prevents the -roles from being reversed in a subsequent operation. 
+The computer, tablet or phone you sit in front of is usually called *the +client* and the machine out there somewhere that you want to exchange data +with is called *the server*. The main difference between the client and the +server is in the roles they play. There is nothing that prevents the roles +from being reversed in a subsequent operation. A transfer initiative is always taken by the client, as the server cannot contact the client but the client can contact the server. @@ -35,8 +35,8 @@ Once the client knows the hostname, it needs to figure out which IP addresses the host with that name has so that it can contact it. Converting the name to an IP address is called 'name resolving'. The name is -"resolved" to one or a set of addresses. This is usually done by a "DNS -server", DNS being like a big lookup table that can convert names to +*resolved* to one or a set of addresses. This is usually done by a *DNS +server*, DNS being like a big lookup table that can convert names to addresses—all the names on the Internet, really. The computer normally already knows the address of a computer that runs the DNS server as that is part of setting up the network. @@ -49,7 +49,7 @@ name does not exist. ## Establish a connection With one or more IP addresses for the host the client wants to contact, it -sends a "connect request". The connection it wants to establish is called a +sends a *connect request*. The connection it wants to establish is called a TCP ([Transmission Control Protocol](https://en.wikipedia.org/wiki/Transmission_Control_Protocol)) or [QUIC](https://en.wikipedia.org/wiki/QUIC) connection, which is like @@ -91,14 +91,14 @@ in the connect phase. ## Transfer data -When the connected metaphorical "string" is attached to the remote computer, -there is a *connection* established between the two machines. This -connection can then be used to exchange data. This exchange is done using -a "protocol", as discussed in the following chapter. 
+When the connected metaphorical *string* is attached to the remote computer, +there is a *connection* established between the two machines. This connection +can then be used to exchange data. This exchange is done using a *protocol*, +as discussed in the following chapter. Traditionally, a *download* is when data is transferred from a server to a client; conversely, an *upload* is when data is sent from the client to the -server. The client is "down here"; the server is "up there". +server. The client is *down here*; the server is *up there*. ## Disconnect diff --git a/protocols/protocols.md b/protocols/protocols.md index e031f0beb1..92a6bd3463 100644 --- a/protocols/protocols.md +++ b/protocols/protocols.md @@ -11,9 +11,9 @@ to send and so on. ## What protocols does curl support? -curl supports protocols that allow "data transfers" in either or both -directions. We usually also restrict ourselves to protocols which have a "URI -format" described in an RFC or at least is somewhat widely used, as curl works +curl supports protocols that allow data transfers in either or both +directions. We usually also restrict ourselves to protocols which have a URI +format described in an RFC or at least is somewhat widely used, as curl works primarily with URLs (URIs really) as the input key that specifies the transfer. @@ -72,7 +72,7 @@ show up over time, extensions that make sense to support. The interpretation of a protocol sometimes changes even if the spec remains the same. -The protocols mentioned in this chapter are all "Application Protocols", which +The protocols mentioned in this chapter are all *Application Protocols*, which means they are transferred over more lower level protocols, like TCP, UDP and TLS. 
They are also themselves protocols that change over time, get new features and get attacked so that new ways of handling security, etc., forces diff --git a/source/reportvuln.md b/source/reportvuln.md index 601cb9294d..169d8e103b 100644 --- a/source/reportvuln.md +++ b/source/reportvuln.md @@ -42,7 +42,7 @@ announcement. impact of the problem and suggests a release schedule. This discussion should involve the reporter as much as possible. -- The release of the information should be "as soon as possible" and is most +- The release of the information should be as soon as possible and is most often synced with an upcoming release that contains the fix. If the reporter, or anyone else, thinks the next planned release is too far away then a separate earlier release for security reasons should be considered. @@ -54,7 +54,7 @@ announcement. - Request a CVE number ([Common Vulnerabilities and Exposures](https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures)) using HackerOne's form for this purpose. -- Update the "security advisory" with the CVE number. +- Update the security advisory with the CVE number. - Consider informing [distros@openwall](https://oss-security.openwall.org/wiki/mailing-lists/distros) diff --git a/usingcurl/mqtt.md b/usingcurl/mqtt.md index 72a70f788b..452709775e 100644 --- a/usingcurl/mqtt.md +++ b/usingcurl/mqtt.md @@ -1,14 +1,14 @@ # MQTT -A plain "GET" subscribes to the topic and prints all published messages. -Doing a "POST" publishes the post data to the topic and exits. +A plain GET subscribes to the topic and prints all published messages. +Doing a POST publishes the post data to the topic and exits. 
-Subscribe to the temperature in the "home/bedroom" subject published by +Subscribe to the temperature in the `home/bedroom` subject published by example.com: curl mqtt://example.com/home/bedroom/temp -Send the value '75' to the "home/bedroom/dimmer" subject hosted by the +Send the value `75` to the `home/bedroom/dimmer` subject hosted by the example.com server: curl -d 75 mqtt://example.com/home/bedroom/dimmer diff --git a/usingcurl/persist.md b/usingcurl/persist.md index ff5cbf2388..b5d2d89952 100644 --- a/usingcurl/persist.md +++ b/usingcurl/persist.md @@ -13,7 +13,7 @@ The curl command-line tool can, however, only keep connections alive for as long as it runs, so as soon as it exits back to your command line it has to close down all currently open connections (and also free and clean up all the other caches it uses to decrease time of subsequent operations). We call the -pool of alive connections the "connection cache". +pool of alive connections the *connection cache*. If you want to perform N transfers or operations against the same host or same base URL, you could gain a lot of speed by trying to do them in as few curl diff --git a/usingcurl/proxies/captive.md b/usingcurl/proxies/captive.md index acb707539f..62c26c2e60 100644 --- a/usingcurl/proxies/captive.md +++ b/usingcurl/proxies/captive.md @@ -3,15 +3,15 @@ These are not proxies but they are blocking the way between you and the server you want to access. -A "captive portal" is one of these systems that are popular to use in hotels, +A captive portal is one of these systems that are popular to use in hotels, airports and for other sorts of network access to a larger audience. The -portal "captures" all network traffic and redirects you to a login webpage +portal captures all network traffic and redirects you to a login webpage until you have either clicked OK and verified that you have read their conditions or perhaps even made sure that you have paid plenty of money for the right to use the network. 
curl's traffic is of course also captured by such portals and often the best -way is to use a browser to accept the conditions and "get rid of" the portal +way is to use a browser to accept the conditions and get rid of the portal since from then on they often allow all other traffic originating from that same machine (MAC address) for a period of time. diff --git a/usingcurl/proxies/http.md b/usingcurl/proxies/http.md index 47d4f622e4..6893fe5eca 100644 --- a/usingcurl/proxies/http.md +++ b/usingcurl/proxies/http.md @@ -35,24 +35,24 @@ breaking the encryption: ## Non-HTTP protocols over an HTTP proxy -An "HTTP proxy" means the proxy itself speaks HTTP. HTTP proxies are primarily +An HTTP proxy means the proxy itself speaks HTTP. HTTP proxies are primarily used to proxy HTTP but it is also fairly common that they support other protocols as well. In particular, FTP is fairly commonly supported. -When talking FTP "over" an HTTP proxy, it is usually done by more or less -pretending the other protocol works like HTTP and asking the proxy to "get -this URL" even if the URL is not using HTTP. This distinction is important -because it means that when sent over an HTTP proxy like this, curl does not -really speak FTP even though given an FTP URL; thus FTP-specific features do -not work: +When talking FTP over an HTTP proxy, it is usually done by more or less +pretending the other protocol works like HTTP and asking the proxy to get this +URL even if the URL is not using HTTP. This distinction is important because +it means that when sent over an HTTP proxy like this, curl does not really +speak FTP even though given an FTP URL; thus FTP-specific features do not +work: curl -x http://proxy.example.com:80 ftp://ftp.example.com/file.txt -What you can do instead then, is to "tunnel through" the HTTP proxy. +What you can do instead then, is to tunnel through the HTTP proxy. 
## HTTP proxy tunneling -Most HTTP proxies allow clients to "tunnel through" it to a server on the other +Most HTTP proxies allow clients to tunnel through it to a server on the other side. That is exactly what's done every time you use HTTPS through the HTTP proxy. @@ -66,7 +66,7 @@ the proxy administrators know). Still, assuming that the HTTP proxy allows it, you can ask it to tunnel through to a remote server on any port number so you can do other protocols -"normally" even when tunneling. You can do FTP tunneling like this: +normally even when tunneling. You can do FTP tunneling like this: curl -p -x http://proxy.example.com:80 ftp://ftp.example.com/file.txt diff --git a/usingcurl/tls/enable.md b/usingcurl/tls/enable.md index ef53b343ff..528d2bb130 100644 --- a/usingcurl/tls/enable.md +++ b/usingcurl/tls/enable.md @@ -1,15 +1,14 @@ # Enable TLS -curl supports the TLS version of many protocols. HTTP has HTTPS, -FTP has FTPS, LDAP has LDAPS, POP3 has POP3S, IMAP has IMAPS and SMTP has -SMTPS. +curl supports the TLS version of many protocols. HTTP has HTTPS, FTP has FTPS, +LDAP has LDAPS, POP3 has POP3S, IMAP has IMAPS and SMTP has SMTPS. If the server side supports it, you can use the TLS version of these protocols with curl. There are two general approaches to do TLS with protocols. One of them is to speak TLS already from the first connection handshake while the other is to -"upgrade" the connection from plain-text to TLS using protocol specific +upgrade the connection from plain-text to TLS using protocol specific instructions. With curl, if you explicitly specify the TLS version of the protocol (the one diff --git a/usingcurl/tls/pinning.md b/usingcurl/tls/pinning.md index f32501dc13..5b48277a90 100644 --- a/usingcurl/tls/pinning.md +++ b/usingcurl/tls/pinning.md @@ -1,7 +1,7 @@ # Certificate pinning TLS certificate pinning is a way to verify that the public key used to sign -the servers certificate has not changed. It is "pinned". 
+the server's certificate has not changed. It is *pinned*.

 When negotiating a TLS or SSL connection, the server sends a certificate
 indicating its identity. A public key is extracted from this certificate and
diff --git a/usingcurl/tls/verify.md b/usingcurl/tls/verify.md
index 2cdce8e71f..bf5cad5f17 100644
--- a/usingcurl/tls/verify.md
+++ b/usingcurl/tls/verify.md
@@ -5,8 +5,8 @@ be certain that you are communicating with the **correct** host. If we do not
 know that, we could just as well be talking with an impostor that just
 *appears* to be who we think it is.

-To check that it communicates with the right TLS server, curl uses a "CA
-store" - a set of certificates to verify the signature of the server's
+To check that it communicates with the right TLS server, curl uses a CA
+store - a set of certificates to verify the signature of the server's
 certificate. All servers provide a certificate to the client as part of the
 TLS handshake and all public TLS-using servers have acquired that certificate
 from an established Certificate Authority.
diff --git a/usingcurl/tls/versions.md b/usingcurl/tls/versions.md
index 719450c07f..0c59600e01 100644
--- a/usingcurl/tls/versions.md
+++ b/usingcurl/tls/versions.md
@@ -5,7 +5,7 @@ was the first widespread version used on the Internet but that was deemed
 insecure already a long time ago. SSL version 3 took over from there, and it
 too has been deemed not safe enough for use.

-TLS version 1.0 was the first "standard". RFC 2246 was published 1999. TLS 1.1
+TLS version 1.0 was the first standard. RFC 2246 was published in 1999. TLS 1.1
 came out in 2006, further improving security, followed by TLS 1.2 in 2008. TLS
 1.2 came to be the gold standard for TLS for a decade.

@@ -14,7 +14,7 @@ August 2018. This is the most secure and fastest TLS version as of date. It is
 however so new that a lot of software, tools and libraries do not yet support
 it.

-curl is designed to use a "safe version" of SSL/TLS by default. It means that
+curl is designed to use a secure version of SSL/TLS by default. It means that
 it does not negotiate SSLv2 or SSLv3 unless specifically told to, and in fact
 several TLS libraries no longer provide support for those protocols so in many
 cases curl is not even able to speak those protocol versions unless you make a
diff --git a/usingcurl/uploads.md b/usingcurl/uploads.md
index df75be3019..7c2fc14b27 100644
--- a/usingcurl/uploads.md
+++ b/usingcurl/uploads.md
@@ -9,7 +9,7 @@ ways of uploading data.

 You can upload data using one of these protocols: FILE, FTP, FTPS, HTTP,
 HTTPS, IMAP, IMAPS, SCP, SFTP, SMB, SMBS, SMTP, SMTPS and TFTP.

-## HTTP offers several "uploads"
+## HTTP offers several uploads

 HTTP, and its bigger brother HTTPS, offer several different ways to upload
 data to a server and curl provides easy command-line options to do it the
@@ -37,7 +37,7 @@ Read the detailed description on how to do this with curl in the
 Multipart formposts are also used in HTML forms on websites; typically when
 there is a file upload involved. This type of upload is also an HTTP POST but
 it sends the data formatted according to some special rules, which is what the
-"multipart" name means.
+multipart name means.

 Since it sends the data formatted completely differently, you cannot select
 which type of POST to use at your own whim but it entirely depends on what the
@@ -76,10 +76,10 @@ Learn much more about FTPing in the [FTP with curl](../ftp.md) section.

 ## SMTP uploads

-You may not consider sending an email to be "uploading", but to curl it is.
-You upload the mail body to the SMTP server. With SMTP, you also need to
-include all the mail headers you need (`To:`, `From:`, `Date:`, etc.) in the mail
-body as curl does not add any at all.
+You may not consider sending an email to be uploading, but to curl it is. You
+upload the mail body to the SMTP server. With SMTP, you also need to include
+all the mail headers you need (`To:`, `From:`, `Date:`, etc.) in the mail body
+as curl does not add any at all.

     curl -T mail smtp://mail.example.com/ --mail-from user@example.com

diff --git a/usingcurl/version.md b/usingcurl/version.md
index f1f359e891..94ddeac074 100644
--- a/usingcurl/version.md
+++ b/usingcurl/version.md
@@ -36,13 +36,12 @@ The meaning of the four lines?

 ## Line 1: curl

 The first line starts with `curl` and first shows the main version number of
-the tool. Then follows the "platform" the tool was built for within
-parentheses and the libcurl version. Those three fields are common for all
-curl builds.
+the tool. Then follows the platform the tool was built for within parentheses
+and the libcurl version. Those three fields are common for all curl builds.

 If the curl version number has `-DEV` appended to it, it means the version is
 built straight from a in-development source code and it is not an officially
-released and "blessed" version.
+released and *blessed* version.

 The rest of this line contains names of third party components this build of
 curl uses, often with their individual version number next to it with a slash
@@ -115,7 +114,7 @@ Features that can be present there:
 - **NTLM** - NTLM authentication is supported.
 - **NTLM_WB** - NTLM authentication is supported.
 - **PSL** - Public Suffix List (PSL) is available and means that this curl has
-  been built with knowledge about "public suffixes", used for cookies.
+  been built with knowledge about *public suffixes*, used for cookies.
 - **SPNEGO** - SPNEGO authentication is supported.
 - **SSL** - SSL versions of various protocols are supported, such as HTTPS,
   FTPS, POP3S and so on.