Download files from the internet in your Linux terminal. Get the most out of the wget command with our new cheat sheet.
Wget is a free utility to download files from the web. It gets data from the internet and saves it to a file or displays it in your terminal. This is effectively what web browsers such as Firefox or Chromium do, except that by default they render the information in a graphical window and usually require a user to be actively controlling them. The wget utility is designed to be non-interactive, meaning you can script or schedule wget to download files whether you’re at your computer or not.
You can download a file with wget by providing a specific URL. If you provide a URL that defaults to index.html, then the index page gets downloaded. By default, the file is saved to a file of the same name in your current working directory.
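For example, to grab the index page of the example.com placeholder domain:

$ wget http://example.com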
You can make wget send the data to standard output (stdout) instead by using the --output-document option with a dash (-) as the file name.
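For instance, something like this prints the first few lines of the page to your terminal (example.com is a stand-in URL, and head just truncates the output):

$ wget http://example.com --output-document - | head -n4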
You can use the --output-document option (-O for short) to name your download whatever you want.
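For instance, to save the page under a name of your choosing (example.html here is arbitrary):

$ wget http://example.com --output-document example.html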
If you’re downloading a very large file, you might find that you have to interrupt the download. With the --continue option (-c for short), wget can determine where the download left off and continue the file transfer. That means the next time you download a 4 GB Linux distribution ISO, you don’t ever have to go back to the start when something goes wrong.
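A sketch of the idea, with the ISO address standing in for whatever large file you were downloading:

$ wget --continue https://example.com/linux-distro.iso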
If it’s not one big file but several files that you need to download, wget can help you with that. Assuming you know the location and filename pattern of the files you want, you can use Bash range syntax to specify the start and end points of a sequence of integers representing a sequence of filenames.
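For example, assuming the server hosts files named file_1.webp through file_4.webp (hypothetical names), Bash expands the braces into four separate URLs for wget to fetch:

$ wget http://example.com/file_{1..4}.webp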
You can download an entire site, including its directory structure, using the --mirror option. This option is the same as running --recursive --level inf --timestamping --no-remove-listing, which means it’s infinitely recursive, so you’re getting everything on the domain you specify. Depending on how old the website is, that could mean you’re getting a lot more content than you realize.
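For example, to mirror the placeholder domain:

$ wget --mirror http://example.com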
If you’re using wget to archive a site, then the options --no-cookies, --page-requisites, and --convert-links are also useful to ensure that every page is fresh and complete and that the site copy is more or less self-contained.
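A combined archival command might look something like this, with example.com standing in for the site you want to copy:

$ wget --mirror --no-cookies --page-requisites --convert-links http://example.com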
Protocols used for data exchange have a lot of metadata embedded in the packets computers send to communicate. HTTP headers are metadata included in the initial portion of an HTTP request or response.
HTTP headers are components of the initial portion of data. When you browse a website, your browser sends HTTP request headers.
Use the --debug option to see what header information wget sends with each request.
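For example, fetching the placeholder domain with debugging turned on:

$ wget --debug http://example.com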
You can modify your request header with the --header option. For instance, it’s sometimes useful to mimic a specific browser, either for testing or to account for poorly coded sites that only work correctly for specific user agents.
To identify as Microsoft Edge running on Windows:
$ wget --debug --header="User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36 Edg/91.0.864.59" http://example.com
You can also masquerade as a specific mobile device:
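The user agent string below is just a representative iPhone Safari value; wget sends whatever string you provide:

$ wget --debug --header="User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 13_5_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.1.1 Mobile/15E148 Safari/604.1" http://example.com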
In the same way header information is sent with browser requests, header information is also included in responses. You can see response headers with the --debug option.
A 200 response code means that everything has worked as expected. A 301 response, on the other hand, means that a URL has been moved permanently to a different location. It’s a common way for a website admin to relocate content while leaving a “trail” so people visiting the old location can still find it. By default, wget follows redirects, and that’s probably what you normally want it to do.
However, you can control what wget does when it encounters a 301 response with the --max-redirect option. You can set it to 0 to follow no redirects.
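For instance, to stop at the first response without following it anywhere (example.com is a stand-in URL):

$ wget --max-redirect 0 http://example.com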
Alternatively, you can set it to some other number to control how many redirects wget follows.
The --max-redirect option is useful for looking at shortened URLs before actually visiting them. Shortened URLs can be useful for print media, in which users can’t just copy and paste a long URL, or on social networks with character limits (this isn’t as much of an issue on a modern and open source social network like Mastodon).
However, they can also be a little dangerous because their destination is, by nature, concealed. By combining the --server-response option (-S), which prints the HTTP headers the server sends back, with --max-redirect set to 0 so the redirect is never followed, you can peek into a shortened URL without loading the full resource.
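For example, using a hypothetical shortened URL:

$ wget --server-response --max-redirect 0 "https://bit.ly/abcd123"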
The penultimate line of output, starting with Location, reveals the intended destination.
Once you practice thinking about the process of exploring the web as a single command, wget becomes a fast and efficient way to pull information you need from the Internet without bothering with a graphical interface. To help you build it into your usual workflow, we’ve created a cheat sheet with common wget uses and syntax, including an overview of using it to query an API. Download the Linux wget cheat sheet here.