{"id":1009,"date":"2025-12-03T10:15:19","date_gmt":"2025-12-03T10:15:19","guid":{"rendered":"https:\/\/www.howlinux.com\/?p=1009"},"modified":"2025-12-03T10:15:31","modified_gmt":"2025-12-03T10:15:31","slug":"linux-wget-command","status":"publish","type":"post","link":"https:\/\/www.linuxfordevices.com\/tutorials\/linux\/linux-wget-command","title":{"rendered":"The Linux wget Command &#8211; Linux Download Command"},"content":{"rendered":"\n<p>The Linux wget command is a command-line utility that downloads files from the internet using HTTP, HTTPS, and FTP protocols. It&#8217;s designed to work non-interactively, meaning it can run in the background while you&#8217;re logged out\u2014making it perfect for retrieving large files, automating downloads in scripts, and handling unreliable network connections. Most Linux distributions include wget by default, but if you need to install it, here&#8217;s how to do it across different distros.<\/p>\n\n\n\n<p><strong>Ubuntu\/Debian-based distros: (<a class=\"rank-math-link rank-math-link\" href=\"https:\/\/www.linuxfordevices.com\/tutorials\/linux\/apt-vs-apt-get-command-linux\">difference between apt and apt-get<\/a>)<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo apt install wget\n<\/pre><\/div>\n\n\n<p><strong>Fedora-based distros:<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo dnf install wget\n<\/pre><\/div>\n\n\n<p><strong>Red Hat-based distros:<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo yum install wget\n<\/pre><\/div>\n\n\n<p><strong>Arch Linux-based distros<\/strong>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nsudo pacman -S wget\n<\/pre><\/div>\n\n\n<p><strong>Alpine 
Linux:<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\napk add wget\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">What Makes wget Different from Other Download Tools?<\/h2>\n\n\n\n<p>The wget command stands apart from other download utilities because of its robust feature set designed specifically for automation and unreliable networks. Unlike browser-based downloads or simpler tools, wget was built with scripting and unattended operations in mind.<\/p>\n\n\n\n<p>Here are the key capabilities that make wget indispensable for system administrators and developers:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Runs downloads in the background without requiring an active terminal session<\/li>\n\n\n\n<li>Automatically resumes interrupted downloads from where they left off<\/li>\n\n\n\n<li>Creates complete local mirrors of websites for offline browsing<\/li>\n\n\n\n<li>Crawls through websites to identify broken links and 404 errors<\/li>\n\n\n\n<li>Controls bandwidth usage to prevent overwhelming your network connection<\/li>\n\n\n\n<li>Handles authentication for password-protected resources<\/li>\n\n\n\n<li>Downloads multiple files from a list with a single command<\/li>\n<\/ul>\n\n\n\n<p><strong>The basic syntax of the wget command is:<\/strong><\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nwget &#x5B;options] &lt;URL&gt;\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\">How Do I Download Files with wget?<\/h2>\n\n\n\n<p>The simplest way to use wget is to provide it with a URL. Let&#8217;s download the Ubuntu Server ISO as a practical example. 
This demonstrates how wget handles large file downloads and displays progress information.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nwget https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>When you run this command, wget performs several operations in sequence. First, it resolves the domain name to an IP address. Then it establishes a connection to the server and begins downloading. The output shows you detailed information about the transfer, including the server connection details, file size, download speed, and an estimated time to completion with a progress bar.<\/p>\n\n\n\n<p>By default, wget saves the file with its original filename from the server. But what if you want to save it with a different name? That&#8217;s where the <code>-O<\/code> (capital letter O) option comes in handy.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Saving Downloads with a Custom Filename<\/h3>\n\n\n\n<p>The <code>-O<\/code> option lets you specify exactly what you want to call the downloaded file. This is particularly useful when downloading files with cryptic names or when you want to maintain a specific naming convention in your scripts.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso -O ubuntu-server.iso\n<\/pre><\/div>\n\n\n<p>This downloads the Ubuntu ISO but saves it as &#8220;ubuntu-server.iso&#8221; in your current directory. 
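<\/p>\n\n\n\n<p>A handy variant worth knowing: passing a dash as the filename, <code>-O -<\/code>, writes the download to standard output instead of a file, so you can pipe it straight into another command. Combined with <code>-q<\/code> to silence wget&#8217;s own status output, this is a common pattern (the URL below is just a placeholder):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -qO- https:\/\/example.com\/notes.txt | less\n<\/pre><\/div>\n\n\n<p>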
The <code>-O<\/code> option is especially valuable when working with dynamically generated URLs that might produce files like &#8220;download.php?id=12345&#8221; \u2013 you can give them meaningful names instead.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Specifying a Different Download Directory<\/h3>\n\n\n\n<p>Rather than downloading files to your current directory, you can direct wget to save files in a specific location using the <code>-P<\/code> option. This is cleaner for organizing downloads and essential when automating downloads in scripts.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -P ~\/Downloads https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>This command downloads the ISO directly into your Downloads folder. The directory must exist before running the command, or wget will return an error. You can create the directory first with <code>mkdir -p ~\/Downloads<\/code> if needed.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Do I Resume an Interrupted Download?<\/h2>\n\n\n\n<p>Network interruptions happen. Your WiFi drops, your SSH connection times out, or you accidentally hit Ctrl+C. The <code>-c<\/code> option (short for &#8220;continue&#8221;) is what makes wget particularly valuable for large file downloads over unstable connections.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -c https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>Here&#8217;s how this works in practice: when you run wget with the <code>-c<\/code> option, it checks if a partially downloaded file already exists in the current directory. If it finds one, wget contacts the server and requests only the remaining portion of the file, starting from where the previous download stopped. 
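<\/p>\n\n\n\n<p>Note that resuming only works if the server supports HTTP range requests. As a quick sanity check (not part of the original workflow), you can print the response headers with <code>-S<\/code> in spider mode and look for an <code>Accept-Ranges<\/code> header:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -S --spider https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso 2&gt;&amp;1 | grep -i accept-ranges\n<\/pre><\/div>\n\n\n<p>If this prints <code>Accept-Ranges: bytes<\/code>, the server can serve partial downloads and <code>-c<\/code> will be able to resume.<\/p>\n\n\n\n<p>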
Under the hood, resuming works via an HTTP <code>Range<\/code> request header: wget asks the server for the remaining bytes, the server replies with a <code>206 Partial Content<\/code> response, and the download continues seamlessly.<\/p>\n\n\n\n<p>This is especially valuable when downloading multi-gigabyte files like Linux ISOs, database backups, or video files. Instead of starting over from scratch each time your connection drops, you simply re-run the same wget command with <code>-c<\/code> and pick up where you left off. The savings in time and bandwidth can be significant, particularly on slower connections or when downloading from geographically distant servers.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Can I Control wget&#8217;s Output Display?<\/h2>\n\n\n\n<p>While wget&#8217;s detailed output is helpful for interactive use, it becomes noise when running downloads in scripts or cron jobs. The good news is that wget provides several options to control exactly what information it displays.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Completely Quiet Mode<\/h3>\n\n\n\n<p>The <code>-q<\/code> flag stands for &#8220;quiet&#8221; and suppresses all output completely. This is useful for scripts where you only care about the exit status (0 for success, non-zero for failure).<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nwget -q &lt;URL&gt;\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Non-Verbose Mode<\/h3>\n\n\n\n<p>The <code>-nv<\/code> option provides a middle ground\u2014it turns off verbose output but still displays error messages and completion notices. This is better for logging purposes since you&#8217;ll know if something went wrong.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nwget -nv &lt;URL&gt;\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Showing Only the Progress Bar<\/h3>\n\n\n\n<p>Sometimes you want to see download progress without all the verbose server connection details. 
The <code>--show-progress<\/code> option combined with <code>-q<\/code> gives you exactly that\u2014just a clean progress bar.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -q --show-progress https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>If you&#8217;re running a version of wget older than 1.16, <code>--show-progress<\/code> isn&#8217;t available; use <code>--progress=bar:force<\/code> instead to keep the progress bar visible. The combination of these flags is particularly useful in terminal multiplexers like tmux or screen, where you want to monitor progress without cluttering your terminal history.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Do I Download Multiple Files at Once?<\/h2>\n\n\n\n<p>Rather than running wget multiple times, you can download several files in a single command by listing all URLs in a text file. This approach is cleaner, more maintainable, and perfect for automation.<\/p>\n\n\n\n<p>First, create a text file with one URL per line. Let&#8217;s call it <code>downloads.txt<\/code>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncat &gt; downloads.txt &lt;&lt; EOF\nhttps:\/\/wordpress.org\/latest.tar.gz\nhttps:\/\/github.com\/docker\/compose\/releases\/download\/v2.23.0\/docker-compose-linux-x86_64\nhttps:\/\/go.dev\/dl\/go1.21.5.linux-amd64.tar.gz\nEOF\n<\/pre><\/div>\n\n\n<p>Now use the <code>-i<\/code> option (short for &#8220;input file&#8221;) to pass this list to wget:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -i downloads.txt\n<\/pre><\/div>\n\n\n<p>wget processes each URL sequentially, downloading them one after another. You can combine this with other options we&#8217;ve discussed. 
For example, <code>wget -i downloads.txt -P ~\/Downloads -q --show-progress<\/code> downloads all files to your Downloads folder while showing only progress bars.<\/p>\n\n\n\n<p>This method is particularly powerful for bulk downloads of software packages, documentation sets, or media files. You can even generate the URL list programmatically with scripts if you need to download files following a pattern.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Can I Limit Download Speed with wget?<\/h2>\n\n\n\n<p>When you&#8217;re sharing a network connection or running background downloads, limiting wget&#8217;s bandwidth usage prevents it from saturating your connection and affecting other applications. The <code>--limit-rate<\/code> option gives you precise control over download speed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Setting Bandwidth Limits<\/h3>\n\n\n\n<p>You can express the rate limit in bytes, kilobytes (k suffix), or megabytes (m suffix). Here&#8217;s a practical example limiting the download to 500 kilobytes per second:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --limit-rate=500k https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>For slower connections or when you want to be very conservative with bandwidth, you might limit to 100k or even 50k. On faster connections where you just want to leave some bandwidth for other applications, you might set it to 2m or 5m.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Understanding How Rate Limiting Works<\/h3>\n\n\n\n<p>The <code>--limit-rate<\/code> option sets an average bandwidth target rather than a hard ceiling. wget might briefly exceed the specified rate as it fills network buffers, but it will throttle back to maintain the average over time. 
This approach works better with how TCP\/IP networking actually functions.<\/p>\n\n\n\n<p>One limitation: you cannot change the rate limit while a download is in progress. However, you can stop the download (Ctrl+C), then resume it with a different rate limit using <code>-c<\/code>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -c --limit-rate=200k https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>This resumes the download from where you left off but now limits the speed to 200 KB\/s. This technique is useful when network conditions change\u2014for example, if other people start using your connection during the day, you can pause and resume with a lower rate limit.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Do I Download from Password-Protected Sites?<\/h2>\n\n\n\n<p>Many web servers and FTP sites require authentication before allowing downloads. wget handles HTTP Basic Authentication and FTP authentication through several methods, each with different security trade-offs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">HTTP\/HTTPS Authentication<\/h3>\n\n\n\n<p>For HTTP or HTTPS sites, use the <code>--http-user<\/code> and <code>--http-password<\/code> options:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --http-user=myusername --http-password=mypassword https:\/\/example.com\/protected\/file.zip\n<\/pre><\/div>\n\n\n<p>However, this approach has a security problem: the password appears in your terminal command, in your shell history, and is visible to anyone who runs <code>ps<\/code> while wget is running. 
For better security, use the <code>--ask-password<\/code> option:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --http-user=myusername --ask-password https:\/\/example.com\/protected\/file.zip\n<\/pre><\/div>\n\n\n<p>This prompts you to enter the password interactively, keeping it out of logs and process listings. The password won&#8217;t be echoed to the screen as you type it.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">FTP Authentication<\/h3>\n\n\n\n<p>For FTP downloads, use <code>--ftp-user<\/code> and <code>--ftp-password<\/code>:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --ftp-user=ftpuser --ftp-password=ftppass ftp:\/\/ftp.example.com\/files\/archive.tar.gz\n<\/pre><\/div>\n\n\n<p>You can also embed credentials directly in the URL, though this has the same security concerns:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget ftp:\/\/username:password@ftp.example.com\/files\/archive.tar.gz\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\">Storing Credentials Securely<\/h3>\n\n\n\n<p>For automated scripts that need authentication, store credentials in a <code>.wgetrc<\/code> file in your home directory. This keeps passwords out of your scripts and command history.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncat &gt; ~\/.wgetrc &lt;&lt; EOF\nhttp_user=myusername\nhttp_password=mypassword\nftp_user=ftpuser\nftp_password=ftppass\nEOF\n\nchmod 600 ~\/.wgetrc\n<\/pre><\/div>\n\n\n<p>The <code>chmod 600<\/code> command ensures only you can read this file. Now wget will automatically use these credentials without needing them in your commands. 
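<\/p>\n\n\n\n<p>wget can also read credentials from the standard <code>~\/.netrc<\/code> file shared by many command-line tools; entries use the usual <code>machine<\/code>, <code>login<\/code>, and <code>password<\/code> fields (the hostname and credentials below are placeholders):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ncat &gt; ~\/.netrc &lt;&lt; EOF\nmachine example.com\nlogin myusername\npassword mypassword\nEOF\n\nchmod 600 ~\/.netrc\n<\/pre><\/div>\n\n\n<p>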
For sensitive production environments, consider using more sophisticated secrets management tools rather than storing passwords in plain text files.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Can I Mirror an Entire Website with wget?<\/h2>\n\n\n\n<p>One of wget&#8217;s most powerful features is its ability to recursively download entire websites, creating local copies suitable for offline browsing. This is invaluable for archiving content, creating offline documentation mirrors, or analyzing website structure.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Basic Website Mirroring<\/h3>\n\n\n\n<p>The <code>-m<\/code> option (short for &#8220;mirror&#8221;) enables recursive downloading. Combined with other options, you can create a complete, browsable local copy:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -m -p -k -E https:\/\/example.com\n<\/pre><\/div>\n\n\n<p>Let me break down what each option does:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>-m<\/code> (mirror): Enables recursive downloading with infinite recursion depth, timestamps preservation, and other settings optimal for mirroring<\/li>\n\n\n\n<li><code>-p<\/code> (page-requisites): Downloads all files necessary to properly display HTML pages, including images, CSS files, and JavaScript<\/li>\n\n\n\n<li><code>-k<\/code> (convert-links): Converts links in downloaded HTML files to point to local files instead of the original URLs, making the site browsable offline<\/li>\n\n\n\n<li><code>-E<\/code> (adjust-extension): Adds proper file extensions like .html to files that don&#8217;t have them, which helps local web browsers render them correctly<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Controlling Mirror Depth and Scope<\/h3>\n\n\n\n<p>Without constraints, wget will follow every link it finds, potentially downloading far more than you intended. 
Use these options to control the scope:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -m -p -k -E -l 3 --no-parent https:\/\/example.com\/documentation\/\n<\/pre><\/div>\n\n\n<p>The <code>-l 3<\/code> option limits recursion to three levels deep from the starting URL. The <code>--no-parent<\/code> option prevents wget from following links to parent directories, keeping your mirror focused on the specified path. This is essential when you only want to mirror a section of a site, like documentation, without grabbing the entire domain.<\/p>\n\n\n\n<p><strong>Important warning:<\/strong> Be cautious when mirroring large sites. A site with thousands of pages can take hours to download and consume significant disk space. Always check robots.txt and respect the website&#8217;s terms of service. Many sites explicitly prohibit automated scraping. Use the <code>--wait<\/code> option to add delays between requests to avoid overwhelming servers:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -m -p -k -E --wait=2 https:\/\/example.com\n<\/pre><\/div>\n\n\n<p>This adds a 2-second wait between each request, making your mirror operation much more respectful to the server.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Can I Find Broken Links on a Website?<\/h2>\n\n\n\n<p>Website owners and developers need to identify broken links (404 errors) across their sites. 
wget&#8217;s <code>--spider<\/code> mode was designed specifically for this purpose\u2014it crawls through links without actually downloading content, checking only if resources exist.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Running a Link Check<\/h3>\n\n\n\n<p>The <code>--spider<\/code> option combined with recursive crawling checks every link on a site:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --spider -r -l 5 https:\/\/yourwebsite.com -o link-check.log\n<\/pre><\/div>\n\n\n<p>Here&#8217;s what&#8217;s happening:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><code>--spider<\/code>: Enables spider mode\u2014wget only checks if files exist without downloading them<\/li>\n\n\n\n<li><code>-r<\/code>: Enables recursive link following<\/li>\n\n\n\n<li><code>-l 5<\/code>: Limits recursion depth to 5 levels to keep the check manageable<\/li>\n\n\n\n<li><code>-o link-check.log<\/code>: Saves all output to a log file for later analysis<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\">Analyzing the Results<\/h3>\n\n\n\n<p>After the spider completes, search the log file for 404 errors:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ngrep -B 2 &#039;404&#039; link-check.log | grep &#039;http&#039; | cut -d &#039; &#039; -f 4 | sort -u\n<\/pre><\/div>\n\n\n<p>This pipeline extracts just the URLs that returned 404 errors, sorted and deduplicated. You&#8217;ll get a clean list of broken links to fix. 
The <code>-B 2<\/code> flag shows two lines before each 404 match, giving you context about which page contains the broken link.<\/p>\n\n\n\n<p>For larger sites, consider adding <code>--no-parent<\/code> to restrict the spider to specific sections, and use <code>--wait=1<\/code> to avoid hammering your server with rapid-fire requests.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Do I Run Downloads in the Background?<\/h2>\n\n\n\n<p>For long-running downloads, keeping your terminal open is impractical. The <code>-b<\/code> option sends wget to the background, letting you close your terminal or continue working while the download proceeds.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -b https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>When you run this command, wget immediately returns control to your terminal and displays a message like &#8220;Continuing in background, pid 12345.&#8221; The process ID (12345 in this example) is important\u2014you can use it to check the download status or stop it if needed.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Monitoring Background Downloads<\/h3>\n\n\n\n<p>wget automatically logs background download progress to a file named <code>wget-log<\/code> in the current directory. 
Monitor it in real-time with:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\ntail -f wget-log\n<\/pre><\/div>\n\n\n<p>You can also specify a custom log file location:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -b -o ~\/downloads\/ubuntu.log https:\/\/releases.ubuntu.com\/22.04\/ubuntu-22.04.3-live-server-amd64.iso\n<\/pre><\/div>\n\n\n<p>To stop a background download, use the <code>kill<\/code> command with the process ID:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nkill 12345\n<\/pre><\/div>\n\n\n<p>Background downloads are particularly useful in combination with tools like <code>screen<\/code> or <code>tmux<\/code>, which let you start downloads over SSH and safely disconnect without interrupting them.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How Do I Handle HTTPS Certificate Errors?<\/h2>\n\n\n\n<p>Sometimes wget refuses to download from HTTPS sites due to certificate validation issues. While you should generally fix the underlying certificate problem, there are legitimate cases where you might need to bypass these checks\u2014like when dealing with self-signed certificates in development environments.<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --no-check-certificate https:\/\/self-signed.example.com\/file.zip\n<\/pre><\/div>\n\n\n<p>The <code>--no-check-certificate<\/code> option disables SSL certificate validation. Use this cautiously and only when you trust the source, as it makes you vulnerable to man-in-the-middle attacks. 
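<\/p>\n\n\n\n<p>When the self-signed certificate is one you control, a safer alternative is to trust that specific certificate rather than disabling validation altogether, using the <code>--ca-certificate<\/code> option (the certificate path below is an example):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --ca-certificate=\/etc\/ssl\/certs\/internal-ca.pem https:\/\/self-signed.example.com\/file.zip\n<\/pre><\/div>\n\n\n<p>This keeps certificate verification enabled while accepting your internal CA.<\/p>\n\n\n\n<p>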
For production environments, always fix certificate issues properly rather than bypassing validation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Advanced wget Command Techniques Worth Knowing<\/h2>\n\n\n\n<p>Let me show you a couple of advanced techniques that may come handy when you start using the wget command regularly. <\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Downloading Files with Timestamping<\/h3>\n\n\n\n<p>The <code>-N<\/code> option enables timestamping, which only downloads files if they&#8217;re newer than your local copy. This is perfect for keeping local mirrors synchronized:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget -N https:\/\/example.com\/daily-data.csv\n<\/pre><\/div>\n\n\n<p>If the remote file hasn&#8217;t changed since your last download, wget skips it entirely. This saves bandwidth and time when running regular sync jobs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Using wget with Retries<\/h3>\n\n\n\n<p>For unreliable connections, increase the retry count and add wait time between attempts:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --tries=10 --retry-connrefused --waitretry=5 https:\/\/unstable-server.com\/file.tar.gz\n<\/pre><\/div>\n\n\n<p>This attempts the download up to 10 times, waits 5 seconds between retries, and even retries when the server explicitly refuses connections.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Downloading with Custom User-Agent Strings<\/h3>\n\n\n\n<p>Some servers block wget&#8217;s default user-agent. 
Impersonate a browser if necessary:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\nwget --user-agent=&quot;Mozilla\/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit\/537.36&quot; https:\/\/example.com\/file.pdf\n<\/pre><\/div>\n\n\n<p>This makes wget present itself as a generic Windows desktop browser, which is enough to get past basic user-agent filters.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The <a href=\"https:\/\/www.linuxfordevices.com\/tutorials\/linux\/linux-commands-cheat-sheet\" data-type=\"post\" data-id=\"5034\">Linux <\/a>wget command transforms from a simple download tool into a sophisticated file transfer utility when you understand its full capabilities. Whether you&#8217;re automating downloads in scripts, mirroring entire websites, managing bandwidth-constrained connections, or handling authenticated resources, wget provides the reliability and flexibility needed for production environments.<\/p>\n\n\n\n<p>The key to mastering wget is understanding how to combine options effectively. Rate limiting with resume capabilities, background downloads with authentication, recursive mirroring with depth controls\u2014these combinations handle virtually any download scenario you&#8217;ll encounter.<\/p>\n\n\n\n<p>For deeper exploration, the <a href=\"https:\/\/www.linuxfordevices.com\/tutorials\/linux\/man-command-in-linux-unix\" class=\"rank-math-link\">man command in Linux<\/a> provides comprehensive documentation: <code>man wget<\/code>. 
You can also use <code>wget --help<\/code> for a quick reference of available options.<\/p>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1716448925387\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">What is the Linux wget command used for?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>The wget command in Linux is a non-interactive command-line utility used to download files from web servers. It supports HTTP, HTTPS, and FTP protocols and can work in the background, making it ideal for automated downloads, scripts, and handling unreliable network connections.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716448940702\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How can I use wget to download files?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>To download a file with wget, simply provide the URL: <code>wget https:\/\/example.com\/file.zip<\/code>. You can save it with a different name using <code>wget -O newname.zip https:\/\/example.com\/file.zip<\/code> or specify a different directory with <code>wget -P ~\/Downloads https:\/\/example.com\/file.zip<\/code>.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716448953618\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">Can wget download multiple files at once?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes, wget can download multiple files by reading URLs from a text file using the <code>-i<\/code> option: <code>wget -i urllist.txt<\/code>. Create a text file with one URL per line, and wget will download each file sequentially.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716449031765\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How do I limit the download speed with wget?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Use the <code>--limit-rate<\/code> option to control download speed. 
For example, <code>wget --limit-rate=500k https:\/\/example.com\/file.iso<\/code> limits the download to 500 kilobytes per second. You can specify rates in bytes, kilobytes (k), or megabytes (m).<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716449046775\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How can I specify the directory where wget should save the downloaded files?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Use the -P option followed by the directory path: <code>wget -P ~\/Downloads https:\/\/example.com\/file.zip<\/code>. This downloads the file directly into the specified directory. The directory must exist before running the command.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716449062170\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How do I install wget on my system?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Installation varies by distribution. On Ubuntu\/Debian, use <code>sudo apt install wget<\/code>. On Fedora, use <code>sudo dnf install wget<\/code>. On RHEL\/CentOS, use <code>sudo yum install wget<\/code>. On Arch Linux, use <code>sudo pacman -S wget<\/code>.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716449088400\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How do I resume an interrupted download with wget?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Use the -c option (continue) to resume an interrupted download: <code>wget -c https:\/\/example.com\/largefile.iso<\/code>. 
wget checks if a partially downloaded file exists and resumes from where it stopped, saving bandwidth and time.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1716449113864\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">What does the message &#8220;http request sent, awaiting response&#8221; mean when using wget?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>This message indicates that wget has successfully sent an HTTP request to the server and is waiting for the server&#8217;s response before proceeding with the download. It&#8217;s a normal part of the connection process.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-new-01\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">How do I download from password-protected websites with wget?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>For HTTP\/HTTPS sites, use <code>wget --http-user=username --ask-password https:\/\/example.com\/file.zip<\/code>. For FTP sites, use <code>wget --ftp-user=username --ftp-password=password ftp:\/\/example.com\/file.tar.gz<\/code>. The <code>--ask-password<\/code> option prompts for the password securely without storing it in your command history.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-new-02\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \">Can wget mirror entire websites for offline viewing?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes, use <code>wget -m -p -k -E https:\/\/example.com<\/code> to create a complete local mirror. The -m option enables mirroring mode, -p downloads page requisites (images, CSS, JS), -k converts links for offline browsing, and -E adds proper file extensions. Add <code>--wait=2<\/code> to be respectful to servers.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>The Linux wget command is a command-line utility that downloads files from the internet using HTTP, HTTPS, and FTP protocols. 
It&#8217;s designed to work non-interactively, meaning it can run in the background while you&#8217;re logged out\u2014making it perfect for retrieving large files, automating downloads in scripts, and handling unreliable network connections. Most Linux distributions include [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":1021,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"site-sidebar-layout":"default","site-content-layout":"default","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"default","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"set","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[6],"tags":[],"class_list":["post-1009","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linux"],"_links":{"self":[{"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/posts\/1009","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/comments?post=1009"}],"version-history":[{"count":0,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/posts\/1009\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/media\/1021"}],"wp:attachment":[{"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/media?parent=1009"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/categories?post=1009"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.linuxfordevices.com\/wp-json\/wp\/v2\/tags?post=1009"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}