How to Download and Extract URLs from Sitemaps using the Command Line

Published on February 28, 2024

If you work on SEO-related tasks and are looking for sitemaps or URLs in a sitemap, this article contains a list of Linux / Unix commands to make your job easy.

Commands to make your SEO-related tasks easier

What is an XML Sitemap?

A sitemap is like a map: it shows you the way to your destination. Search engines like Google use sitemaps to navigate a website in a more structured way. An XML sitemap is a file that lists all, or the most important, URLs on a website.

This is an example of the skeleton of a sitemap, like the one residing on our other website:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">

The Googlebot constantly crawls the Internet looking for new and updated webpages. When it finds a sitemap, it categorizes and stores (indexes) each listed page in its database.

If you organize the webpages of your website neatly in a sitemap, Googlebot can understand your website better, crawl it more efficiently and index the pages faster. The interval or frequency with which Googlebot crawls and indexes your webpages varies.

Until now, I've been saying Googlebot, but Googlebot is only one of the gazillion spiders on the Internet. Many others, including the bots from Bing, Yahoo, Yandex and Baidu, also use sitemaps to index your content.

There are two kinds of sitemaps - XML sitemaps and HTML sitemaps. HTML sitemaps are simple webpages that point to other webpages in that website. XML sitemaps are text files that contain a list of URLs on your website.

There are other sitemaps such as RSS or Atom feeds, which we have on this website, as well as text sitemaps such as urllist.txt, where you have one URL per line. This is the simplest type of sitemap.
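Because a text sitemap is just one URL per line, ordinary line-oriented tools work on it directly. Here is a minimal sketch (the filename contents and URLs are made up for illustration):

```shell
# Create a hypothetical urllist.txt: a plain-text sitemap with one URL per line
printf 'https://www.example.com/\nhttps://www.example.com/about\nhttps://www.example.com/contact\n' > urllist.txt

# Since every line is exactly one URL, counting lines counts URLs
wc -l < urllist.txt
```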

What do I need to get a sitemap from the command line?

You will use curl and wget to fetch sitemaps from websites, and you need a terminal to run them. macOS and Linux computers have one pre-installed. On Windows, you may have to install curl and wget manually (recent versions of Windows 10 and 11 ship with curl).

Most of these commands are meant for macOS and Linux/Unix, so if a command does not work on Windows, you might want to consider the Windows Subsystem for Linux (WSL) or a virtual machine running Linux. You will thank me later.

Download an XML sitemap and print the contents

For our example, let us download a website's sitemap and print its contents.

Using curl:

curl -sL


<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet type="text/xsl" href="sitemap.xsl"?><sitemapindex xmlns=""><sitemap><loc></loc><lastmod>2020-06-25T14:45:56-07:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2024-01-04T09:04:18-08:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2024-02-27T06:46:35-08:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2023-12-06T02:23:47-08:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2024-01-11T15:27:10-08:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2023-09-11T15:22:31-07:00</lastmod></sitemap><sitemap><loc></loc><lastmod>2023-12-04T10:56:30-08:00</lastmod></sitemap>

In curl, the -s option keeps it silent (it suppresses the progress meter), and the -L option makes it follow any redirects.
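If you want to experiment with these flags without hitting a live site, curl also accepts file:// URLs, so you can point it at a local file (the paths below are just for illustration):

```shell
# A tiny local file standing in for a remote sitemap
printf '<loc>https://www.example.com/</loc>\n' > /tmp/mini-sitemap.xml

# -s prints the body on STDOUT without a progress meter
curl -s "file:///tmp/mini-sitemap.xml"

# -o saves the body to a file instead of printing it
curl -s -o /tmp/mini-copy.xml "file:///tmp/mini-sitemap.xml"
```

Note that -L does nothing here, since a local file cannot redirect.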

Using wget:

wget -q -O-

In wget, the -q option keeps it quiet, and the -O- option writes the downloaded file to STDOUT. If you use -O followed by a filename instead, wget saves the output to a file with that name.

Extract the URLs from sitemap.xml

To extract only the URLs, we first need to see where they are located in the sitemap: each URL sits inside a <loc> element, so that is the tag we will match on.
Using curl with grep and sed, we can get the URLs with this command:

curl -sL | grep -o "<loc>[^<]*" | sed -e 's/<[^>]*>//g'

OUTPUT (first few results):
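To see what each stage of the pipeline contributes, you can feed it a tiny inline sitemap instead of a live one (the URLs below are made up):

```shell
# A one-line sitemap fragment, similar to what curl prints
xml='<urlset><url><loc>https://www.example.com/</loc></url><url><loc>https://www.example.com/about</loc></url></urlset>'

# grep -o prints each "<loc>..." match on its own line;
# sed then strips the leftover <loc> tag, leaving bare URLs
printf '%s' "$xml" | grep -o '<loc>[^<]*' | sed -e 's/<[^>]*>//g'
```

This prints one URL per line: first https://www.example.com/, then https://www.example.com/about.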

Extract the URLs from sitemap.xml.gz

What if the sitemap is served as a gzipped file? In that case, we pipe it through gunzip.

As an example, we will read the URLs from a gzipped sitemap.

curl -s | gunzip | grep -o "<loc>[^<]*" | sed -e 's/<[^>]*>//g'

OUTPUT (first few results):
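You can verify the gunzip stage locally by compressing a sitemap fragment yourself and pushing it through the same pipeline (the content is made up):

```shell
# Compress a fragment the way a server would serve sitemap.xml.gz
printf '<loc>https://www.example.com/page1</loc>' | gzip -c > /tmp/sitemap-test.xml.gz

# Decompress and extract the URL in one pipeline
cat /tmp/sitemap-test.xml.gz | gunzip | grep -o '<loc>[^<]*' | sed -e 's/<[^>]*>//g'
```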

Extract the URLs from a sitemap and save into a text file

If you want to extract the URLs from a sitemap and store them in a file called nps.txt:

curl -s | grep -o "<loc>[^<]*" | sed -e 's/<[^>]*>//g' > nps.txt

It gets saved in nps.txt. To verify:

$ cat nps.txt

Use our online XML Sitemap Extractor tool

You can just use our online XML Sitemap Extractor if you do not want to do all this.

We have done all the hard work for you. In this online tool, you have the option to save your result as a text file or copy it to the clipboard.


There is a lot more you can do with curl, wget, grep, awk, sed and other Linux utilities.
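As one example, awk can take over the sed stage: if you split each grep match on angle brackets, the URL becomes the third field (the input below is made up):

```shell
# grep -o emits one "<loc>URL" match per line; awk splits each line
# on < and >, so $1 is empty, $2 is "loc" and $3 is the URL
printf '<loc>https://a.example/</loc><loc>https://b.example/</loc>' \
  | grep -o '<loc>[^<]*' | awk -F'[<>]' '{print $3}'
```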

You may bookmark this page if you work on SEO. We will keep updating this page depending on your feedback. Please leave a comment or contact me via email if you have any questions or comments. Thank you for reading this article.


If you have any questions, please contact me at arulbOsutkNiqlzziyties@gNqmaizl.bkcom. You can also post questions in our Facebook group. Thank you.

Disclaimer: Our website is supported by our users. We sometimes earn affiliate commissions when you click through the affiliate links on our website.
