Learning Outcomes
The Screaming Frog SEO Spider is a website crawler that helps you improve onsite SEO by extracting data and auditing for common SEO issues. You can download it and crawl 500 URLs for free, or buy a licence to remove the limit and access advanced features.
- To learn how to run Screaming Frog using the command line for Mac & Windows.
Screaming Frog (SF) is a fantastic desktop crawler that’s available for Windows, Mac and Linux.
This tutorial is separated across multiple blog posts:
You’ll learn not only how to easily automate SF crawls, but also how to automatically wrangle the .csv data using Python.
Then we’ll create a data pipeline which will push all of the data into BigQuery to view it in Google Data Studio.
Finally, we’ll step up the automation and upload our scripts to a Google Cloud virtual machine:
- The virtual machine will turn on every day at a specific time.
- Several Python scripts will automatically execute and will perform the following:
- A list of domains from a .txt file / via environment variables will be sequentially crawled.
- We’ll wrangle the data and save it to BigQuery.
- Then the virtual machine will shut down after all of the domains have either completed or failed.
- The daily data will then be available via Google Data Studio.
In this blog post, you’ll learn how to automate Screaming Frog with the command line!
The Command Line
Many daily activities, such as opening and closing programs or requesting a web page, can be completely automated via the command line.
If you’d like a detailed overview of the different types of commands you can use on your computer, I’d recommend viewing these guides:
Part 1 – Screaming Frog CLI
Mac Terminal + Screaming Frog
This part of the tutorial is only for macOS users; if you’re using Windows, skip ahead to the Windows section instead.
Opening Terminal
First you will need to open Terminal, which can be done with the following steps:
- ⌘ Cmd + Space
- Type terminal
- Press enter
Useful Linux Commands:
Several useful commands include:
- cd ~ (cd allows you to change directory)
- pwd (pwd prints your current working directory)
- mkdir folder (mkdir allows you to create folders)
- clear (clear removes any previous text from your terminal)
How To Open Screaming Frog With The Terminal
Assuming that Screaming Frog is installed in the default location, you can run Screaming Frog with:
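Assuming the default macOS install location, the command looks like this (adjust the path if you installed the app elsewhere):

```shell
# Launch the Screaming Frog GUI from Terminal (default macOS install path)
open "/Applications/Screaming Frog SEO Spider.app"
```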
How To Create An Alias In Terminal
Now let’s create a shortcut for the command that we just ran; this is called an alias.
All of your aliases need to be created inside either your .bash_profile or your .zshrc file.
NB: You can easily find out which shell your Mac Terminal is using with:
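A quick way to check is to print your default shell (a standard command, nothing Screaming Frog specific):

```shell
# Prints your default shell, e.g. /bin/bash or /bin/zsh
echo $SHELL
```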
If it says /bin/zsh, then you will need to update the .zshrc file instead.
You can edit this file with either:
We’ll create an alias called sf that will automatically run the Screaming Frog Application.
Add the following to either your .bash_profile or .zshrc file:
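A minimal sketch of the alias, assuming the default install path:

```shell
# 'sf' opens the Screaming Frog application
alias sf='open "/Applications/Screaming Frog SEO Spider.app"'
```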
Then hit:
Now close your terminal and reload it using:
- ⌘ Cmd + Space
- Type terminal
- Press enter
Now type:
As you can see, we’ve now successfully created a shortcut for loading Screaming Frog.
How To See All Of The Commands:
You can easily get a list of all of the available commands with:
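The app bundle ships a command-line launcher binary, and --help prints its available options:

```shell
# List all Screaming Frog command-line options
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" --help
```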
How To Run A Crawl
If you want to open Screaming Frog and crawl a website, use this:
For example if you wanted to crawl https://sempioneer.com:
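Using the launcher binary inside the app bundle, the --crawl argument opens the app and starts the crawl straight away:

```shell
# Open Screaming Frog and immediately crawl the given site
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" --crawl https://sempioneer.com
```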
You can use any URL or domain name that you’d like and the above commands will:
- Open your Screaming Frog Application.
- Crawl the desired domain.
How To Run Screaming Frog Headless (Without A Graphical User Interface)
It’s possible for us to execute Screaming Frog without a graphical user interface, by adding --headless:
Additionally we can save the crawl by adding --save-crawl:
NB: You will need to purchase a licence to execute Screaming Frog with the --save-crawl functionality.
An example would be:
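A sketch of a headless crawl that also saves the .seospider file when it finishes:

```shell
# Crawl without the GUI and save the crawl file (licensed feature)
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless --save-crawl
```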
How To Export Data
Instead of saving a crawl, we’ll export the data to a specific folder by adding two extra arguments:
- Locate your username by typing pwd in Terminal; your username is the part of the path after /Users/. For example my username is: jamesaphoenix
- Go back to either your .bash_profile file or .zshrc file and create a new alias:
Then add the following alias to the bottom of your file:
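A hedged sketch of such an alias; the name sfcrawl is illustrative, and the two extra arguments are --output-folder and --timestamped-output:

```shell
# 'sfcrawl <url>' runs a headless crawl and saves it into a
# time-stamped folder on the desktop; replace {username} with your own
alias sfcrawl='"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" --headless --save-crawl --output-folder /Users/{username}/Desktop --timestamped-output --crawl'
```

Because --crawl is deliberately the last argument, the URL you type after the alias becomes its value, e.g. sfcrawl https://sempioneer.com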
Please remember to replace {username} with your true username!
Save your file and load up a new Terminal window and enter:
You’ll hopefully have a time-stamped folder on your desktop, and inside of that folder you’ll see a file called crawl.seospider
How To Export A Single Tab
As well as running a crawl, it’s possible to automatically extract the .csv files.
You can export tabs, which are these:
For example if we wanted to crawl the website and export a .csv file with all of the images without alt text, we would do the following:
The syntax for exporting from tabs follows a generic structure:
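The structure is --export-tabs "Tab:Filter", where the names mirror the tab and filter labels in the interface. For the images-without-alt-text example, a sketch might look like this (the "Images:Missing Alt Text" name is assumed to match the Images tab’s filter label):

```shell
# Headless crawl that exports the Images > Missing Alt Text filter to CSV
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless \
  --output-folder ~/Desktop --timestamped-output \
  --export-tabs "Images:Missing Alt Text"
```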
How To Export Multiple Tabs
You can also export multiple files at once by simply separating them with a comma:
In order to see the parent:child relationships for the tabs, simply look at how they are nested inside of the right panel of Screaming Frog:
Let’s simultaneously extract duplicated title tags, missing title tags and missing meta descriptions:
For example my desired URL + username is phoenixandpartners.co.uk + jamesaphoenix:
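A sketch with that URL and username; the three Tab:Filter names are assumed to match the Page Titles and Meta Description filter labels in the interface:

```shell
# Export several tab filters at once by separating them with commas
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://phoenixandpartners.co.uk --headless \
  --output-folder /Users/jamesaphoenix/Desktop --timestamped-output \
  --export-tabs "Page Titles:Duplicate,Page Titles:Missing,Meta Description:Missing"
```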
How To Export Reports
You can also export reports:
The syntax is similar and uses the parent:child structure; however, if there is no child, then only the parent name is required.
Here’s an example where only the parent level is required:
Here’s an example where the parent:child structure is required:
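Two hedged sketches using --save-report; the report names are assumed to match the labels in the Reports menu:

```shell
# Parent-only report name
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless --output-folder ~/Desktop \
  --save-report "Crawl Overview"

# Parent:child report name
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless --output-folder ~/Desktop \
  --save-report "Redirects:All Redirects"
```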
How To Perform Bulk Exports
We can extract the bulk exports too!
An example where only a parent level is required:
An example where the parent:child structure is required:
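Two hedged sketches with --bulk-export; the export names are assumptions that would need to mirror the Bulk Export menu labels in your version of Screaming Frog:

```shell
# Parent-only bulk export (assumed name: "All Inlinks")
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless --output-folder ~/Desktop \
  --bulk-export "All Inlinks"

# Parent:child bulk export (assumed name from the Images sub-menu)
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless --output-folder ~/Desktop \
  --bulk-export "Images:Images Missing Alt Text Inlinks"
```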
How To Create A Sitemap
If you’re using a content management system such as WordPress, then I’d recommend using a plugin such as Yoast / TheSEOFramework / RankMath to automatically build your sitemap.xml files.
However if you’re working with a headless CMS or a static website, you can automatically create sitemaps with Screaming Frog:
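A sketch that crawls headlessly and writes an XML sitemap into the output folder via --create-sitemap:

```shell
# Headless crawl that generates a sitemap in the output folder
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl https://sempioneer.com --headless \
  --output-folder ~/Desktop --timestamped-output \
  --create-sitemap
```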
How To Create Configuration Files
Configuration files allow you to tune the crawl speed, choose specific user agents, include or exclude specific pages, and much more!
After changing the configuration inside of Screaming Frog, you can save it as a configuration file.
We can then apply that configuration file to a headless Screaming Frog crawl via the terminal.
Create Your Config File:
First open up Screaming Frog and go to Configuration > Spider > Extraction > Structured Data:
Then tick the following checkboxes:
- JSON-LD
- Microdata
- RDFa
Click OK.
Then you’ll need to save the configuration file by:
File > Configuration > Save As
I will choose to call my file custom_crawl.seospiderconfig
Make sure to save it inside a new folder on your desktop called config
How To Crawl With A Config File
Let’s crawl the example site with our newly created configuration file:
So in my example it would be:
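A sketch pointing --config at the file saved above (the paths assume the Desktop/config folder from the previous step):

```shell
# Apply the saved configuration file to a headless crawl
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --config ~/Desktop/config/custom_crawl.seospiderconfig \
  --crawl https://sempioneer.com --headless --save-crawl \
  --output-folder ~/Desktop --timestamped-output
```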
How To Crawl Text Files
It’s possible to run Screaming Frog in list mode via the terminal.
Simply create a .txt file with a list of URLs that you’d like to crawl.
These can be from a single website or many websites. Save this .txt to your desktop:
The extra argument used here is --crawl-list, like so:
My example looks like this:
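A sketch assuming the list was saved as urls.txt on the desktop, with one URL per line:

```shell
# Crawl every URL listed in urls.txt (list mode)
"/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher" \
  --crawl-list ~/Desktop/urls.txt --headless --save-crawl \
  --output-folder ~/Desktop --timestamped-output
```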
We’ve finished the Mac section; I hope that this post provides you with a good overview of how to get started with automating Screaming Frog.
Automation is powerful and I encourage you to practice your newfound superpowers!
In the next post, you’ll learn how to automatically wrangle your Screaming Frog data with Python + Pandas.
Windows Command Prompt + Screaming Frog
This section of the post is for Windows users; if you’re using a Mac, click here.
How To use The Command Prompt
First, type Command Prompt into your Windows search bar:
After opening your Command Prompt it should look similar to this:
Now that your Command Prompt is running, type start . and hit enter (this opens the current folder in File Explorer).
Creating Shortcuts In Windows
We’ll create shortcuts that you can run via Command Prompt to automate Screaming Frog!
Let’s store all of these shortcuts in a folder on our desktop.
Additionally we’ll create a shortcut that will navigate to this specific shortcuts folder!
- Create a new folder on your desktop called screaming-frog-commands
- Go to your desktop, right click, and then select New > Shortcut.
This will open a new window:
Change the following command so that the {username} is replaced with your actual username:
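The shortcut target might look like this; cmd.exe’s /k switch keeps the window open after the command runs:

```shell
"C:\Windows\System32\cmd.exe" /k cd "C:\Users\{username}\Desktop\screaming-frog-commands"
```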
Then enter the command inside the Type the location of the item field, click Next, and save the shortcut as sf
An icon will have been saved onto your desktop.
After you click the icon, the shortcut that you entered above will be executed which will:
- Open command prompt.
- Navigate to the screaming-frog-commands folder on your desktop.
How To Open Screaming Frog
Next we need to figure out whether you’re using a 32bit or 64bit version of Windows.
Try to run the 32-bit version in Command Prompt:
If you receive this message “The system cannot find the path specified”, then you’ll need to use the 64-bit command:
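Based on the default install paths, the two candidate commands would be the following (the first fits an install under Program Files (x86), the second an install under plain Program Files):

```shell
REM Install under "Program Files (x86)"
cd "C:\Program Files (x86)\Screaming Frog SEO Spider"

REM Install under "Program Files"
cd "C:\Program Files\Screaming Frog SEO Spider"
```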
To open Screaming Frog from your current working directory, type ScreamingFrogSEOSpiderCli.exe
Hopefully you’ll now have just opened Screaming Frog from the command line!
Close your Command Prompt and open the sf shortcut that we created earlier on. Then open this directory in the file explorer with:
From this folder, let’s create a new command line shortcut to speed up the process:
NB: If you’re running on a 32-bit version of Windows, simply change
“C:\Program Files (x86)\Screaming Frog SEO Spider“ to “C:\Program Files\Screaming Frog SEO Spider“
Name this shortcut open-sf
Then close the Command Prompt and File Explorer, and navigate to your desktop.
Run the sf shortcut and enter open-sf.lnk
This should’ve opened Screaming Frog.
Basically how this works is:
- When we open our sf shortcut, we navigate into the screaming-frog-commands folder.
- Inside that folder there is a shortcut called open-sf.lnk. We then ran this by entering its name, open-sf.lnk
So far we have the following shortcuts:
- “C:\Windows\System32\cmd.exe” /k (This opens Command Prompt)
- cd “C:\Program Files (x86)\Screaming Frog SEO Spider” & ScreamingFrogSEOSpiderCli.exe (This navigates to a specific folder and executes the Screaming Frog application).
Notice the & symbol, which ensures that the first command is executed, and then the second command is executed afterwards inside of the Command Prompt.
How To Run A Crawl
Now close Screaming Frog and the Command Prompt, then re-run your sf shortcut. In the following sections we’ll add more arguments to our shortcut (open-sf.lnk) file:
Enter:
For example if you wanted to crawl https://phoenixandpartners.co.uk/ then it would be:
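Chaining the cd and the CLI call with &, the full command might be:

```shell
cd "C:\Program Files (x86)\Screaming Frog SEO Spider" & ScreamingFrogSEOSpiderCli.exe --crawl https://phoenixandpartners.co.uk/
```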
Let’s create another shortcut in the screaming-frog-commands folder and call it crawl:
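A sketch of the shortcut target, assuming the same pattern as the earlier open-sf shortcut with --crawl appended as the final argument:

```shell
"C:\Windows\System32\cmd.exe" /k cd "C:\Program Files (x86)\Screaming Frog SEO Spider" & ScreamingFrogSEOSpiderCli.exe --crawl
```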
Also notice above how the last argument is --crawl, which means we will only need to pass a URL into this shortcut for it to successfully execute.
- Close everything down.
- Open your sf shortcut.
- Then enter:
This will then crawl from the above URL all via the shortcut!
How To Run Screaming Frog Headless (Without A Graphical User Interface)
We are going to add several extra arguments to our existing crawl shortcut:
It’s possible for us to execute Screaming Frog without a graphical user interface (GUI), by adding the --headless argument:
Additionally we can save the crawl by adding --save-crawl:
NB: You will need to purchase a licence to execute Screaming Frog with the --save-crawl functionality.
How To Save A Crawl:
We can also save our crawls to a specific folder with the --output-folder argument. Additionally, we can make sure that the created folder has a unique name by adding the --timestamped-output argument.
Let’s see all of the commands in action without any shortcuts to easily see what’s happening:
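A sketch of the full command with every argument spelled out (replace {username} with your own):

```shell
cd "C:\Program Files (x86)\Screaming Frog SEO Spider" & ScreamingFrogSEOSpiderCli.exe --crawl https://phoenixandpartners.co.uk/ --headless --save-crawl --output-folder "C:\Users\{username}\Desktop" --timestamped-output
```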
Then save this as a shortcut called save-screaming-frog-crawl
You can now easily access this by:
Now that we’ve covered the basic crawling applications, let’s explore how to export tabs, reports and bulk exports!
How To Export A Single Tab
As well as running a crawl, it’s possible to automatically extract the .csv files.
You can export tabs, which are the following:
For example if we wanted to crawl the website and export a .csv file with all of the images without alt text, we would do the following:
The syntax for exporting from tabs follows a generic structure:
Exporting Multiple Tabs
You can easily export multiple tabs by separating them with a comma. Let’s simultaneously extract duplicated title tags, missing title tags and missing meta descriptions:
To see the parent:child relationships for the tabs, simply look at how they are nested on the right panel of Screaming Frog:
For example my username is jamesaphoenix:
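A sketch using that username; the Tab:Filter names are assumed to match the interface labels:

```shell
cd "C:\Program Files (x86)\Screaming Frog SEO Spider" & ScreamingFrogSEOSpiderCli.exe --crawl https://phoenixandpartners.co.uk/ --headless --output-folder "C:\Users\jamesaphoenix\Desktop" --timestamped-output --export-tabs "Page Titles:Duplicate,Page Titles:Missing,Meta Description:Missing"
```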
How To Export Reports
You can also export reports:
The syntax is similar and uses the parent:child structure; however, if there is no child, then only the parent name is required.
Here’s an example where only the parent level is required:
Here’s an example where the parent:child structure is required:
How To Perform Bulk Exports
We can extract the bulk exports too!
An example where only a parent level is required:
An example where the parent:child structure is required:
How To Create A Sitemap
If you’re using a content management system such as WordPress, then I’d recommend using a plugin such as Yoast/TheSEOFramework/RankMath to automatically build your sitemap.xml files.
However if you’re working with a headless CMS or a static website, you can automatically create sitemaps with Screaming Frog:
How To Create Configuration Files
Configuration files allow you to tune the crawl speed, choose specific user agents, include or exclude specific pages, and much more!
After changing the configuration inside of Screaming Frog, you can save it as a configuration file.
We can then apply that configuration file to a headless Screaming Frog crawl via the terminal.
Create Your Config File:
First open up Screaming Frog and go to Configuration > Spider > Extraction > Structured Data:
Then tick the following checkboxes:
- JSON-LD
- Microdata
- RDFa
Click OK.
Then you’ll need to save the configuration file by:
File > Configuration > Save As
I will choose to call my file custom_crawl.seospiderconfig
Make sure to save it inside a new folder on your desktop called config
How To Crawl With A Config File
Let’s crawl the example site with our newly created configuration file:
So in my example it would be:
How To Crawl Text Files
It’s possible to run Screaming Frog in list mode via the terminal.
Simply create a .txt file with a list of URLs that you’d like to crawl. These can be from a single website or many websites.
Save this .txt to your desktop:
The extra argument used here is --crawl-list, like so:
My example looks like this:
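A sketch assuming urls.txt was saved to the desktop, with one URL per line:

```shell
cd "C:\Program Files (x86)\Screaming Frog SEO Spider" & ScreamingFrogSEOSpiderCli.exe --crawl-list "C:\Users\jamesaphoenix\Desktop\urls.txt" --headless --save-crawl --output-folder "C:\Users\jamesaphoenix\Desktop" --timestamped-output
```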
We’ve finished the Windows section; I hope that this post provides you with a good overview of how to get started with automating Screaming Frog.
Automation is powerful and I encourage you to practice your newfound superpowers!
In the next post, you’ll learn how to automatically wrangle your Screaming Frog data with Python + Pandas.