{"@attributes":{"version":"2.0"},"channel":{"title":"ScrapingBee \u2013 The Best Web Scraping API","link":"https:\/\/www.scrapingbee.com\/","description":"Recent content on ScrapingBee \u2013 The Best Web Scraping API","generator":"Hugo","language":"en-us","lastBuildDate":"Fri, 17 Apr 2026 00:00:00 +0000","item":[{"title":"How to handle infinite scroll pages in C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-c\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape <a href=\"https:\/\/www.scrapingbee.com\/blog\/infinite-scroll-puppeteer\/\" >infinite scroll<\/a> web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"Adding items to an eCommerce shopping cart","link":"https:\/\/www.scrapingbee.com\/tutorials\/adding-items-to-an-ecommerce-shopping-cart\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/adding-items-to-an-ecommerce-shopping-cart\/","description":"<p>Here is a quick tutorial on how you may add items to a shopping cart on eCommerce websites using ScrapingBee API via a JS scenario on Python.<\/p>\n<p>1. 
You would need to identify a CSS selector that uniquely identifies the button or 'add to cart' element you wish to click. This can be done via the inspect-element option in any browser; more details can be found in this tutorial:<br><a href=\"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-css-selectors-using-chrome\/\" >https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-css-selectors-using-chrome\/<\/a><\/p>"},{"title":"Data extraction in C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-c\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without the need to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. We specify the labels of the elements we want to extract and their CSS selectors, and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are\u00a0<code>h1<\/code>\u00a0and\u00a0<code>span.text-[20px]<\/code>\u00a0respectively. 
To make sure that they\u2019re the correct ones, you can use the JavaScript function:\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tool\u2019s console.<\/p>"},{"title":"How to extract content from a Shadow DOM","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-content-from-a-shadow-dom\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-content-from-a-shadow-dom\/","description":"<p>Certain websites may hide all of their page content inside a shadow root, which makes scraping them quite challenging. This is because most scrapers cannot directly access HTML content embedded within a shadow root. Here is a guide on how you can extract such data via ScrapingBee.<\/p>\n<hr>\n<p>We will use a quite popular site as an example: <a href=\"http:\/\/www.msn.com\/\" >www.msn.com<\/a><br>If you inspect any article on this page, let\u2019s use this <a href=\"https:\/\/www.msn.com\/en-us\/lifestyle\/lifestyle-buzz\/kate-middleton-and-prince-william-s-new-home-forest-lodge-almost-went-to-a-different-royal-couple\/ar-AA1KNPyI?ocid=hpmsn&amp;amp;cvid=68a6f93e8ed04131b40f3dc49ecfba6c&amp;amp;ei=18\" >one<\/a>. 
You can see that all of its contents are inside a shadow root:<br><img src=\"https:\/\/www.scrapingbee.com\/uploads\/image-11.png\" alt=\"\"><\/p>"},{"title":"How to extract curl requests from Chrome","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-chrome\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-chrome\/","description":"<ol>\n<li>Open the\u00a0<a href=\"https:\/\/developer.chrome.com\/docs\/devtools\/network\/\" >Network<\/a>\u00a0tab in the\u00a0<a href=\"https:\/\/developer.chrome.com\/docs\/devtools\/overview\/\" >DevTools<\/a><\/li>\n<li>Right click (or Ctrl-click) a request<\/li>\n<li>Click &quot;Copy&quot; \u2192 &quot;Copy as cURL&quot;<\/li>\n<li>You can now paste it in the relevant\u00a0<a href=\"https:\/\/www.scrapingbee.com\/curl-converter\/\" >curl converter<\/a>\u00a0to translate it in the language you want<\/li>\n<\/ol>\n<img src=\"https:\/\/www.scrapingbee.com\/uploads\/cleanshot-2022-08-08-at-16-54-342x.png\" width=\"984\" height=\"1228\" alt=\"Screenshot\"\/>"},{"title":"How to extract curl requests from Firefox","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-firefox\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-firefox\/","description":"<ol>\n<li>Open the\u00a0<a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Tools\/Network_Monitor\" >Network Monitor<\/a>\u00a0tab in the\u00a0<a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Tools\" >Developer Tools<\/a><\/li>\n<li>Right click (or Ctrl-click) a request<\/li>\n<li>Click &quot;Copy&quot; \u2192 &quot;Copy as cURL&quot;<\/li>\n<li>You can now paste it in the relevant\u00a0<a href=\"https:\/\/www.scrapingbee.com\/curl-converter\/\" >curl converter<\/a>\u00a0to translate it in the language you want<\/li>\n<\/ol>\n<img 
src=\"https:\/\/www.scrapingbee.com\/uploads\/cleanshot-2022-08-08-at-17-01-292x.png\" width=\"2314\" height=\"676\" alt=\"Screenshot\"\/>"},{"title":"How to extract curl requests from Safari","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-safari\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-curl-requests-from-safari\/","description":"<ol>\n<li>Open the\u00a0<a href=\"https:\/\/support.apple.com\/en-us\/guide\/safari-developer\/dev1f3525e58\/mac\" >Network<\/a>\u00a0tab in the\u00a0<a href=\"https:\/\/support.apple.com\/en-us\/guide\/safari-developer\/dev073038698\/mac\" >Developer Tools<\/a><\/li>\n<li>Right click (or Ctrl-click or two-finger click) a request<\/li>\n<li>Click &quot;Copy as cURL&quot; in the dropdown menu<\/li>\n<li>You can now paste it in the relevant\u00a0<a href=\"https:\/\/www.scrapingbee.com\/curl-converter\/\" >curl converter<\/a>\u00a0to translate it in the language you want<\/li>\n<\/ol>\n<img src=\"https:\/\/www.scrapingbee.com\/uploads\/cleanshot-2022-08-08-at-16-48-052x.png\" width=\"1508\" height=\"446\" alt=\"Screenshot\"\/>"},{"title":"How to remove any element from the HTML response","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-remove-any-element-from-the-html-response\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-remove-any-element-from-the-html-response\/","description":"<p>Sometimes you may need to remove specific HTML elements from the page's content, either to get cleaner results for your <a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction rules<\/a>, or to simply delete unnecessary content from your response.<\/p>\n<p>To achieve that using ScrapingBee, you can use a<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >JavaScript Scenario<\/a>, with an evaluate instruction and execute this custom JS 
code:<\/p>\n<pre tabindex=\"0\"><code>document.querySelectorAll(&#34;ELEMENT-CSS-SELECTOR&#34;).forEach(function(e){e.remove();});\n<\/code><\/pre><p>For example, to remove all of the &lt;style&gt; elements from the response, you can use this JavaScript Scenario:<\/p>"},{"title":"Scrolling and loading more content via a JS scenario","link":"https:\/\/www.scrapingbee.com\/tutorials\/scrolling-and-loading-more-content-via-a-js-scenario\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/scrolling-and-loading-more-content-via-a-js-scenario\/","description":"<p>Certain websites may require you to scroll in order to load more results on the page or within a specific element. <br><br><img src=\"https:\/\/www.scrapingbee.com\/uploads\/image-1.png\" alt=\"\"><br><br>This is a quick guide on how to achieve different scrolling behaviors using a JavaScript scenario.<br><strong>*Note that the JavaScript Scenario has a maximum execution time limit of 40 seconds. Requests exceeding this limit will result in a timeout:<\/strong> <a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/#timeout\" ><strong>https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/#timeout<\/strong><\/a><\/p>\n<hr>\n<h3 id=\"1-scrolling-a-specific-element\">1. Scrolling a Specific Element<\/h3>\n<p>Some page elements, such as tables or graphs, may contain content that only becomes visible after scrolling. <br><img src=\"https:\/\/www.scrapingbee.com\/uploads\/image-8.png\" alt=\"\"><\/p>"},{"title":"Scrolling via page API","link":"https:\/\/www.scrapingbee.com\/tutorials\/scrolling-via-page-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/scrolling-via-page-api\/","description":"<p>Some pages load more content only after you click \u201cLoad more results\u201d or scroll and wait. In reality, the page often fetches additional results from its own API. 
If ScrapingBee can\u2019t load those results, you can target the site\u2019s API URL directly. <br><br>Here\u2019s how to do that using this URL as an example: <a href=\"https:\/\/www.reuters.com\/technology\" >https:\/\/www.reuters.com\/technology<\/a><br><img src=\"https:\/\/www.scrapingbee.com\/uploads\/image-2.png\" alt=\"\"><br><strong>*Note that the JavaScript Scenario has a maximum execution time limit of 40 seconds. Requests exceeding this limit will result in a timeout:<\/strong> <a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/#timeout\" ><strong>https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/#timeout<\/strong><\/a><\/p>"},{"title":"Make concurrent requests in C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-c\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. 
That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<pre tabindex=\"0\"><code>using System;\nusing System.IO;\nusing System.Net;\nusing System.Web;\nusing System.Threading;\n\nnamespace test {\n class test{\n\n private static string BASE_URL = &#34;https:\/\/app.scrapingbee.com\/api\/v1\/?&#34;;\n private static string API_KEY = &#34;YOUR-API-KEY&#34;;\n\n public static string Get(string uri)\n {\n HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);\n request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;\n\n using(HttpWebResponse response = (HttpWebResponse)request.GetResponse())\n using(Stream stream = response.GetResponseStream())\n using(StreamReader reader = new StreamReader(stream))\n {\n return reader.ReadToEnd();\n }\n }\n\n public static bool Scrape(string uri, string path) {\n Console.WriteLine(&#34;Scraping &#34; + uri);\n var query = HttpUtility.ParseQueryString(string.Empty);\n query[&#34;api_key&#34;] = API_KEY;\n query[&#34;url&#34;] = uri;\n string queryString = query.ToString(); \/\/ Transforming the URL queries to string\n\n string output = Get(BASE_URL+queryString); \/\/ Make the request\n try {\n using (StreamWriter sw = File.CreateText(path))\n {\n sw.Write(output);\n }\n return true;\n } catch {return false;}\n }\n\n public static void Main(string[] args) {\n Thread thread1 = new Thread(() =&gt; Scrape(&#34;https:\/\/scrapingbee.com\/blog&#34;, &#34;.\/scrapingbeeBlog.html&#34;));\n Thread thread2 = new Thread(() =&gt; Scrape(&#34;https:\/\/scrapingbee.com\/documentation&#34;, &#34;.\/scrapingbeeDocumentation.html&#34;));\n thread1.Start();\n thread2.Start();\n\n }\n }\n}\n<\/code><\/pre>"},{"title":"Make concurrent requests in 
Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-go\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<p>Making concurrent requests in GoLang is as easy as adding a \u201cgo\u201d keyword before our scraping functions! The code below will make two concurrent requests to ScrapingBee\u2019s pages, and save the content in an HTML file.<\/p>"},{"title":"Make concurrent requests in NodeJS","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-nodejs\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-nodejs\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<p>Making concurrent requests in NodeJS is very straightforward using Cluster module. 
The code below will make two concurrent requests to ScrapingBee\u2019s pages, and save the content in an HTML file.<\/p>"},{"title":"Make concurrent requests in PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-php\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<p>Making concurrent requests in PHP is as easy as creating threads for our scraping functions! The code below will make two concurrent requests to ScrapingBee\u2019s pages and display the HTML content of each page:<\/p>"},{"title":"Make concurrent requests in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-python\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. 
That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> concurrent.futures\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> time\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> scrapingbee <span style=\"color:#f92672\">import<\/span> ScrapingBeeClient <span style=\"color:#75715e\"># Importing SPB&#39;s client<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>client <span style=\"color:#f92672\">=<\/span> ScrapingBeeClient(api_key<span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#39;YOUR-API-KEY&#39;<\/span>) <span style=\"color:#75715e\"># Initialize the client with your API Key, and using screenshot_full_page parameter to take a screenshot!<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>MAX_RETRIES <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">5<\/span> <span style=\"color:#75715e\"># Setting the maximum number of retries if we have failed requests to 5.<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>MAX_THREADS <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">4<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>urls <span style=\"color:#f92672\">=<\/span> [<span style=\"color:#e6db74\">&#34;http:\/\/scrapingbee.com\/blog&#34;<\/span>, <span style=\"color:#e6db74\">&#34;http:\/\/reddit.com\/&#34;<\/span>]\n<\/span><\/span><span 
style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">def<\/span> <span style=\"color:#a6e22e\">scrape<\/span>(url):\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">for<\/span> _ <span style=\"color:#f92672\">in<\/span> range(MAX_RETRIES):\n<\/span><\/span><span style=\"display:flex;\"><span> response <span style=\"color:#f92672\">=<\/span> client<span style=\"color:#f92672\">.<\/span>get(url, params<span style=\"color:#f92672\">=<\/span>{<span style=\"color:#e6db74\">&#39;screenshot&#39;<\/span>: <span style=\"color:#66d9ef\">True<\/span>}) <span style=\"color:#75715e\"># Scrape!<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">if<\/span> response<span style=\"color:#f92672\">.<\/span>ok: <span style=\"color:#75715e\"># If we get a successful request<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">with<\/span> open(<span style=\"color:#e6db74\">&#34;.\/&#34;<\/span><span style=\"color:#f92672\">+<\/span>str(time<span style=\"color:#f92672\">.<\/span>time())<span style=\"color:#f92672\">+<\/span><span style=\"color:#e6db74\">&#34;screenshot.png&#34;<\/span>, <span style=\"color:#e6db74\">&#34;wb&#34;<\/span>) <span style=\"color:#66d9ef\">as<\/span> f:\n<\/span><\/span><span style=\"display:flex;\"><span> f<span style=\"color:#f92672\">.<\/span>write(response<span style=\"color:#f92672\">.<\/span>content) <span style=\"color:#75715e\"># Save the screenshot in the file &#34;screenshot.png&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">break<\/span> <span style=\"color:#75715e\"># Then get out of the retry loop<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">else<\/span>: <span style=\"color:#75715e\"># If we get a failed request, then we 
continue the loop<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> print(response<span style=\"color:#f92672\">.<\/span>content)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> concurrent<span style=\"color:#f92672\">.<\/span>futures<span style=\"color:#f92672\">.<\/span>ThreadPoolExecutor(max_workers<span style=\"color:#f92672\">=<\/span>MAX_THREADS) <span style=\"color:#66d9ef\">as<\/span> executor:\n<\/span><\/span><span style=\"display:flex;\"><span> executor<span style=\"color:#f92672\">.<\/span>map(scrape, urls)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"Make concurrent requests in Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/make-concurrent-requests-in-ruby\/","description":"<p>Our API is designed to allow you to have multiple concurrent scraping operations. That means you can speed up scraping for hundreds, thousands or even millions of pages per day, depending on your plan.<\/p>\n<p>The more concurrent requests limit you have the more calls you can have active in parallel, and the faster you can scrape.<\/p>\n<p>Making concurrent requests in Ruby is as easy as creating threads for our scraping functions! The code below will make two concurrent requests to ScrapingBee\u2019s pages and display the HTML content of each page:<\/p>"},{"title":"Retry failed requests in C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-c\/","description":"<p>For most websites, your first requests will always be successful, however, it\u2019s inevitable that some of them will fail. 
For these failed requests, the API will return a 500 status code and won\u2019t charge you for the request.<\/p>\n<p>In this case, we can make our code retry to make the requests until we reach a maximum number of retries that we set:<\/p>\n<pre tabindex=\"0\"><code>using System;\nusing System.IO;\nusing System.Net;\nusing System.Web;\nusing System.Collections.Generic;\n\nnamespace test {\n class test{\n\n private static string BASE_URL = @&#34;https:\/\/app.scrapingbee.com\/api\/v1\/?&#34;;\n private static string API_KEY = &#34;YOUR-API-KEY&#34;;\n\n public static Dictionary&lt;string, dynamic&gt; Get(string uri)\n {\n HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);\n request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;\n\n using(HttpWebResponse response = (HttpWebResponse)request.GetResponse())\n using(Stream stream = response.GetResponseStream())\n using(StreamReader reader = new StreamReader(stream))\n {\n Dictionary&lt;string, dynamic&gt; OutputList = new Dictionary&lt;string, dynamic&gt;();\n OutputList.Add(&#34;StatusCode&#34;, response.StatusCode);\n OutputList.Add(&#34;Response&#34;, reader.ReadToEnd());\n return OutputList;\n }\n }\n\n public static void Main(string[] args) {\n\n var query = HttpUtility.ParseQueryString(string.Empty);\n query[&#34;api_key&#34;] = API_KEY;\n query[&#34;url&#34;] = @&#34;https:\/\/scrapingbee.com\/blog&#34;;\n string queryString = query.ToString(); \/\/ Transforming the URL queries to string\n\n const int MAX_RETRIES = 5; \/\/ Set the maximum number of retries we&#39;re looking to execute\n\n for (int i = 0; i &lt; MAX_RETRIES; i++) {\n try {\n\n var output = Get(BASE_URL+queryString); \/\/ Make the request\n var StatusCode = output[&#34;StatusCode&#34;];\n var content = output[&#34;Response&#34;];\n\n if (StatusCode == HttpStatusCode.OK) { \/\/ If the response is 200\/OK\n string path = @&#34;.\/ScrapingBeeBlog.html&#34;; \/\/ Output file\n \/\/ Create a file to write 
to.\n using (StreamWriter sw = File.CreateText(path))\n {\n sw.Write(content);\n }\n Console.WriteLine(&#34;Done!&#34;);\n break;\n } else {\n Console.WriteLine(&#34;Failed request; Status code: &#34; + StatusCode);\n Console.WriteLine(&#34;Retrying...&#34;);\n }\n\n } catch (Exception ex) {\n Console.WriteLine(&#34;An error has occurred: &#34; + ex.Message);\n Console.WriteLine(&#34;Retrying...&#34;);\n }\n\n }\n\n }\n }\n}\n<\/code><\/pre>"},{"title":"Retry failed requests in Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-go\/","description":"<p>For most websites, your first requests will always be successful, however, it\u2019s inevitable that some of them will fail. For these failed requests, the API will return a 500 status code and won\u2019t charge you for the request.<\/p>\n<p>In this case, we can make our code retry to make the requests until we reach a maximum number of retries that we set:<\/p>\n<pre tabindex=\"0\"><code>package main\n\nimport (\n &#34;fmt&#34;\n &#34;io&#34;\n &#34;net\/http&#34;\n &#34;os&#34;\n)\n\nconst API_KEY = &#34;YOUR-API-KEY&#34;\nconst SCRAPINGBEE_URL = &#34;https:\/\/app.scrapingbee.com\/api\/v1&#34;\n\nfunc save_page_to_html(target_url string, file_path string) (interface{}, error) { \/\/ Fetch the page via ScrapingBee and save the HTML to file_path\n\n req, err := http.NewRequest(&#34;GET&#34;, SCRAPINGBEE_URL, nil)\n if err != nil {\n return nil, fmt.Errorf(&#34;Failed to build the request: %s&#34;, err)\n }\n\n q := req.URL.Query()\n q.Add(&#34;api_key&#34;, API_KEY)\n q.Add(&#34;url&#34;, target_url)\n req.URL.RawQuery = q.Encode()\n\n client := &amp;http.Client{}\n resp, err := client.Do(req)\n if err != nil {\n return nil, fmt.Errorf(&#34;Failed to request ScrapingBee: %s&#34;, err)\n }\n defer resp.Body.Close()\n\n if resp.StatusCode != http.StatusOK {\n return nil, 
fmt.Errorf(&#34;Request failed with status code %d&#34;, resp.StatusCode)\n }\n\n bodyBytes, err := io.ReadAll(resp.Body)\n if err != nil {\n return nil, fmt.Errorf(&#34;Couldn&#39;t read the response body: %s&#34;, err)\n }\n\n file, err := os.Create(file_path)\n if err != nil {\n return nil, fmt.Errorf(&#34;Couldn&#39;t create the file: %s&#34;, err)\n }\n\n l, err := file.Write(bodyBytes) \/\/ Write content to the file.\n if err != nil {\n file.Close()\n return nil, fmt.Errorf(&#34;Couldn&#39;t write content to the file: %s&#34;, err)\n }\n err = file.Close()\n if err != nil {\n return nil, fmt.Errorf(&#34;Couldn&#39;t close the file: %s&#34;, err)\n }\n\n return l, nil\n}\n\nfunc main() {\n\n MAX_RETRIES := 5 \/\/ Set a maximum number of retries\n\n target_url := &#34;https:\/\/www.scrapingbee.com&#34;\n\n for i := 0; i &lt; MAX_RETRIES; i++ {\n bytes_written, err := save_page_to_html(target_url, &#34;.\/scrapingbee.html&#34;)\n if err != nil {\n fmt.Println(err)\n fmt.Println(&#34;Retrying...&#34;)\n } else {\n fmt.Println(&#34;Done!&#34;, bytes_written)\n break\n }\n }\n\n}\n<\/code><\/pre>"},{"title":"Retry failed requests in PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-php\/","description":"<p>For most websites, your first requests will always be successful, however, it\u2019s inevitable that some of them will fail. 
For these failed requests, the API will return a 500 status code and won\u2019t charge you for the request.<\/p>\n<p>In this case, we can make our code retry to make the requests until we reach a maximum number of retries that we set:<\/p>\n<pre tabindex=\"0\"><code>&lt;?php\n\n\/\/ Get cURL resource\n$ch = curl_init();\n\n\/\/ Set base url &amp; API key\n$BASE_URL = &#34;https:\/\/app.scrapingbee.com\/api\/v1\/?&#34;;\n$API_KEY = &#34;YOUR-API-KEY&#34;;\n\n\/\/ Set max retries:\n$MAX_RETRIES = 5;\n\n\/\/ Set parameters\n$parameters = array(\n &#39;api_key&#39; =&gt; $API_KEY,\n &#39;url&#39; =&gt; &#39;https:\/\/www.scrapingbee.com&#39; \/\/ The URL to scrape\n);\n\/\/ Building the URL query\n$query = http_build_query($parameters);\n\n\/\/ Set the URL for cURL\ncurl_setopt($ch, CURLOPT_URL, $BASE_URL.$query);\n\n\/\/ Set method\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, &#39;GET&#39;);\n\n\/\/ Return the transfer as a string\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);\n\nfor ($i = 0; $i &lt; $MAX_RETRIES; $i++) {\n\n \/\/ Send the request and save response to $response\n $response = curl_exec($ch);\n\n \/\/ Stop if fails\n if (!$response) {\n die(&#39;Error: &#34;&#39; . curl_error($ch) . &#39;&#34; - Code: &#39; . curl_errno($ch));\n }\n\n $status_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);\n echo &#39;HTTP Status Code: &#39; . $status_code . PHP_EOL;\n\n \/\/ If it&#39;s a successful request (200 or 404 status code):\n if (in_array($status_code, array(200, 404))) {\n echo &#39;Response Body: &#39; . $response . 
PHP_EOL;\n break;\n } else {\n echo &#39;Retrying...&#39;;\n }\n\n}\n\n\/\/ Close curl resource to free up system resources\ncurl_close($ch);\n?&gt;\n<\/code><\/pre>"},{"title":"Retry failed requests in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-python\/","description":"<p>For most websites, your first requests will always be successful, however, it\u2019s inevitable that some of them will fail. For these failed requests, the API will return a 500 status code and won\u2019t charge you for the request.<\/p>\n<p>In this case, we can make our code retry to make the requests until we reach a maximum number of retries that we set:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> scrapingbee <span style=\"color:#f92672\">import<\/span> ScrapingBeeClient <span style=\"color:#75715e\"># Importing SPB&#39;s client<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>client <span style=\"color:#f92672\">=<\/span> ScrapingBeeClient(api_key<span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#39;YOUR-API-KEY&#39;<\/span>) <span style=\"color:#75715e\"># Initialize the client with your API Key, and using screenshot_full_page parameter to take a screenshot!<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>MAX_RETRIES <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">5<\/span> <span style=\"color:#75715e\"># Setting the maximum number of retries if we have failed requests to 5.<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#66d9ef\">for<\/span> _ <span style=\"color:#f92672\">in<\/span> range(MAX_RETRIES):\n<\/span><\/span><span style=\"display:flex;\"><span> response <span style=\"color:#f92672\">=<\/span> client<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/scrapingbee.com\/blog&#34;<\/span>, params<span style=\"color:#f92672\">=<\/span>{<span style=\"color:#e6db74\">&#39;screenshot&#39;<\/span>: <span style=\"color:#66d9ef\">True<\/span>}) <span style=\"color:#75715e\"># Scrape!<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">if<\/span> response<span style=\"color:#f92672\">.<\/span>ok: <span style=\"color:#75715e\"># If we get a successful request<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">with<\/span> open(<span style=\"color:#e6db74\">&#34;.\/screenshot.png&#34;<\/span>, <span style=\"color:#e6db74\">&#34;wb&#34;<\/span>) <span style=\"color:#66d9ef\">as<\/span> f:\n<\/span><\/span><span style=\"display:flex;\"><span> f<span style=\"color:#f92672\">.<\/span>write(response<span style=\"color:#f92672\">.<\/span>content) <span style=\"color:#75715e\"># Save the screenshot in the file &#34;screenshot.png&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">break<\/span> <span style=\"color:#75715e\"># Then get out of the retry loop<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">else<\/span>: <span style=\"color:#75715e\"># If we get a failed request, then we continue the loop<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> print(response<span style=\"color:#f92672\">.<\/span>content)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"Retry failed requests in 
Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/retry-failed-requests-in-ruby\/","description":"<p>For most websites, your first requests will always be successful; however, it\u2019s inevitable that some of them will fail. For these failed requests, the API will return a 500 status code and won\u2019t charge you for the request.<\/p>\n<p>In this case, we can make our code retry the request until we reach the maximum number of retries that we set:<\/p>\n<pre tabindex=\"0\"><code>require &#39;net\/http&#39;\nrequire &#39;net\/https&#39;\nrequire &#39;addressable\/uri&#39;\n\n# Classic (GET)\ndef send_request(user_url)\n uri = Addressable::URI.parse(&#34;https:\/\/app.scrapingbee.com\/api\/v1\/&#34;)\n api_key = &#34;YOUR-API-KEY&#34;\n uri.query_values = {\n &#39;api_key&#39; =&gt; api_key,\n &#39;url&#39; =&gt; user_url\n }\n uri = URI(uri)\n\n # Create client\n http = Net::HTTP.new(uri.host, uri.port)\n http.use_ssl = true\n http.verify_mode = OpenSSL::SSL::VERIFY_PEER\n\n # Create Request\n req = Net::HTTP::Get.new(uri)\n\n # Fetch Request\n res = http.request(req)\n\n # Return Response\n return res\nrescue StandardError =&gt; e\n puts &#34;HTTP Request failed (#{ e.message })&#34;\nend\n\nmax_retries = 5\nfor a in 1..max_retries do\n request = send_request(&#34;https:\/\/scrapingbee.com&#34;)\n # Net::HTTP returns the status code as a String, so convert it before comparing\n if not [404, 200].include?(request.code.to_i)\n puts &#34;Request failed - Status Code: #{ request.code }&#34;\n puts &#34;Retrying...&#34;\n else\n puts &#34;Successful request - Status Code: #{ request.code }&#34;\n puts request.body\n break\n end\nend\n<\/code><\/pre>"},{"title":"Getting started with ScrapingBee and C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-c\/","description":"<p>In this tutorial, we will see how you can use ScrapingBee\u2019s API with C#, and use it to scrape web pages. As such, we will cover these topics:<\/p>\n<ul>\n<li>General structure of an API request<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let\u2019s get started!<\/p>\n<h3 id=\"1-general-structure-of-an-api-request\">1. General structure of an API request<\/h3>\n<p>The general structure of an API request made in C# will always look like this:<\/p>\n<pre tabindex=\"0\"><code>using System;\nusing System.IO;\nusing System.Net;\nusing System.Web;\nnamespace test {\n class test{\n\n private static string BASE_URL = @&#34;https:\/\/app.scrapingbee.com\/api\/v1\/&#34;;\n private static string API_KEY = &#34;YOUR-API-KEY&#34;;\n\n public static string Get(string url)\n {\n string uri = BASE_URL + &#34;?api_key=&#34; + API_KEY + &#34;&amp;url=&#34; + HttpUtility.UrlEncode(url); \/\/ URL-encode the target URL\n HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);\n request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;\n\n using(HttpWebResponse response = (HttpWebResponse)request.GetResponse())\n using(Stream stream = response.GetResponseStream())\n using(StreamReader reader = new StreamReader(stream))\n {\n return reader.ReadToEnd();\n }\n }\n }\n}\n<\/code><\/pre><p>And you can do whatever you want with the response variable! For example:<\/p>"},{"title":"Getting started with ScrapingBee and Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-go\/","description":"<p>In this tutorial, we will see how you can use ScrapingBee\u2019s API with GoLang, and use it to scrape web pages. 
As such, we will cover these topics:<\/p>\n<ul>\n<li>General structure of an API request<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let\u2019s get started!<\/p>\n<h3 id=\"1-general-structure-of-an-api-request\">1. General structure of an API request<\/h3>\n<p>The general structure of an API request made in Go will always look like this:<\/p>\n<pre tabindex=\"0\"><code>package main\n\nimport (\n &#34;fmt&#34;\n &#34;io\/ioutil&#34;\n &#34;net\/http&#34;\n &#34;net\/url&#34;\n)\nfunc get_request() *http.Response {\n \/\/ Create client\n client := &amp;http.Client{}\n\n my_url := url.QueryEscape(&#34;YOUR-URL&#34;) \/\/ Encoding the URL\n \/\/ Create the request\n req, err := http.NewRequest(&#34;GET&#34;, &#34;https:\/\/app.scrapingbee.com\/api\/v1\/?api_key=YOUR-API-KEY&amp;url=&#34;+my_url, nil)\n\n parseFormErr := req.ParseForm()\n if parseFormErr != nil {\n fmt.Println(parseFormErr)\n }\n\n \/\/ Fetch Request\n resp, err := client.Do(req)\n\n if err != nil {\n fmt.Println(&#34;Failure : &#34;, err)\n }\n\n return resp \/\/ Return the response\n}\n<\/code><\/pre><p>And you can do whatever you want with the response variable! For example:<\/p>"},{"title":"Getting started with ScrapingBee and PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-php\/","description":"<p>In this tutorial, we will see how you can use ScrapingBee\u2019s API with PHP, and use it to scrape web pages. As such, we will cover these topics:<\/p>\n<ul>\n<li>General structure of an API request<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let\u2019s get started!<\/p>\n<h3 id=\"1-general-structure-of-an-api-request\">1. 
General structure of an API request<\/h3>\n<p>The general structure of an API request made in PHP will always look like this:<\/p>\n<pre tabindex=\"0\"><code>\n&lt;?php\n\n\/\/ Get cURL resource\n$ch = curl_init();\n\n\/\/ Set base url &amp; API key\n$BASE_URL = &#34;https:\/\/app.scrapingbee.com\/api\/v1\/?&#34;;\n$API_KEY = &#34;YOUR-API-KEY&#34;;\n\n\/\/ Set parameters\n$parameters = array(\n &#39;api_key&#39; =&gt; $API_KEY,\n &#39;url&#39; =&gt; &#39;YOUR-URL&#39; \/\/ The URL to scrape\n);\n\/\/ Building the URL query\n$query = http_build_query($parameters);\n\n\/\/ Set the URL for cURL\ncurl_setopt($ch, CURLOPT_URL, $BASE_URL.$query);\n\n\/\/ Set method\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, &#39;GET&#39;);\n\n\/\/ Return the transfer as a string\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);\n\n\/\/ Send the request and save response to $response\n$response = curl_exec($ch);\n\n\/\/ Stop if fails\nif (!$response) {\n die(&#39;Error: &#34;&#39; . curl_error($ch) . &#39;&#34; - Code: &#39; . curl_errno($ch));\n}\n\n\/\/ Do what you want with the response here\n\n\/\/ Close curl resource to free up system resources\ncurl_close($ch);\n?&gt;\n<\/code><\/pre><p>And you can do whatever you want with the response variable! For example:<\/p>"},{"title":"Getting started with ScrapingBee and Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbee-and-ruby\/","description":"<p>In this tutorial, we will see how you can use ScrapingBee\u2019s API with Ruby, and use it to scrape web pages. As such, we will cover these topics:<\/p>\n<ul>\n<li>General structure of an API request<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let\u2019s get started!<\/p>\n<h3 id=\"1-general-structure-of-an-api-request\">1. 
General structure of an API request<\/h3>\n<p>The general structure of an API request made in Ruby will always look like this:<\/p>\n<pre tabindex=\"0\"><code>require &#39;net\/http&#39;\nrequire &#39;net\/https&#39;\n\n# Classic (GET)\ndef send_request\n api_key = &#34;YOUR-API-KEY&#34;\n user_url = &#34;YOUR-URL&#34;\n\n # URL-encode the target URL before appending it as a query parameter\n uri = URI(&#39;https:\/\/app.scrapingbee.com\/api\/v1\/?api_key=&#39;+api_key+&#39;&amp;url=&#39;+URI.encode_www_form_component(user_url))\n\n # Create client\n http = Net::HTTP.new(uri.host, uri.port)\n http.use_ssl = true\n http.verify_mode = OpenSSL::SSL::VERIFY_PEER\n\n # Create Request\n req = Net::HTTP::Get.new(uri)\n\n # Fetch Request\n res = http.request(req)\n\n # Return Response\n return res\nrescue StandardError =&gt; e\n puts &#34;HTTP Request failed (#{ e.message })&#34;\nend\n<\/code><\/pre><p>And you can do whatever you want with the response variable! For example:<\/p>"},{"title":"Getting started with ScrapingBee's NodeJS SDK","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbees-nodejs-sdk\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbees-nodejs-sdk\/","description":"<p>In this tutorial, we will see how you can integrate ScrapingBee\u2019s API with NodeJS using our Software Development Kit (SDK), and use it to scrape web pages. As such, we will cover these topics:<\/p>\n<ul>\n<li>Install ScrapingBee\u2019s NodeJS SDK<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let\u2019s get started!<\/p>\n<h3 id=\"1-install-the-sdk\">1. Install the SDK<\/h3>\n<p>Before using the SDK, we first have to install it. 
And we can do that using this command:\u00a0<code>npm install scrapingbee<\/code>.<\/p>"},{"title":"Getting started with ScrapingBee's Python SDK","link":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbees-python-sdk\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/getting-started-with-scrapingbees-python-sdk\/","description":"<p>In this tutorial, we will see how you can integrate ScrapingBee\u2019s API with Python using our Software Development Kit (SDK), and use it to scrape web pages. As such, we will cover these topics:<\/p>\n<ul>\n<li>Install ScrapingBee\u2019s Python SDK<\/li>\n<li>Create your first API request.<\/li>\n<\/ul>\n<p>Let's get started!<\/p>\n<h3 id=\"1-install-the-sdk\">1. Install the SDK<\/h3>\n<p>Before using the SDK, we first have to install it, which we can do with this command:<\/p>\n<p><code>pip install scrapingbee<\/code><\/p>"},{"title":"Crawl4AI web scraping: A guide to AI-friendly web crawling","link":"https:\/\/www.scrapingbee.com\/blog\/crawl4ai\/","pubDate":"Fri, 17 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/crawl4ai\/","description":"<p>If you're building stuff with large language models or AI agents, chances are you'll need web data. And that means writing a crawler, ideally something fast, flexible, and not a total pain to set up. Like, we probably don't want to spend countless hours trying to run a simple &quot;hello world&quot; app. That's where Crawl4AI web scraping comes in.<\/p>\n<p>Crawl4AI is an open-source crawler made by devs, for devs; and if you're asking &quot;what is Crawl4AI?&quot;, it's a tool built for control, speed, and structured output. 
It gives you control, speed, structured output, and enough room to do serious things without getting buried in boilerplate.<\/p>"},{"title":"API Monitoring Tools Every Developer Should Know","link":"https:\/\/www.scrapingbee.com\/blog\/best-api-analytics\/","pubDate":"Thu, 16 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-api-analytics\/","description":"<p>Whether you are building a microservices architecture or integrating third-party payment gateways, API monitoring is the heartbeat of your system. I've spent years building scrapers and backend services, and I've learned the hard way that &quot;it works on my machine&quot; doesn't mean it stays working at 3:00 AM.<\/p>\n<p>Modern applications depend on robust API monitoring because even a few seconds of downtime can cascade into a total system failure. When you're managing dozens of endpoints, you need more than just a ping; you need a way to ensure API reliability and catch performance bottlenecks before your users do. In this guide, I'll compare the best API monitoring tools available in 2026 to help you make an informed decision for your stack.<\/p>"},{"title":"Best Rotating and Residential Proxies for Web Scraping in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/rotating-proxies\/","pubDate":"Thu, 16 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/rotating-proxies\/","description":"<p>The best rotating proxies are one of the most effective solutions for web scraping because they help avoid blocks by routing requests through trusted IPs that change automatically. Residential proxies use real user IP addresses, which makes them harder for websites to detect than datacenter proxies. Rotating proxies add another layer of protection by switching IPs for each request through a backconnect system.<\/p>\n<p>Providers build these networks in different ways. 
Some rely on peer-to-peer bandwidth sharing, others use SDKs such as the <a href=\"https:\/\/bright-sdk.com\/\" target=\"_blank\" >Bright SDK<\/a>, and some rent unused ISP bandwidth through networks like <a href=\"https:\/\/divinetworks.com\/\" target=\"_blank\" >Divi+<\/a>. That's also why <a href=\"https:\/\/www.scrapingbee.com\/blog\/isp-proxy\/\" target=\"_blank\" >ISP proxies<\/a> can be a strong option when you need residential IP reputation with datacenter-level performance.<\/p>"},{"title":"The Best eCommerce Scrapers for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-ecommerce-apis\/","pubDate":"Wed, 15 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-ecommerce-apis\/","description":"<p>The rapid growth of ecommerce data extraction has transformed how businesses approach price tracking, inventory monitoring, competitor research, and analytics. This guide compares the top ecommerce scraper solutions across APIs, proxy platforms, and automation tools. Discover which scraping tool fits your technical needs, scale requirements, and data goals.<\/p>\n<p>When building these data pipelines, many developers opt for a <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> to handle the complexities of header rotation and proxy management. Choosing the right ecommerce data scraping tools depends on your specific technical stack and the anti-bot measures of your target sites.<\/p>"},{"title":"11 Best Web Scraping Services in USA (2026)","link":"https:\/\/www.scrapingbee.com\/blog\/best-web-scraping-services\/","pubDate":"Tue, 14 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-web-scraping-services\/","description":"<p>ScrapingBee is the best web scraping service in the USA because it delivers reliable results without forcing you to manage browsers, proxies, or anti-bot workarounds.<\/p>\n<p>Modern scraping has moved far beyond simple HTML downloads. 
Many sites now require JavaScript rendering, deal with aggressive bot detection, trigger CAPTCHAs, and load key content through multiple API calls. The right provider handles those issues consistently, whether you are building with code or using a no-code workflow.<\/p>"},{"title":"How To Scrape Google Trends Data Using PyTrends","link":"https:\/\/www.scrapingbee.com\/blog\/google-trends-scraper\/","pubDate":"Tue, 14 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/google-trends-scraper\/","description":"<p>A Google Trends scraper automates the collection of search interest data from Google Trends, giving you programmatic access to what the world is searching for. Python is the go-to language for this task thanks to libraries like requests, pandas, and pytrends that make data extraction and analysis straightforward. Instead of manually checking trends one keyword at a time, a scraper lets you monitor thousands of search terms across regions and time periods in minutes.<\/p>"},{"title":"17 Best Web Scraping Tools Tested & Ranked For 2026","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-tools\/","pubDate":"Mon, 13 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-tools\/","description":"<p>The best web scraping tools in 2026 range from lightweight open-source libraries to full-scale scraping platforms, and each one promises speed, intelligence, or &quot;AI-powered&quot; capabilities. Picking the right one comes down to what you actually need it to do.<\/p>\n<p>This guide breaks down 17 web scraping tools based on hands-on testing. You'll see what each tool does well, where it falls short, and what it costs. 
Whether you're looking for a managed service like ScrapingBee or a free, open-source option you can customize yourself, you'll walk away knowing exactly which tool fits your project.<\/p>"},{"title":"7 Best Idealista Scrapers for Different Use Cases","link":"https:\/\/www.scrapingbee.com\/blog\/idealista-scraper\/","pubDate":"Mon, 13 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/idealista-scraper\/","description":"<p>Finding the right web scraper for idealista is no longer just about downloading HTML. In 2026, it is about navigating one of the most sophisticated anti-bot environments in the real estate sector. To maintain an edge in market research, developers and investors need tools that can handle dynamic HTML structure changes and heavy rate limiting.<\/p>\n<p>The landscape favors reliability and scale. For engineers, API-based solutions that manage residential proxies and headless browsers automatically are the gold standard for gathering structured property data. Meanwhile, no-code tools have become more resilient, allowing non-technical users to easily scrape the listings page without writing a single line of code. Whether you are tracking market trends or building a lead list of real estate agents, these tools balance performance and ease of use for accessing the idealista website.<\/p>"},{"title":"7 Best Web Scraping Tools Python: Top Libraries for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-python-web-scraping-libraries\/","pubDate":"Fri, 10 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-python-web-scraping-libraries\/","description":"<p>ScrapingBee is the best Python web scraping solution for most use cases because it handles the hard parts like sessions, cookies, JavaScript rendering, and common anti-bot defenses, so you can focus on extracting data.<\/p>\n<p>Web scraping is usually harder than it looks. 
Dynamic pages, login flows, rate limits, CAPTCHAs, and IP blocks can break simple scripts fast. That's why the right library or scraping platform matters.<\/p>\n<p>In this tutorial, I'll walk through the best Python web scraping libraries and tools, starting with ScrapingBee, explain what each one is best at, and help you choose the right fit for your project.<\/p>"},{"title":"8 Best Leads Scrapers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/leads-scraper\/","pubDate":"Fri, 10 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/leads-scraper\/","description":"<p>The best lead scrapers are tools that collect publicly available information from online sources and turn it into a usable lead list. Usually, they serve this data in CSV or JSON format that your CRM can ingest. If your lead generation efforts depend on fresh business leads and accurate company details, these tools can save hours of manual prospecting across web pages like business directories, Google Maps listings, marketplaces, and even a company's Facebook page.<\/p>"},{"title":"5 Best Free Proxy Lists for Web Scraping (2026)","link":"https:\/\/www.scrapingbee.com\/blog\/best-free-proxy-list-web-scraping\/","pubDate":"Thu, 09 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-free-proxy-list-web-scraping\/","description":"<p>ScrapingBee is the best option if your goal is reliable web scraping without the headaches of free proxy lists. 
In this article, we benchmark five proxy list websites to see which ones still provide usable free proxies, scoring them by response time, error rate, and success rate on real targets like Google and Amazon.<\/p>\n<p>We'll also show how to evaluate a free proxy before you use it (protocol support, anonymity, uptime, and geolocation) and why &quot;free&quot; often comes with tradeoffs: public proxies can be slow, unstable, shared by thousands, and sometimes operated by parties that log traffic, inject ads, or tamper with responses\u2014fine for quick tests, not for anything sensitive.<\/p>"},{"title":"How to Scrape Google News: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-news\/","pubDate":"Thu, 09 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-news\/","description":"<p>In this blog post, I'll show you how to scrape Google News with Python and our Google News scraper, even if you're not a Python developer. You'll start with the straightforward RSS feed URL method to grab news headlines in structured XML. Then I'll show you how ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a>, our Google News API, and even our <a href=\"https:\/\/www.scrapingbee.com\/features\/google\/\" target=\"_blank\" >Google Search Results API<\/a> can extract public data.<\/p>\n<p>By the end of this guide, you'll have easy access to every news title you need without getting bogged down in complex infrastructure. Let's begin!<\/p>"},{"title":"8 Best SERP APIs in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-serp-apis\/","pubDate":"Wed, 08 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-serp-apis\/","description":"<p>Looking for the best SERP API in 2026? You've come to the right place. In my experience working with various search engine data projects, choosing the right API can make or break your entire operation. 
Some search scraping APIs can be frustrating, as they often yield inconsistent data. Others extract data so smoothly you'll wonder how they make web scraping so easy.<\/p>\n<p>The search engine API market has evolved significantly in 2026, with new players entering the field and established providers upgrading their infrastructure. Whether you're tracking competitor rankings, building local SEO presence, or feeding data into machine learning models, there's never been more choice \u2013 or more confusion about which provider to pick.<\/p>"},{"title":"How to Scrape Amazon Data in 2026 with Python","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-amazon\/","pubDate":"Wed, 08 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-amazon\/","description":"<p>One day you wake up and realize that you need Amazon pricing data which is not available in a structured form, so you need to figure out how to scrape it yourself.<\/p>\n<p>At first, it might sound easy, so you fire a bunch of HTTP requests using your favourite HTTP client\u2026 and you hit a wall: a CAPTCHA, merciless rate limiting, or a page full of JavaScript that your HTTP client can't run.<\/p>"},{"title":"6 Best eBay Web Scrapers In 2026","link":"https:\/\/www.scrapingbee.com\/blog\/ebay-web-scraper\/","pubDate":"Tue, 07 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ebay-web-scraper\/","description":"<p>Finding the best eBay web scraper in 2026 depends entirely on your specific needs. Whether you are a developer looking for a web scraper to power a large-scale competitor analysis or a business owner needing eBay product data for market research, you need a specific toolset. 
After all, navigating eBay's aggressive anti-scraping measures and complex JavaScript rendering can be a challenge.<\/p>\n<p>In this guide, I compare the top tools on the market, evaluating them on their ability to handle IP rotation, bypass anti-bot measures, and deliver clean, structured data.<\/p>"},{"title":"Best Job Scraping Tools in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/job-scraping-tools\/","pubDate":"Tue, 07 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/job-scraping-tools\/","description":"<p>Job scraping software has become the fastest way to turn messy job postings into clean, analyzable signals about the job market. Instead of clicking through endless filters and tabs, you can programmatically collect listings, normalize them, and reuse the dataset for everything from salary benchmarks to hiring insights.<\/p>\n<p>In this guide, you'll learn which tools are best for different sources (aggregators vs. single boards vs. freelance marketplaces), what to extract, and how to keep your pipeline stable when sites change or fight back.<\/p>"},{"title":"How to scrape Kickstarter data","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-kickstarter\/","pubDate":"Tue, 07 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-kickstarter\/","description":"<p>Scraping Kickstarter data can be tricky, especially since there's no official public Kickstarter API available for developers. In this guide, I'll show you how to scrape Kickstarter data in a clean and reliable way using a dedicated scraping API. 
Instead of dealing with fragile HTML parsing or reverse engineering internal endpoints, you'll learn how to request Kickstarter pages, extract structured data, and turn it into something you can actually use.<\/p>"},{"title":"How to Scrape Images from a Website with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-images-from-website\/","pubDate":"Fri, 03 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-images-from-website\/","description":"<p>Learning how to scrape images from website sources is a skill that can unlock various benefits. Whether you're extracting product photos for competitive analysis, building datasets or gathering visual content for machine learning projects, you need to know how to scrape.<\/p>\n<p>In this article, I'll walk you through the process of building a website image scraper. But don't worry, you won't have to code everything from scratch. ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> allows automating content collection with minimal technical knowledge. The best part, it has built-in technical infrastructure, so you don't need to think about proxies, JavaScript rendering or other difficulties. Let me show exactly how it works.<\/p>"},{"title":"How to Scrape Indeed Job Listings with BeautifulSoup & ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-indeed\/","pubDate":"Fri, 03 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-indeed\/","description":"<p>In this guide, we'll dive into how to scrape Indeed job listings without getting blocked. The first time I tried to extract job data from this website, it was tricky. I thought a simple requests.get() would do the trick, but within minutes I was staring at a CAPTCHA wall. 
That's when I realized I needed a proper <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraper with proxy rotation<\/a> and headers baked in to scrape job listing data.<\/p>"},{"title":"Playwright MCP - Scraping Smithery MCP database Tutorial with Cursor","link":"https:\/\/www.scrapingbee.com\/blog\/playwright-mcp-web-scraping-smithery-tutorial-cursor\/","pubDate":"Thu, 02 Apr 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/playwright-mcp-web-scraping-smithery-tutorial-cursor\/","description":"<p>AI is getting the same upgrade humans once did: tool use. With the <a href=\"https:\/\/modelcontextprotocol.io\/introduction\" target=\"_blank\" >Model Context Protocol (MCP)<\/a>, AI can now interact with browsers, APIs, and files - not just generate text.<\/p>\n<p>In this guide, you'll see how to use Playwright MCP in Cursor to scrape data from <a href=\"https:\/\/smithery.ai\/\" target=\"_blank\" >smithery.ai<\/a> and see how much further you can push yourself away from having to write code for a web scraping task. You'll learn how to set it up, run your first scraping task, and understand where this approach works - and where it breaks down.<\/p>"},{"title":"9 Best ChatGPT Interface Scraper Tools in 2026 (Tested & Compared)","link":"https:\/\/www.scrapingbee.com\/blog\/best-chatgpt-scraper-tools\/","pubDate":"Tue, 31 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-chatgpt-scraper-tools\/","description":"<p>In 2026, extracting data from AI interfaces is essential for building agents, monitoring LLM performance, or automating workflows. A ChatGPT scraper tool is specialized software designed to navigate OpenAI's dynamic, React-based environment to extract text, code, or metadata.<\/p>\n<p>Unlike the official OpenAI API used for generating content, interface scrapers retrieve data directly from the web application. This is vital for accessing public GPTs or shared links not exposed via standard endpoints. 
While some tools offer &quot;point-and-click&quot; simplicity, others, like our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a>, provide the raw infrastructure of proxies and headless browsers needed for custom, high-scale builds. In this guide, I evaluate the best ChatGPT scraper tools based on their reliability and ability to bypass sophisticated anti-bot measures.<\/p>"},{"title":"How To Build An Automated AI Web Scraper With n8n In 2026","link":"https:\/\/www.scrapingbee.com\/blog\/n8n-no-code-web-scraping\/","pubDate":"Tue, 31 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/n8n-no-code-web-scraping\/","description":"<p>For most of the last decade, collecting data from the web meant two things: open 10+ tabs, copy values into a spreadsheet, and call it research \u2014 or write Python. I did the latter \u2014 custom scripts, proxy management, CSS selectors that broke every time a site sneezed.<\/p>\n<p>Web scraping has a reputation for being technical. That reputation is about 3 years out of date.<\/p>\n<p>What changed everything was pairing n8n's visual workflow builder with an <a href=\"https:\/\/www.scrapingbee.com\/features\/ai-web-scraping-api\/\" target=\"_blank\" >AI Web Scraping API<\/a>. Instead of targeting specific HTML elements, you describe what you want in plain English. When the site redesigns, the workflow doesn't notice.<\/p>"},{"title":"How to Download Files with cURL (Commands + Examples)","link":"https:\/\/www.scrapingbee.com\/blog\/how-download-files-via-curl-tutorial-with-examples\/","pubDate":"Tue, 31 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-download-files-via-curl-tutorial-with-examples\/","description":"<p>Most developers know you can download files with cURL \u2014 but almost nobody uses more than 3% of what it can actually do. cURL is now running on over 20 billion devices worldwide\u2026yes, you read that right. 
It ships by default on macOS, Windows 10+, and virtually every Linux server on the planet. It's inside your phone, your smart TV, your car, and the firmware of devices you've never thought twice about. It is, quite plausibly, the most installed piece of software ever written.<\/p>"},{"title":"How to Scrape Google Jobs: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-jobs\/","pubDate":"Mon, 30 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-jobs\/","description":"<p>In this guide, we'll show you how to scrape Google Jobs listing results using our <a href=\"https:\/\/www.scrapingbee.com\/features\/google\/\" target=\"_blank\" >Google Search API<\/a>. We'll build a simple Python script, render the jobs panel, and extract structured job data step by step. By the end, you'll be able to collect job titles, companies, locations, and posting dates programmatically.\nMany of our <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/google-jobs-scraper-api\/\" target=\"_blank\" >Google Jobs Scraper<\/a> users struggle with rendering the jobs panel correctly, constructing valid search queries, and parsing the job listings that appear dynamically on the page. We'll cover each of these in simple steps so you can build a working scraper without running into those common issues.<\/p>"},{"title":"Open Source Web Scraper: Best Tools and How to Choose","link":"https:\/\/www.scrapingbee.com\/blog\/open-source-web-scraper\/","pubDate":"Mon, 30 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/open-source-web-scraper\/","description":"<p>Open source web scraping is the fastest way to turn public web pages into something your app, dashboard, or model can use. At a basic level, the best open source web scraper sends HTTP requests, downloads HTML and XML documents, and then runs data extraction logic to pull the fields you care about.<\/p>\n<p>But there's a catch. 
The &quot;best&quot; depends on what you're scraping and how you ship it. Some stacks shine on simple pages where you just need to scrape data with a few CSS selectors. Others are built for dynamic websites where the page only renders after JavaScript execution. And if you're running serious data collection at scale, anti-bot systems and infrastructure start to matter as much as code.<\/p>"},{"title":"8 Best Web Search APIs For AI Agents In 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-ai-search-api\/","pubDate":"Thu, 26 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-ai-search-api\/","description":"<p>The landscape of the internet has shifted significantly. In 2026, we are no longer just building websites; we are building autonomous agents that need to perceive the world in real-time. Whether you are working on advanced Retrieval-Augmented Generation (RAG) systems or LLM-powered market analysts, your model is only as good as the data it can ingest.<\/p>\n<p>That's why choosing the best AI search API is no longer a luxury. It is a critical infrastructure decision that dictates the accuracy, freshness, and scalability of your application. 
I have spent the last few months testing various stacks, and I have realized that the search problem usually falls into three buckets: semantic search APIs for meaning-based retrieval, SERP APIs for traditional engine results, and web scraping APIs for those of us who need to own the entire data acquisition layer.<\/p>"},{"title":"Python Web Scraping Tutorial for 2026 with Examples & Pro Tips","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/","pubDate":"Thu, 26 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/","description":"<p>In this <strong>Python web scraping tutorial<\/strong>, I'll show you how web apps extract and display data from other websites in real time, with structured guidance from beginner basics to more advanced techniques.<\/p>\n<p>In my personal experience, Python is a very powerful tool for automating data extraction from websites and one of the most powerful and versatile languages for web scraping, thanks to its vast array of libraries and frameworks.<\/p>\n<p>By the end of this article, you will learn web scraping with Python and be ready to scrape the web like a pro. So without further ado, let's get started!<\/p>"},{"title":"6 Best Web Scraping Service Providers","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-service-provider\/","pubDate":"Wed, 25 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-service-provider\/","description":"<p>The best web scraping solutions aren't just about downloading web pages. 
Instead, they provide reliable web data output that keeps flowing even when websites change or block requests.<\/p>\n<p>Teams typically use web scraping tools and data extraction tools for market research, competitive intelligence, price monitoring, and brand monitoring, where delays or broken scripts can quickly turn into missed opportunities.<\/p>\n<p>The real difference between web scraping service providers becomes clear once you move past a demo. Can the service handle complex websites and still return scraped data from web pages and in the data formats your team needs? Does it help you extract data and automate data extraction workflows in a way that enables users to produce actionable data, with predictable data delivery you can depend on?<\/p>"},{"title":"How to bypass PerimeterX anti-bot protection system in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-perimeterx-anti-bot-system\/","pubDate":"Tue, 24 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-perimeterx-anti-bot-system\/","description":"<p>In this guide, we explain <strong>how to bypass PerimeterX bot protection in 2026<\/strong>. We'll cover how the system works, what triggers blocks, and the practical techniques you can use to avoid detection.<\/p>\n<p>Before we get started, please note: In 2024, PerimeterX was rebranded to HUMAN Security, but its core detection methods largely remain the same.<\/p>\n<h2 id=\"tldr-perimeterx-bypass-in-a-nutshell\">TL;DR: PerimeterX bypass in a nutshell<\/h2>\n<p>To bypass PerimeterX in 2026, your requests must behave like a real user across every layer at once, including IP quality, TLS and HTTP signals, browser fingerprint, session continuity, and on-page behavior. 
The most reliable approach is to use real browser environments or scraping APIs that handle these signals together, rather than trying to patch individual issues.<\/p>"},{"title":"How to manage price scraping with Python: A guide to price tracking","link":"https:\/\/www.scrapingbee.com\/blog\/price-scraping-python\/","pubDate":"Mon, 23 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/price-scraping-python\/","description":"<p>Price scraping Python is one of the easiest ways to keep track of product prices across websites without doing everything manually. Instead of checking the same pages again and again, a small script can collect pricing data, store the results, and highlight changes right away.<\/p>\n<p>This approach works well for many cases: monitoring competitors, tracking discounts, or making sure a product isn't overpriced. And this isn't just for developers \u2014 anyone curious enough can pick this up and build something useful pretty quickly.<\/p>"},{"title":"How to build a private proxy server with sing-box, VLESS, Hysteria2 and SOCKS on Linux","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-build-private-proxy-server\/","pubDate":"Tue, 17 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-build-private-proxy-server\/","description":"<p>In this guide we will walk through <strong>how to build your own proxy server<\/strong> on Linux using sing-box with a few well-known protocols: SOCKS, VLESS with Reality, and Hysteria2.<\/p>\n<p>The idea is to put together a small but flexible proxy setup that <em>you control yourself<\/em>. Instead of relying on some external service, you run the whole thing on your own VPS. 
That gives you more visibility into what is actually happening and helps you understand how modern proxy stacks work under the hood.<\/p>"},{"title":"Google Ads competitor analysis: Step by step guide","link":"https:\/\/www.scrapingbee.com\/blog\/google-ads-competitor-analysis\/","pubDate":"Thu, 12 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/google-ads-competitor-analysis\/","description":"<p><strong>Google Ads competitor analysis<\/strong> is the process of looking at the advertisers that show up next to you in search results and figuring out how they compete for the same clicks. For PPC marketers and founders, it's one of the best ways to improve campaigns. Instead of guessing what might work, you can look at what other advertisers in the market are already testing and how they frame their offers.<\/p>"},{"title":"Python web scraping JavaScript: How to scrape dynamic pages","link":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-javascript\/","pubDate":"Mon, 09 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-javascript\/","description":"<p>Python web scraping JavaScript pages can feel confusing the first time you try it. You write a simple scraper with <code>requests<\/code> and BeautifulSoup, run it against a website, and instead of useful data you get an almost empty page. Meanwhile the browser clearly shows tables, prices, comments, or products.<\/p>\n<p>The reason is simple: many modern websites build their content with JavaScript after the page loads. 
Your browser runs those scripts automatically, but a basic Python scraper only downloads the initial HTML.<\/p>"},{"title":"How to use curl to show response headers","link":"https:\/\/www.scrapingbee.com\/blog\/curl-show-response-headers\/","pubDate":"Fri, 06 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/curl-show-response-headers\/","description":"<p>If you want to <strong>use curl to show response headers<\/strong>, you are in the right place. Response headers reveal important details about the server's reply, including status codes, content types, caching rules, cookies, and more. Once you know how to inspect them, debugging APIs and websites becomes much easier.<\/p>\n<p>In this guide, you will learn a few different ways to do it, from quick header checks to full request debugging. We're going to walk through the most useful curl flags, explain common headers, and show practical examples using APIs and real web requests.<\/p>"},{"title":"A step-by-step guide to scraping Zoro.com","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-zoro-dot-com\/","pubDate":"Mon, 02 Mar 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-zoro-dot-com\/","description":"<p>If you've ever tried to figure out <strong>how to scrape zoro.com<\/strong> for real product and pricing insights, you already know why people chase structured Zoro data. Zoro carries a massive catalog, tons of specs, and price shifts that matter for research, monitoring, and competitive analysis. The problem isn't finding the information: it's collecting it consistently without fighting the site every other day.<\/p>\n<p>That's what this guide is about: a practical walkthrough of responsible Zoro scraping using ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a>. 
You don't need to be a hardcore developer to follow along, and even if you <em>are<\/em> one, this approach saves you from maintaining your own proxy pool, browser automation, or endless broken selectors.<\/p>"},{"title":"How to hide your IP address safely online","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-hide-ip-address\/","pubDate":"Thu, 19 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-hide-ip-address\/","description":"<p>If you're trying to figure out <strong>how to hide your IP address<\/strong>, chances are you care about privacy, you're hitting annoying geo blocks, or your scraping script just got rate-limited again. Good news: this isn't some dark hacker ritual. It's mostly about understanding what your IP actually does, what tools exist, and what trade-offs come with each one.<\/p>\n<p>In this guide, we'll walk through the practical stuff. What an IP really reveals. How VPNs, proxies, Tor, mobile data, and even public Wi-Fi change your exposure. What works for casual browsing versus scraping workflows. And what absolutely does not work, no matter what Reddit says.<\/p>"},{"title":"How to get someone's IP address safely and legally","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-track-ip-address\/","pubDate":"Sun, 15 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-track-ip-address\/","description":"<p>If you're wondering <strong>how to get someone's IP<\/strong>, you're in the right place. Today we're going to break it down in a simple and clear way so you actually understand what's going on.<\/p>\n<p>But hold your horses, cowboy, this is not a hacking guide and it's definitely not about stalking, harassing, or exposing anyone. We're going to explain what an IP address really is, what it can realistically show you, and why privacy and basic respect matter way more than trying to track people. 
By the end, you'll get the technical side and the clear boundaries that should never be crossed.<\/p>"},{"title":"Fast Search API: Real-time SERP data for AI agents, LLM training, and competitive intelligence","link":"https:\/\/www.scrapingbee.com\/blog\/fast-search-api\/","pubDate":"Wed, 11 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/fast-search-api\/","description":"<p><strong>Fast search<\/strong> is basically the backbone of many modern AI setups. Agents, chatbots, RAG loops, analytics tools, even custom <a href=\"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-all-text-from-a-website-for-llm-ai-training\/\" target=\"_blank\" >LLM training<\/a> \u2014 all of them need fresh web data to stay useful. Your model can be a genius and your prompts can be perfect, but if the info feeding it is stale, the whole thing falls apart.<\/p>\n<p>And this is where the pain usually kicks in. The product is growing, users are happy, everything looks good, but the search layer is the part that keeps slowing things down. Captchas, IP blocks, flaky scrapers, random breakages, the classic &quot;why did our SERP job die again?&quot; Even when it behaves, it's often slow, fragile, and eats way too much engineering time.<\/p>
A <code>div<\/code> moves, an attribute disappears, the markup shuffles a bit \u2014 boom, your selectors die, your pipeline stalls, and you're debugging instead of shipping.<\/p>\n<p>Most classic tools still fall into this trap. They work fine until the layout shifts or an anti-bot wall wakes up, and suddenly you're playing whack-a-mole with CSS paths, headless browser quirks, and Cloudflare mood swings. Scrapling tries to stop that mess. It's an adaptive web scraping library that keeps track of elements even when the structure changes, so your scrapers keep running instead of collapsing. Plus, it brings stealth fetching, strong performance, and an API that feels familiar if you've used BeautifulSoup, Selectolax, Selenium, or any of the usual suspects.<\/p>"},{"title":"Top 5 Flight APIs in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/top-flights-apis-for-travel-apps\/","pubDate":"Tue, 10 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/top-flights-apis-for-travel-apps\/","description":"<p>A flight data API is the fastest way to ship reliable search, pricing, and monitoring features in travel products, without building a brittle crawler from scratch. In 2026, most teams use flight APIs for flight prices, aviation data, and availability across major booking platforms, plus operational signals like schedule changes and cancellations to power alerts and smarter decisions.<\/p>\n<p>In this guide, I\u2019ll review five popular options, explain when each one shines, and show what you can do when official endpoints don\u2019t exist. I\u2019ll also show how to use ScrapingBee to scrape results when you need coverage that APIs can\u2019t provide (or when quotas, contracts, or geography get in the way). 
Let's dive right in!<\/p>"},{"title":"6 Best Node.js Web Scrapers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-node-js-web-scrapers\/","pubDate":"Mon, 09 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-node-js-web-scrapers\/","description":"<p>If you\u2019re doing\u00a0web scraping with JavaScript\u00a0in 2026, you\u2019ll usually pick between two approaches: fast\u00a0HTTP requests\u00a0to grab HTML\/JSON, or real browser automation for\u00a0dynamic web pages\u00a0that only reveal content after scripts run.<\/p>\n<p>This article covers both camps. Whether you\u2019re building a quick\u00a0NodeJS web scraper\u00a0or tackling anti-bot roadblocks, you can choose the right tool and move on.<\/p>\n<h2 id=\"quick-summary-of-top-6-nodejs-web-scrapers\">Quick Summary of Top 6 Node.js Web Scrapers<\/h2>\n<p>These days, Node.js web scraping usually splits into two workflows. For speed and scale across multiple pages, you\u2019ll lean on request-first tools (Axios or Superagent) and focus on parsing html. But if your target element only appears after scripts run (I'm talking about dynamic content and JavaScript-heavy sites), you\u2019ll need automation that drives real web browsers with a browser instance (Puppeteer\/Playwright), typically in headless mode.<\/p>"},{"title":"7 Best Real Estate Scrapers (Comparison)","link":"https:\/\/www.scrapingbee.com\/blog\/real-estate-scraper\/","pubDate":"Sat, 07 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/real-estate-scraper\/","description":"<h2 id=\"7-best-real-estate-scrapers\">7 Best Real Estate Scrapers<\/h2>\n<p>Real estate data scraping has become the fastest way to build repeatable pipelines for pricing, comps, and lead generation, without manually opening dozens of tabs. 
Whether you need structured real estate data for analytics or want to monitor the real estate market daily, the right tool makes the difference between a stable dataset and a constant game of whack-a-mole.<\/p>\n<p>In this guide, I compare 7 options built for collecting large-scale property datasets from major portals and real estate listing sites. You\u2019ll learn which platforms are easiest to set up, which are enterprise-grade, and which are best for teams that need all the data without building a full scraping stack in-house.<\/p>"},{"title":"Top 5 Web Data Mining Tools (Comparison)","link":"https:\/\/www.scrapingbee.com\/blog\/web-data-mining\/","pubDate":"Fri, 06 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-data-mining\/","description":"<p>Web data mining tools help you turn the vast data of the World Wide Web into something usable, from competitor tracking on e-commerce websites to monitoring brand reputation and spotting shifts in demand. The catch is that web mining has to deal with structured and unstructured data, including messy web data like HTML, plus signals such as hyperlink contents and usage that reflect how users navigate and interact with pages.<\/p>"},{"title":"How to handle timeouts in Python Requests","link":"https:\/\/www.scrapingbee.com\/blog\/python-requests-timeout\/","pubDate":"Mon, 02 Feb 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-requests-timeout\/","description":"<p>If you've ever run a scraper or API script and it just sat there doing nothing, there's a good chance you hit a <strong>Python Requests timeout<\/strong> without even noticing. A missing or poorly chosen timeout can make a simple job freeze, waste runtime, or stall an entire scraping workflow. 
Getting your Python requests timeout settings right isn't optional: it's what keeps your scripts fast, predictable, and sane.<\/p>\n<p>In this guide we'll break down how Requests timeout behavior actually works, clean up the common traps in native Requests, and show where better retry patterns and safer defaults save you a lot of pain. And when the real issue isn't your code at all (bot protection, heavy client-side rendering, IP throttling) we'll talk about the point where a Python request timeout stops being a code tweak and starts being a job for a managed layer like a proper web scraping API.<\/p>"},{"title":"Scraping Amazon product data with Python","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-product-data\/","pubDate":"Thu, 22 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-product-data\/","description":"<p><strong>Amazon API scraping<\/strong> is the most reliable way to pull product data without fighting Amazon's HTML, anti-bot rules, or constant layout changes. Instead of wrestling with proxies and brittle selectors, you call an endpoint and get clean Amazon product data ready for analysis: titles, prices, ratings, images, descriptions, reviews, availability, all structured in one JSON.<\/p>\n<p>In this guide you'll see how to scrape Amazon product data with Python using an API-first workflow. We'll still touch on classic HTML concepts so you know what the API replaces, but the focus is on stable, low-maintenance Amazon product data scraping rather than building fragile scrapers.<\/p>"},{"title":"Best Google Trends Scraping APIs for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-google-trends-api\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-google-trends-api\/","description":"<p>Google\u2019s official launch of the Google Trends API in 2025 marks a significant milestone in how developers and businesses access trend data. 
Programmatic access to Google Trends data is invaluable. It empowers marketers, analysts, and developers to automate trend tracking, integrate insights into dashboards, and build data-driven applications that respond to real-time shifts in public interest.<\/p>\n<p>However, challenges remain: Google\u2019s rate limits, JavaScript-heavy interfaces, and anti-bot defenses make reliable data extraction tricky. This is where <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >flexible scraping APIs<\/a>, like ScrapingBee, come into play, offering robust alternatives or complements to the official API.<\/p>"},{"title":"Best Rank Tracking APIs for Developers & Agencies","link":"https:\/\/www.scrapingbee.com\/blog\/best-rank-tracker-apis\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-rank-tracker-apis\/","description":"<p>In the SEO world, things change fast. One thing that has already become obsolete is manual keyword rank checking. As websites expand and keyword lists balloon, traditional methods prove inefficient, inconsistent, and error-prone. This is where rank tracking APIs come into play. These APIs automatically collect Search Engine Results Page (SERP) data across different locations and devices, enabling you to build automated, scalable keyword rank tracking systems.<\/p>\n<p>This guide dives into the best rank tracking APIs available in 2026, comparing their features, pricing, and use cases. We\u2019ll also explore why a scraping engine like ScrapingBee often makes the smartest choice as the underlying SERP data layer powering your custom rank tracking system. 
So let's get into it!<\/p>"},{"title":"Cloudflare Scraper: How to Bypass Cloudflare With ScrapingBee API","link":"https:\/\/www.scrapingbee.com\/blog\/cloudflare-scraper\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/cloudflare-scraper\/","description":"<p>Having an effective Cloudflare scraper opens a whole new world of public data that you can extract with automated connections. Because basic scrapers fail to utilize dynamic fingerprinting methods and proxy rotation, they cannot access many protected platforms due to rate limits, IP blocks, and CAPTCHA challenges.<\/p>\n<p>In this guide, we help up-and-coming businesses and freelancers reliably fetch pages protected by Cloudflare using our beginner-friendly HTML API. Here, we will explain the common JavaScript rendering challenges, device fingerprinting issues, and how our Python SDK resolves them under the hood through the provided API parameters. Follow the steps to build a small, testable proof of concept before scaling.<\/p>"},{"title":"How to Scrape Amazon Reviews With Python (2026)","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-reviews\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-reviews\/","description":"<p>Amazon review scraping is a great way for retailers to learn about customer wants and needs through one of the biggest marketplaces in e-commerce. However, many are discouraged from trying it due to the technical barrier of writing code. 
If you want an easier way to collect review data, our <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/amazon-review-api\/\" target=\"_blank\" >Amazon Review Scraper API<\/a> provides a ready-to-use solution.<\/p>\n<p>If you want to access Amazon product reviews in a user-friendly way, there is no better combo than working with our HTML API through Python and its many additional libraries that help extract data from product pages. In this guide, we will cover the basics of targeting local Amazon reviews, so follow along and you'll soon be able to test the service and enjoy a reliable web scraping experience.<\/p>"},{"title":"How to Scrape Data in Go Using Colly","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-data-in-go-using-colly\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-data-in-go-using-colly\/","description":"<p><a href=\"https:\/\/go.dev\/\" target=\"_blank\" >Go<\/a> is a versatile language with packages and frameworks for doing almost everything. Today you will learn about one such framework called <a href=\"https:\/\/go-colly.org\/\" target=\"_blank\" >Colly<\/a> that has greatly eased the development of web scrapers in Go.<\/p>\n<p>Colly provides a convenient and powerful set of tools for extracting data from websites, automating web interactions, and building web scrapers. 
In this article, you will gain some practical experience with <a href=\"https:\/\/go-colly.org\/\" target=\"_blank\" >Colly<\/a> and learn how to use it to scrape comments from <a href=\"https:\/\/news.ycombinator.com\/news\" target=\"_blank\" >Hacker News<\/a>.<\/p>"},{"title":"How to Scrape Google Hotels: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-hotels\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-hotels\/","description":"<p>Learning how to scrape Google Hotels opens up opportunities to gain a competitive edge for your business. When you scrape this specialized search engine, you gain access to valuable pricing and availability data that can transform your competitive analysis. By using targeted scraping methods, you can collect all the hotel data that fuels market research, tracks pricing changes in real time, and supports strategic decisions.<\/p>\n<p>However, even experienced developers struggle to scrape Google Hotels without getting blocked. IP blocks, CAPTCHAs, and JavaScript rendering issues create significant hurdles when trying to extract hotel data. But don\u2019t worry \u2013 our powerful <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> helps you overcome these challenges.<\/p>"},{"title":"How to Scrape Google Maps: A Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-maps\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-maps\/","description":"<p>Need business leads or location data from Google Maps but frustrated by constant CAPTCHAs, IP blocks, or unreliable scraping scripts? 
Scraping is one of the fastest ways to gather high-value information, but Google\u2019s aggressive anti-bot measures turn large-scale data collection into a real challenge.<\/p>\n<p>Access to business names, addresses, ratings, and phone numbers is too valuable to ignore, so users keep finding ways around Google\u2019s automation blocks. But how exactly do they do it?<\/p>"},{"title":"How to Scrape Google Shopping: A Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-shopping\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-shopping\/","description":"<p>In this guide we\u2019ll dive into Google Shopping scraping techniques that actually work in 2026. If you\u2019ve ever needed to extract product data, prices, or seller information from Google Shopping, you\u2019re in the right place. Google Shopping scraping has become essential for businesses that need competitive pricing data. I\u2019ve spent years refining these methods, and today I\u2019ll show you how to use ScrapingBee to make this process straightforward and reliable.<\/p>"},{"title":"How to Web Scrape Yelp.com","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-yelp\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-yelp\/","description":"<p>With more than 199 million reviews of businesses worldwide, Yelp is one of the biggest websites for crowd-sourced reviews. In this article, you will learn how to scrape data from Yelp's search results and individual restaurant pages. You will be learning about the different Python libraries that can be used for web scraping and the techniques to use them effectively.<\/p>\n<p>If you have never heard about Yelp before, it is an American company that crowd-sources reviews for local businesses. 
They started as a reviews company for restaurants and food businesses but have lately been branching out to cover additional industries as well. Yelp reviews are very important for food businesses as they directly affect their revenues. A restaurant owner told <a href=\"https:\/\/hbswk.hbs.edu\/item\/the-yelp-factor-are-consumer-reviews-good-for-business\" target=\"_blank\" >Harvard Business Review<\/a>:<\/p>"},{"title":"HTML Web Scraping Tutorial","link":"https:\/\/www.scrapingbee.com\/blog\/html-scraping\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/html-scraping\/","description":"<p>Over the last two decades, HTML scraping has transformed how we approach market research. While the internet continues to reimagine how we extract and analyze information, we have many different ways to scrape HTML, all of which are different in their approach and complexity.<\/p>\n<p>In this tutorial, we will show how to combine the basics of traditional HTML data collection with the powerful extraction capabilities of our <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >scraping API<\/a>. This approach will help you create a clear and consistent method for automated data extractions. Let's dive in!<\/p>"},{"title":"Practical XPath for Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/practical-xpath-for-web-scraping\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/practical-xpath-for-web-scraping\/","description":"<p>XPath is a technology that uses path expressions to select nodes or node-sets in an XML document (or in our case an HTML document). Even if XPath is not a programming language in itself, it allows you to write an expression which can directly point to a specific HTML element, or even tag attribute, without the need to manually iterate over any element lists.<\/p>\n<p>It looks like the perfect tool for web scraping right? At ScrapingBee we love XPath! 
\u2764\ufe0f<\/p>"},{"title":"Scrape Amazon products' price with no code","link":"https:\/\/www.scrapingbee.com\/blog\/nocode-amazon\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/nocode-amazon\/","description":"<p>It's safe to assume that many of us have bookmarked Amazon product pages from several retailers for a similar product to easily compare pricing.<\/p>\n<p>This article will guide you through scraping product information from <a href=\"http:\/\/amazon.com\/\" target=\"_blank\" >Amazon.com<\/a> so you never miss a great deal on a product. You will monitor similar product pages and compare the prices.<\/p>\n<p>This tutorial is designed so that you can follow along smoothly if you already know the basic concepts. Here's what we'll do:<\/p>"},{"title":"Scraping single page applications with Python.","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-single-page-applications\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-single-page-applications\/","description":"<p>Dealing with a website that uses lots of JavaScript to render its content can be tricky. 
These days, more and more sites are using frameworks like Angular, React, or Vue.js for their frontend.<\/p>\n<p>These frontend frameworks are complicated to deal with because they are often using the newest features of the HTML5 API.<\/p>\n<p>The problem you will encounter is that your headless browser will download the HTML code and the Javascript code, but will not be able to execute the full Javascript code, and the webpage will not be fully rendered.<\/p>"},{"title":"Using CSS Selectors for Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/using-css-selectors-for-web-scraping\/","pubDate":"Wed, 21 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/using-css-selectors-for-web-scraping\/","description":"<p>In today's article, we are going to take a closer look at CSS selectors, where they originated from, and how they can help you in extracting data when scraping the web.<\/p>\n<blockquote>\n<p>\u2139\ufe0f If you already read the article &quot;<a href=\"https:\/\/www.scrapingbee.com\/blog\/practical-xpath-for-web-scraping\/\" >Practical XPath for Web Scraping<\/a>&quot;, you'll probably recognize more than just a few similarities, and that is because XPath expressions and CSS selectors actually are quite similar in the way they are being used in data extraction.<\/p>"},{"title":"API for dummies: Start building your first API today","link":"https:\/\/www.scrapingbee.com\/blog\/api-for-dummies-learning-api\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/api-for-dummies-learning-api\/","description":"<p>If you've been hunting for an easy <strong>API for dummies guide<\/strong> that finally explains what all the fuss is about, you're in the right place. Ever wondered how your favorite apps and websites manage to talk to each other so smoothly? That's where APIs come in.<\/p>\n<p>API stands for Application Programming Interface, but don't let that technical name scare you off. 
In plain English, an API is like a bridge that lets different software systems exchange data or use each other's features without needing to know what's happening behind the scenes.<\/p>"},{"title":"Automated Web Scraping - Benefits and Tips","link":"https:\/\/www.scrapingbee.com\/blog\/automated-web-scraping\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/automated-web-scraping\/","description":"<p>Looking for ways to automate web scraping tools to quickly collect public data online? In the data-driven world, manual aggregation methods cannot compete with the speed of automated growth. Manual scraping is way too slow, error-prone, and not scalable.<\/p>\n<p>Automated web scraping solutions remove the need for monotonous and inefficient tasks, allowing our bots and APIs to do what they do best \u2013 execute a recurring set of instructions at far greater speeds. In this guide, we will discuss the necessity of automated connections for data extraction and include some actionable tips that will get you started without prior programming knowledge. Let's get to work!<\/p>"},{"title":"Best Bing Rank Tracking Tools and Bing Search API Alternatives","link":"https:\/\/www.scrapingbee.com\/blog\/best-bing-rank-tracker\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-bing-rank-tracker\/","description":"<p>Tracking your website\u2019s position on Bing is essential for a comprehensive SEO strategy. While Google dominates search, Bing powers results across Microsoft Edge, Windows devices, Yahoo, and various privacy-focused engines, which is why neglecting Bing means overlooking a significant source of organic traffic and potential conversions.<\/p>\n<p>In this guide, I'll explore the top Bing rank tracking tools for 2026, underscore the continued importance of Bing tracking, and examine alternatives to the recently retired Bing Search APIs. 
Whether you prefer turnkey dashboard solutions or aim to build a custom Bing SERP tracker using APIs like ScrapingBee, this comprehensive resource provides the insights you need to succeed in the evolving search landscape.<\/p>"},{"title":"Can you get SOCKS5 for free?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/can-you-get-socks5-for-free\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/can-you-get-socks5-for-free\/","description":"<h2 id=\"what-is-the-socks5-protocol\">What Is The SOCKS5 Protocol?<\/h2>\n<p>SOCKS is an internet protocol used for proxies, i.e. to enable a client and a server machine to communicate over the internet without knowing each other, by means of an intermediary proxy server. SOCKS5 is the most recent version of this protocol, designed to be an upgrade to its predecessors SOCKS4 and SOCKS4a. SOCKS5 offers authentication support and includes support for IPv6 and UDP.<\/p>\n<h2 id=\"common-use-cases-for-socks5-proxies\">Common Use Cases For SOCKS5 Proxies<\/h2>\n<p>In the world of web scraping, the most common use case of a proxy is to mask the IP address of the client making the HTTP request to the website being scraped. This could be useful for privacy reasons, to bypass geographical restrictions, or to make requests from multiple IP addresses using multiple proxies to bypass IP-based rate limiting.<\/p>"},{"title":"Comparing Forward Proxies and Reverse Proxies","link":"https:\/\/www.scrapingbee.com\/blog\/comparing-forward-proxies-and-reverse-proxies\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/comparing-forward-proxies-and-reverse-proxies\/","description":"<p>In an age dominated by the internet, where data flows ceaselessly between devices and servers, proxies have grown to become an integral part of networks. 
Proxies play a vital role in the seamless exchange of information on the web.<\/p>\n<p>Proxies act as digital intermediaries, facilitating secure and efficient communication between your device and the destination server. There are two types: forward proxies and reverse proxies, each serving a distinct function.<\/p>"},{"title":"Easy web scraping with Scrapy","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/","description":"<p>In the previous post about <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/\" target=\"_blank\" >Web Scraping with Python<\/a> we talked a bit about Scrapy. In this post we are going to dig a little bit deeper into it.<\/p>\n<p>Scrapy is a wonderful open source Python web scraping framework. It handles the most common use cases when doing web scraping at scale:<\/p>\n<ul>\n<li>Multithreading<\/li>\n<li>Crawling (going from link to link)<\/li>\n<li>Extracting the data<\/li>\n<li>Validating<\/li>\n<li>Saving to different formats \/ databases<\/li>\n<li>Many more<\/li>\n<\/ul>\n<p>The main difference between Scrapy and other commonly used libraries, such as Requests \/ BeautifulSoup, is that it is opinionated, meaning it comes with a set of rules and conventions, which allow you to solve the usual web scraping problems in an elegant way.<\/p>"},{"title":"Getting Started with chromedp","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-chromedp\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-chromedp\/","description":"<p><a href=\"https:\/\/pkg.go.dev\/github.com\/chromedp\/chromedp\" target=\"_blank\" >chromedp<\/a> is a Go library for interacting with a headless Chrome or Chromium browser.<\/p>\n<p>The <code>chromedp<\/code> package provides an API that makes controlling Chrome and Chromium browsers 
simple and expressive, allowing you to automate interactions with websites such as navigating to pages, filling out forms, clicking elements, and extracting data. It's useful for simplifying <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> as well as testing, performance monitoring, and developing browser extensions.<\/p>\n<p>This article provides an overview of chromedp's advanced features and shows you how to use it for web scraping.<\/p>"},{"title":"Getting Started with Jaunt Java","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-jaunt-java\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-jaunt-java\/","description":"<p>While Python and Node.js are popular platforms for writing scraping scripts, <a href=\"https:\/\/jaunt-api.com\/index.htm\" target=\"_blank\" >Jaunt<\/a> provides similar capabilities for Java.<\/p>\n<p>Jaunt is a Java library that provides web scraping, web automation, and JSON querying abilities. It relies on a light, headless browser to load websites and query their DOM. The only downside is that it doesn't support JavaScript\u2014but for that, you can use <a href=\"https:\/\/jauntium.com\/index.htm\" target=\"_blank\" >Jauntium<\/a>, a Java browser automation framework developed and maintained by the same person behind Jaunt, Tom Cervenka.<\/p>"},{"title":"How to find all URLs on a domain's website (multiple methods)","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-find-all-urls-on-a-domains-website-multiple-methods\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-find-all-urls-on-a-domains-website-multiple-methods\/","description":"<p>Finding all the URLs on a website is one of the most vital tasks in any web-scraping workflow. 
In this tutorial, we'll walk through multiple ways to find all URLs on a domain: from using Google search tricks, to exploring pro-level SEO tools like ScreamingFrog, and even crafting a Python script to pull URLs at scale from a sitemap. Don't worry, we've got you covered on building a clean list of URLs to scrape (and as a bonus, we'll even show you how to grab some data along the way).<\/p>"},{"title":"How to find elements by CSS selector in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-find-elements-css-selector-selenium\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-find-elements-css-selector-selenium\/","description":"<p>Selenium is a popular browser automation framework that is also used for scraping data using headless browsers. While using Selenium, one of the most popular things to do is use CSS selectors to select particular HTML elements to interact with or extract data from.<\/p>\n<h2 id=\"using-browser-developer-tools-to-find-css-selectors\">Using Browser Developer Tools To Find CSS Selectors<\/h2>\n<p>To scrape content or fill in forms using Selenium, we first need to know the CSS selector of the HTML element we'll be working with. To find the CSS selector, we need to go through the HTML structure of the web page, which could be confusing and cumbersome. Most modern browsers provide developer tools to make this easier.<\/p>"},{"title":"How to parse a JSON file in JavaScript?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-to-parse-a-json-file-in-javascript\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-to-parse-a-json-file-in-javascript\/","description":"<h2 id=\"what-is-json-and-why-parse-it\">What Is JSON And Why Parse It?<\/h2>\n<p>JSON stands for &quot;JavaScript Object Notation&quot;. 
It's one of the most popular formats used for storing and sharing data containing key-value pairs, which may also be nested or in a list. For many applications that work with data, including web scraping, it is important to be able to write and parse data in the JSON format.<\/p>\n<p>Here is a sample JSON string:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-json\" data-lang=\"json\"><span style=\"display:flex;\"><span>{\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;name&#34;<\/span>: <span style=\"color:#e6db74\">&#34;John Doe&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;age&#34;<\/span>: <span style=\"color:#ae81ff\">32<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;address&#34;<\/span>: {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;street&#34;<\/span>: <span style=\"color:#e6db74\">&#34;123 Main St&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;city&#34;<\/span>: <span style=\"color:#e6db74\">&#34;Anytown&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;state&#34;<\/span>: <span style=\"color:#e6db74\">&#34;CA&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> }\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"how-to-read-a-json-file\">How To Read A JSON File?<\/h2>\n<p>In JavaScript, you can parse a JSON string using the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/JavaScript\/Reference\/Global_Objects\/JSON\/parse\" target=\"_blank\" ><code>JSON.parse()<\/code><\/a> method. A JSON file is essentially a text file containing a JSON string. 
Therefore, to read a JSON file, you first need to read the file as a string and then parse it into an object that contains key-value pairs.<\/p>"},{"title":"How to Scrape Costco: Complete Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-costco\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-costco\/","description":"<p>Learning how to scrape Costco can be incredibly valuable for gathering product information, monitoring prices, or conducting market research. In my experience, while there are several coding approaches to scraping Costco's website, our robust HTML API offers the most straightforward solution, handling JavaScript rendering, proxy rotation, and other key elements that tend to overcomplicate data extraction.<\/p>\n<p>In this guide, we will cover how you can extract data from retailers like Costco without getting blocked, dealing with JavaScript rendering, or managing proxies. Let's take a closer look at how you can use our powerful ScrapingBee HTML API with minimal coding knowledge and extract Costco's product data.<\/p>"},{"title":"How to Scrape eBay: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-ebay\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-ebay\/","description":"<p>Learning how to scrape data from eBay efficiently requires the right tools and techniques. eBay\u2019s complex structure and anti-scraping measures make it challenging to extract data reliably.<\/p>\n<p>In this guide, I\u2019ll walk you through the entire process of setting up and running an eBay scraper that actually works. 
Whether you\u2019re tracking prices, researching products, or gathering seller data, you\u2019ll discover how to extract the information you need without getting blocked.<\/p>"},{"title":"How to Scrape Expedia: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-expedia\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-expedia\/","description":"<p>Expedia scraping is a great strategy for tracking hotel prices and travel trends and comparing deals with real-time data. It\u2019s especially useful for building tools that rely on dynamic hotel details like location, rating, and pricing strategies, but accessing these platforms is a lot harder with automated tools.<\/p>\n<p>The main challenge is that Expedia loads its content using JavaScript, so simple scrapers can\u2019t see the hotel listings without rendering the page. On top of that, the site often changes its layout and uses anti-bot measures like IP blocking.<\/p>"},{"title":"How to Scrape Google Play: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-play\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-play\/","description":"<p>Want to extract app names, ratings, reviews, and install counts from Google Play? Scraping is one of the fastest ways to collect valuable mobile app data from Google Play, but dynamic content and anti-bot systems make traditional scrapers unreliable.<\/p>\n<p>In this guide, we will teach you to scrape Google Play using Python and our beloved ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a>. Here you will find the basic necessities for your collection goals, helping you export data in clean, structured formats. 
Let\u2019s make scraping simple and scalable!<\/p>"},{"title":"How to Scrape Google Scholar with Python: A ScrapingBee Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-scholar\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-scholar\/","description":"<p>Did you know that learning how to scrape Google Scholar can supercharge your research papers? This search engine is a gold mine of citations and scholarly articles that you could be analyzing at scale with a web scraper. With a reliable scraping service like ScrapingBee and some basic Python, you can automate repetitive research tasks more efficiently.<\/p>\n<p>Why ScrapingBee, you may ask? Well, let\u2019s get one thing straight \u2013 Google Scholar has tight anti-scraping measures. This means you need a reliable Google Scholar scraper that can handle IP bans, annoying CAPTCHAs, and <a href=\"https:\/\/www.scrapingbee.com\/features\/javascript-scenario\/\" target=\"_blank\" >JavaScript rendering<\/a>. Our web scraper is built with all these features, allowing you to scrape Google Scholar data without coding everything from scratch.<\/p>"},{"title":"How to scrape Google search results data in Python easily","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-search-results-data-in-python-easily\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-search-results-data-in-python-easily\/","description":"<p><strong>Google search engine results pages (SERPs)<\/strong> can provide a lot of important data for you and your business, but you most likely wouldn't want to scrape it manually. After all, there might be multiple queries you're interested in, and the corresponding results should be monitored on a regular basis. 
This is where automated scraping comes into play: you write a script that processes the results for you or use a dedicated tool to do all the heavy lifting.<\/p>"},{"title":"How to Scrape Home Depot: Complete Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-homedepot\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-homedepot\/","description":"<p>Scraping Home Depot\u2019s product data requires handling JavaScript rendering and potential anti-bot measures. With ScrapingBee\u2019s API, you can extract product information from Home Depot without managing headless browsers, proxies, or CAPTCHAs.<\/p>\n<p>Simply set up a request with JavaScript rendering enabled, target the correct URLs, and extract structured data using your preferred HTML parser. Our API handles all the complex parts of web scraping, letting you focus on using the data. In this guide, we will explain how you can do the same, working with Python and our versatile ScrapingBee API!<\/p>"},{"title":"How to use a proxy with Python Requests?","link":"https:\/\/www.scrapingbee.com\/blog\/python-requests-proxy\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-requests-proxy\/","description":"<p>If you've ever messed around with scraping or automating requests in Python, you've probably run into the usual roadblocks. One minute everything's smooth, the next you're getting captchas, random 403 errors, or just radio silence from the site. That's usually the internet's polite way of saying: <em>&quot;Hey buddy, slow down.&quot;<\/em> This is where proxies save the day. 
By setting up a Python Requests proxy, you can mask your real IP, spread your traffic over different addresses, and even slip past geo-restrictions that would normally block you.<\/p>"},{"title":"How to use CSS Selectors in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-use-css-selectors-in-python\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-use-css-selectors-in-python\/","description":"<h2 id=\"what-are-css-selectors\">What Are CSS Selectors?<\/h2>\n<p>CSS selectors are patterns that are used to reference HTML elements, primarily for the purpose of styling them using CSS. Over the years, they've evolved into one of the key ways to select and manipulate HTML elements using in-browser JavaScript and other programming languages such as Python.<\/p>\n<h2 id=\"why-use-css-selectors-in-python\">Why Use CSS Selectors in Python?<\/h2>\n<p>In Python, CSS selectors are primarily used to select one or more HTML elements while working with web pages, usually for scraping and browser automation.<\/p>"},{"title":"How to wait for page to load in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-wait-for-page-to-load-in-playwright\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-wait-for-page-to-load-in-playwright\/","description":"<p>Websites that render using JavaScript work in many different ways. Hence, waiting for the page to load might mean different things based on what we're looking to do. Sometimes the elements we need will appear on the first render, sometimes an app shell will load first and then the content. Sometimes we may even have to interact (click or scroll). 
Let's look at the different methods to wait in Playwright, so you can use the one that best works for your task.<\/p>"},{"title":"How to web scrape Zillow\u2019s real estate data at scale","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-zillows-real-estate-data-at-scale-with-this-easy-zillow-scraper-in-python\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-zillows-real-estate-data-at-scale-with-this-easy-zillow-scraper-in-python\/","description":"<p>If you're looking to buy or sell a house or other real estate property, Zillow is an excellent resource with <a href=\"https:\/\/www.similarweb.com\/website\/zillow.com\/#overview\" target=\"_blank\" >millions<\/a> of property listings and detailed market data.<\/p>\n<p>In addition to traditional real estate purposes, the data available on Zillow comes in handy for market analysis, tracking housing trends, or building a real estate application.<\/p>\n<p>This tutorial will guide you to effectively scrape Zillow's real estate data at scale using Python, BeautifulSoup, and the ScrapingBee API.<\/p>"},{"title":"Puppeteer Web Scraping Tutorial in Nodejs","link":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-web-scraping-tutorial-in-nodejs\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-web-scraping-tutorial-in-nodejs\/","description":"<p>In this tutorial, we are going to take a look at <a href=\"https:\/\/pptr.dev\" target=\"_blank\" >Puppeteer<\/a>, a JavaScript library developed by Google. Puppeteer provides a native automation interface for Chrome and Firefox, allowing you to launch a headless browser instance and take full control of websites, including taking screenshots, submitting forms, extracting data, and more. Let's dive right in with a real-world example. 
\ud83e\udd3f<\/p>\n<blockquote>\n<p>\ud83d\udca1 If you are curious about the basics of web scraping in JavaScript, you may also be interested in <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" >Web Scraping with JavaScript and Node.js<\/a>.<\/p>"},{"title":"Web Scraping With LangChain & ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/langchain-web-scraper\/","pubDate":"Tue, 20 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/langchain-web-scraper\/","description":"<p>Having a LangChain scraper enables developers to build powerful data pipelines that start with real-time data extraction and end with structured outputs for tasks like embeddings and retrieval-augmented generation (RAG). To deliver these benefits, our HTML API simplifies the road towards desired public content via JavaScript rendering, anti-bot bypassing, and content cleanup\u2014so LangChain can process the result into usable text.<\/p>\n<p>In this guide, we will cover the steps and integration details that will help us combine LangChain with our Python SDK in a single Python project. Let's get straight to it!<\/p>"},{"title":"Python wget: Automate file downloads with 3 simple commands","link":"https:\/\/www.scrapingbee.com\/blog\/python-wget\/","pubDate":"Mon, 19 Jan 2026 09:10:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-wget\/","description":"<p>If you've ever needed to grab files in bulk, you know the pain of clicking download links one by one. That's where combining <strong>Python and wget<\/strong> shines. 
Instead of re-implementing HTTP requests yourself, you can call the battle-tested <code>wget<\/code> tool straight from a Python script and let it handle the heavy lifting.<\/p>\n<p>In this guide, we'll set up <code>wget<\/code>, explain how to run it from Python using <code>subprocess<\/code>, and walk through three copy-paste commands that cover almost everything you'll ever need: downloading a file, saving it with a custom name or folder, and resuming interrupted transfers. Let's get started!<\/p>"},{"title":"Best Google Scholar API Alternatives - Get Ready for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-google-scholar-api\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-google-scholar-api\/","description":"<p>In the ever-evolving landscape of academic research and data analysis, Google Scholar is a cornerstone for research data. It's the go-to place for scholarly articles, tracking citations, and identifying research trends. Yet, there's a significant challenge: the absence of an official Google Scholar API. This void leaves developers and researchers scrambling for Google Scholar API alternatives.<\/p>\n<p>Whether you\u2019re a developer seeking flexible scraping solutions or a researcher in pursuit of structured academic metadata, this article is your compass. I'll introduce you to the best API for Google Scholar research data and list the alternatives. Let's get started!<\/p>"},{"title":"Block resources with Puppeteer","link":"https:\/\/www.scrapingbee.com\/blog\/block-requests-puppeteer\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/block-requests-puppeteer\/","description":"<p>In this article, we will take a look at how to block specific resources (HTTP requests, CSS, video, images) from loading in Puppeteer. Puppeteer is one of the most widely used tools for web scraping and automation. There are a couple of ways to block resources in Puppeteer. 
In this article, we will go over all the various methods we can use to block\/intercept specific network requests in our automation scripts.<\/p>"},{"title":"How to Build a News Crawler with the ScrapingBee API","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-build-a-news-crawler-with-the-scrapingbee-api\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-build-a-news-crawler-with-the-scrapingbee-api\/","description":"<p>Imagine you're a developer who needs to keep track of the latest news from multiple sources for a project you're working on. Instead of manually visiting each news website and checking for updates, you want to automate this process to save time and effort. You need a <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/google-news-scraper-api\/\" target=\"_blank\" >news crawler<\/a>.<\/p>\n<p>In this article, you'll see how easy it can be to build a news crawler using Python Flask and the <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee API<\/a>. You'll learn how to set up ScrapingBee, implement crawling logic, and display the extracted news on a web page.<\/p>"},{"title":"How to Scrape Google Flights with Python and ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-flights\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-flights\/","description":"<p>As the key source of information on the internet, Google contains a lot of valuable public data. For many travelers, it is the main source for tracking flight prices as well as departure and arrival locations for trips.<\/p>\n<p>As you already know, automation plays a vital role here, as everyone wants an optimal setup to compare multiple airlines and their pricing strategies to save money. 
Even better, collecting data with your own Google Flights scraper saves a lot of time and provides consistent access to new deals.<\/p>"},{"title":"How To Set Up a Rotating Proxy in Puppeteer","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-rotating-proxy-in-puppeteer\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-rotating-proxy-in-puppeteer\/","description":"<p><a href=\"https:\/\/www.npmjs.com\/package\/puppeteer\" target=\"_blank\" >Puppeteer<\/a> is a popular headless browser automation library used with Node.js for web scraping. However, even with Puppeteer, your IP can get blocked if your script is identified as a bot. That's where the Puppeteer proxy comes in.<\/p>\n<p>A proxy acts as a middleman between the client and server. When a client makes a request through a proxy, the proxy forwards it to the server. This makes detecting and blocking your IP harder for the target site.<\/p>"},{"title":"Playwright for Python Web Scraping Tutorial with Examples","link":"https:\/\/www.scrapingbee.com\/blog\/playwright-for-python-web-scraping\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/playwright-for-python-web-scraping\/","description":"<p>Web scraping is a powerful tool for gathering data from websites, and Playwright is one of the best tools out there to get the job done. In this tutorial, I'll walk you through <strong>how to scrape with Playwright for Python<\/strong>. We'll start with the basics and gradually move to more advanced techniques, ensuring you have a solid grasp of the entire process. 
Whether you're new to web scraping or looking to refine your skills, this guide will help you use Playwright for Python effectively to extract data from the web.<\/p>"},{"title":"Playwright vs Selenium: Which is the best Headless Browser","link":"https:\/\/www.scrapingbee.com\/blog\/playwright-vs-selenium\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/playwright-vs-selenium\/","description":"<p>For years Selenium has reigned as the undisputed champion of web automation, dominating the ring with its vast capabilities and developer loyalty. But now a formidable rival has risen, Playwright. This battle of the titans is set to determine which tool truly deserves the crown of web automation champion. Each contender brings its own unique strengths and strategies to the arena, but which will emerge victorious in the fight for web automation supremacy?<\/p>"},{"title":"Price Scraper With ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/price-scraper\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/price-scraper\/","description":"<p>Building a multi-functional price scraper is one of the best ways to extract data from competitor platforms and study their pricing strategies. Because most e-commerce businesses use automated connections for competitive analysis, finding a reliable way to access website data and study market trends is one of the best ways to outshine competitors.<\/p>\n<p>However, researching and analyzing data takes a lot of time, so having the best tools for scraping prices provides a big advantage. In this guide, we will show you how to access web data and start scraping websites with our intuitive HTML API. 
Stick around to build your first price scraping tool in just a few minutes!<\/p>"},{"title":"ScrapingBee is joining Oxylabs\u2019 group","link":"https:\/\/www.scrapingbee.com\/blog\/scrapingbee-acquisition\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapingbee-acquisition\/","description":"<p>Today, we\u2019re incredibly proud and excited to announce that ScrapingBee has officially become part of Oxylabs\u2019 group.<\/p>\n<p>Oxylabs\u2019 company group already offers a variety of industry-leading proxy and data gathering solutions. Through this acquisition, they aim to strengthen their position as a market leader while helping elevate the <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> industry as a whole.<\/p>\n<p>At ScrapingBee, our mission has always been to offer a transparent, easy-to-use, and high-performance web scraping solution.<\/p>"},{"title":"Using wget with a proxy","link":"https:\/\/www.scrapingbee.com\/blog\/wget-proxy\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/wget-proxy\/","description":"<p>Using <strong>wget proxy<\/strong> setups is pretty simple once you know the basics. 
In this guide, we'll walk through how to make wget use a proxy server, so you can grab files or send requests even when you're behind a corporate firewall or just want extra privacy.<\/p>\n<p>Nothing fancy \u2014 just clear steps and examples you can actually use.<\/p>\n<h2 id=\"quick-answer\">Quick answer<\/h2>\n<p>You can make wget use a proxy server using command flags (<code>-e use_proxy=yes<\/code>), config files like <code>.wgetrc<\/code>, or environment variables. 
For production-grade rotation and geolocation, the easiest path is <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee's API<\/a> or <a href=\"https:\/\/www.scrapingbee.com\/documentation\/proxy-mode\/\" target=\"_blank\" >proxy mode<\/a> \u2014 one clean command, zero maintenance.<\/p>"},{"title":"Web Scraping with Objective C","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-objective-c\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-objective-c\/","description":"<p>In this article, you\u2019ll learn about the main tools and techniques for <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> using Objective C for both static and dynamic web pages.<\/p>\n<p>This article assumes that you\u2019re already familiar with Objective C and <a href=\"https:\/\/developer.apple.com\/documentation\/xcode\" target=\"_blank\" >Xcode<\/a>, which will be used to create, compile, and run the projects on macOS\u2014though you can easily change things to run on iOS if preferred.<\/p>\n<h2 id=\"basic-scraping\">Basic Scraping<\/h2>\n<p>First, let\u2019s take a look at using Objective C to scrape a static web page from <a href=\"https:\/\/en.wikipedia.org\/wiki\/Physics\" target=\"_blank\" >Wikipedia<\/a>:<\/p>"},{"title":"Web Scraping with Scala - Easily Scrape and Parse HTML","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-scala\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-scala\/","description":"<p>This tutorial explains how to use three technologies for <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> with Scala. The article first explains how to scrape a static HTML page with Scala using <a href=\"https:\/\/www.scrapingbee.com\/blog\/java-parse-html-jsoup\/\" target=\"_blank\" >jsoup<\/a> and <a href=\"https:\/\/index.scala-lang.org\/ruippeixotog\/scala-scraper\" target=\"_blank\" >Scala Scraper<\/a>. 
Then, it explains how to scrape a dynamic HTML website with Scala using <a href=\"https:\/\/www.selenium.dev\/\" target=\"_blank\" >Selenium<\/a>.<\/p>\n<blockquote>\n<p>\ud83d\udca1 Interested in web scraping with Java? 
Check out our guide to the <a href=\"https:\/\/www.scrapingbee.com\/blog\/best-java-web-scraping-libraries\/\" >best Java web scraping libraries<\/a>.<\/p>\n<\/blockquote>"},{"title":"What is Screen Scraping and How To Do It With Examples","link":"https:\/\/www.scrapingbee.com\/blog\/screen-scraping-with-scrapingbee\/","pubDate":"Mon, 19 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/screen-scraping-with-scrapingbee\/","description":"<h2 id=\"what-is-screen-scraping\">What is Screen Scraping?<\/h2>\n<p>The easiest way to get data from another program is to use a dedicated API (Application Programming Interface), but not all programs provide one. In fact, most programs don't.<\/p>\n<p>If there's no API provided, you can still get data from a program by using screen scraping, which is the process of capturing data from the screen output of a program.<\/p>\n<p>This can take all kinds of forms, ranging from parsing terminal output to reading text off screenshots, with the most common being classic web scraping.<\/p>"},{"title":"Generating Random IPs to Use for Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/generating-random-ips-to-use-for-scraping\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/generating-random-ips-to-use-for-scraping\/","description":"<p>Web scraping uses automated software tools or scripts to extract and parse data from websites into structured formats for storage or processing. Many data-driven initiatives\u2014including business intelligence, sentiment analysis, and predictive analytics\u2014rely on web scraping as a method for gathering information.<\/p>\n<p>However, some websites have implemented anti-scraping measures as a precaution against the misuse of content and breaches of privacy. One such measure is IP blocking, where IPs with known bot patterns or activities are automatically blocked. 
Another tactic is rate limiting, which restricts the volume of requests that a single IP address can make within a specific time frame.<\/p>"},{"title":"Getting Started with HtmlUnit","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-htmlunit\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-htmlunit\/","description":"<p><a href=\"https:\/\/sourceforge.net\/projects\/htmlunit\/\" target=\"_blank\" >HtmlUnit<\/a> is a GUI-less browser for Java that can execute JavaScript and perform AJAX calls.<\/p>\n<p>Although primarily used to automate testing, HtmlUnit is a great choice for scraping static and dynamic pages alike because of its ability to manipulate web pages on a high level, such as clicking on buttons, submitting forms, providing input, and so forth. HtmlUnit supports the W3C DOM standard, <a href=\"https:\/\/www.scrapingbee.com\/blog\/using-css-selectors-for-web-scraping\/\" >CSS selectors<\/a>, and <a href=\"https:\/\/www.scrapingbee.com\/blog\/practical-xpath-for-web-scraping\/\" >XPath selectors<\/a>, and it can simulate the Firefox, Chrome, and Internet Explorer browsers, which makes <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> easier.<\/p>"},{"title":"Guide to Choosing a Proxy API for Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/guide-to-choosing-a-proxy-for-scraping\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/guide-to-choosing-a-proxy-for-scraping\/","description":"<p>You're in the thick of it, scraping the web to extract data pivotal to your core product. During this process, you quickly realize that websites deploy defense mechanisms against potential scrapers. For instance, if your server IP address keeps hitting a site for data, it might get flagged and subsequently banned.<\/p>\n<p>This is where a proxy API can help. A proxy API is like your Swiss Army knife for web scraping. 
It's designed to make your web scraping operations seamless, efficient, and, most importantly, undetected.<\/p>"},{"title":"Guide to Puppeteer Scraping for Efficient Data Extraction","link":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-scraping\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-scraping\/","description":"<p>Puppeteer scraping lets you automate real browsers to open tabs, visit desired web pages, and extract public data. But how do you use this Node.js library without prior experience?<\/p>\n<p>In this guide, we will show you how to set up Puppeteer, navigate pages, extract data with $eval\/$$eval\/XPath, paginate, and export results. You\u2019ll also see where Puppeteer hits limits at scale and how our HTML API unlocks consistent access to protected websites with the ability to rotate IP addresses and bypass anti-bot systems. Stay tuned, and you will have a working Puppeteer scraper in just a few minutes!<\/p>"},{"title":"How to master Selenium web scraping in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/selenium-python\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/selenium-python\/","description":"<p><strong>Selenium web scraping<\/strong> is still one of the most dependable ways to extract data from dynamic, JavaScript-heavy websites. In 2026, it's smoother and faster than ever.<\/p>\n<p>Selenium is a browser automation toolkit with bindings for all major programming languages, including Python, which we'll focus on here. It talks to browsers through the WebDriver protocol, giving you control over Chrome, Firefox, Safari, or even remote setups. 
Originally built for testing, Selenium has grown into a full automation tool that can click, type, scroll, and extract data just like a real user.<\/p>"},{"title":"How to Scrape Google Images: A Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-images\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-images\/","description":"<p>Welcome to a guide on how to scrape Google images. We\u2019ll dive into the exact process of extracting image URLs, titles, and source links from Google Images search results. By the end of this guide, you'll be able to get all the image data from multiple search pages.<\/p>\n<p>Here's the catch, though: to scrape data, you'll need a reliable tool, such as ScrapingBee. Our <a href=\"https:\/\/www.scrapingbee.com\/features\/google\/\" target=\"_blank\" >Google Search Results API<\/a> gives you the infrastructure needed to handle Google\u2019s protections. Since Google Images implements strong anti-scraping measures, you won't be able to get images without a strong infrastructure.<\/p>"},{"title":"Scrapegraph AI Tutorial; Scrape websites easily with LLaMA AI","link":"https:\/\/www.scrapingbee.com\/blog\/scrapegraph-ai-tutorial-scrape-websites-easily-with-llama-ai\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapegraph-ai-tutorial-scrape-websites-easily-with-llama-ai\/","description":"<p><strong>Artificial intelligence<\/strong> is everywhere in tech these days, and it's wild how it's become a go-to tool, for example, in stuff like web scraping. Let's dive into how Scrapegraph AI can totally simplify your scraping game. Just tell it what you need in simple English, and watch it work its magic.<\/p>\n<p>I'm going to show you how to get Scrapegraph AI up and running, how to set up a language model, how to process JSON, scrape websites, use different AI models, and even turn your data into audio. 
Sounds like a lot, but it's easier than you think, and I'll walk you through it step by step.<\/p>"},{"title":"Top Instant Data Scraper Tools & Extensions in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/instant-data-scraper\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/instant-data-scraper\/","description":"<p>These days, data is everything, and the ability to extract information instantly from websites has become an indispensable asset. The best instant data scraper tool can be used for market research, keeping a vigilant eye on competitors, or harnessing real-time insights.<\/p>\n<p>But here's the truth: not all data scrapers are made equal. If you want to gather information without delay, you need to pick the right scraper and pair it with the best web scraping extension.<\/p>"},{"title":"Web Scraping Booking.com","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-booking\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-booking\/","description":"<p>With more than 28 million listings, Booking.com is one of the biggest websites to look for a place to stay during your trip. If you are opening up a new hotel in an area, you might want to keep tabs on your competition and get notified when new properties open up. This can all be automated with the power of web scraping! 
In this article, you will learn how to scrape data from the search results page of Booking.com using Python and Selenium and also handle pagination along the way.<\/p>"},{"title":"Web Scraping Handling Ajax Website","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-handling-ajax-website\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-handling-ajax-website\/","description":"<p>Today, more and more websites use Ajax for fancy user experiences, dynamic web pages, and many other good reasons.\nCrawling an Ajax-heavy website can be tricky and painful; we are going to see some tricks to make it easier.<\/p>\n<h2 id=\"prerequisite\">Prerequisite<\/h2>\n<p>Before starting, please read the previous articles 
I wrote, to learn how to set up your Java environment and gain a basic understanding of HtmlUnit: <a href=\"https:\/\/ksah.in\/introduction-to-web-scraping-with-java\/\" target=\"_blank\" >Introduction to Web Scraping With Java<\/a> and <a href=\"https:\/\/ksah.in\/how-to-log-in-to-almost-any-websites\/\" target=\"_blank\" >Handling Authentication<\/a>.\nAfter reading them, you should be a bit more familiar with web scraping.<\/p>"},{"title":"What to Do If Your IP Gets Banned While You're Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/what-to-do-if-your-ip-gets-banned-while-youre-scraping\/","pubDate":"Sun, 18 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-to-do-if-your-ip-gets-banned-while-youre-scraping\/","description":"<p>Web scraping is valuable for gathering information, studying markets, and understanding competition. But web scrapers often run into a problem: getting banned from websites.<\/p>\n<p>In most cases, it happens because the scrapers violate the website's terms of service (ToS) or generate so much traffic that they abuse the website's resources and prevent normal functioning. To protect itself, the website bans your IP from accessing its resources either temporarily or permanently.<\/p>"},{"title":"Best Cloud-Based Web Scraping Tools and APIs","link":"https:\/\/www.scrapingbee.com\/blog\/cloud-based-web-scraper\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/cloud-based-web-scraper\/","description":"<p>If you\u2019ve ever wrestled with the challenges of managing proxies, setting up headless browsers, or scaling your scraping infrastructure, you know how complex web scraping can get. That\u2019s why cloud-based web scraping tools are so useful.<\/p>\n<p>These platforms do the heavy lifting for you by managing infrastructure, proxies, browser automation, and more. 
They allow you to focus on extracting the data you actually need.<\/p>\n<p>In this article, we\u2019ll dive into the best cloud web scraper options available today, helping you find the right fit for your projects, whether you\u2019re a developer or a business user. I will also explain why I use the specific <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Scraper API<\/a> in several projects where I needed reliable JavaScript rendering and proxy rotation without the hassle of managing servers. Let's dive in!<\/p>"},{"title":"Best Real Estate Databases & Market Data Providers","link":"https:\/\/www.scrapingbee.com\/blog\/top-real-estate-data-providers\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/top-real-estate-data-providers\/","description":"<p>If you want to stay competitive in today's market, you need access to an accurate, up-to-date database for real estate. Whether you\u2019re a real estate investor, a market analyst, or a lead generation specialist, the quality of your data can make or break your decisions.<\/p>\n<p>In this guide, I'll dive into the\u00a0best real estate data providers and explain why real estate data matters. Then, I'll describe the differences between data providers and data extraction tools. By the end of this article, you'll know exactly how ScrapingBee can assist you in extracting data from platforms that don\u2019t offer APIs. Let's start!<\/p>"},{"title":"BrowserUse: How to use AI Browser Automation to Scrape","link":"https:\/\/www.scrapingbee.com\/blog\/browseruse-how-to-use-ai-browser-automation-to-scrape\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/browseruse-how-to-use-ai-browser-automation-to-scrape\/","description":"<p>AI agents, AI agents everywhere. This is one of the most popular and quickly evolving technologies out there. 
I'm not sure about you, but to me it seems like everyone is trying to use AI for literally everything: collecting data, writing letters, booking hotels, and even shopping. While I still prefer doing many of these things manually, automating boring tasks seems really tempting. Thus, in this article, we're going to see how to automate browser interactions with the help of <strong>BrowserUse<\/strong>.<\/p>"},{"title":"Dynamic Web Page Scraping With Python","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-dynamic-content\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-dynamic-content\/","description":"<p>Modern websites love to render content in the browser through dynamic and interactive JavaScript elements. However, because of that, static scrapers and parsers that work so well with Python become ineffective as they miss prices, reviews, and stock states that appear after client-side rendering. If part of your workflow involves extracting customer feedback from Amazon, our <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/amazon-review-api\/\" target=\"_blank\" >Amazon Review Scraper API<\/a> can help you automatically collect review data at scale.<\/p>\n<p>As a necessary addition to reach the desired information, the new iteration of data collection tools tries to capture dynamic web scraping with Python through headless browsers for clicking on JavaScript elements on the site. 
However, even then, mimicking real user behavior and customizing the connection until it opens access to our data source requires a lot of technical proficiency, even with tools like Selenium or Puppeteer.<\/p>"},{"title":"Extract Job Listings, Details and Salaries from Indeed with ScrapingBee and Make.com","link":"https:\/\/www.scrapingbee.com\/blog\/no-code-job-data-extraction\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/no-code-job-data-extraction\/","description":"<p>Taking the time to read through target pages is usually not the best idea. It's too time-consuming and it's easy to miss important changes when you're scrolling through hundreds of pages. Therefore, learning how to perform updates automatically without the need for coding skills is crucial.<\/p>\n<p>In this tutorial, we will scrape jobs from <a href=\"http:\/\/indeed.com\/\" target=\"_blank\" >indeed.com<\/a>, one of the most popular job aggregator websites. Web scraping is an excellent tool for finding valuable information from a job listing database.<\/p>"},{"title":"How to Parse HTML with Regex","link":"https:\/\/www.scrapingbee.com\/blog\/parse-html-regex\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/parse-html-regex\/","description":"<p>The amount of information available on the internet for human consumption is <a href=\"https:\/\/siteefy.com\/how-many-websites-are-there\/\" target=\"_blank\" >astounding<\/a>. However, if this data doesn't come in the form of a specialized REST API, it can be challenging to access programmatically. The technique of gathering and processing raw data from the internet is known as <em>web scraping<\/em>. There are several uses for web scraping in software development. 
Data collected through web scraping can be applied in market research, lead generation, competitive intelligence, product pricing comparison, monitoring consumer sentiment, brand audits, AI and machine learning, creating a job board, and more.<\/p>"},{"title":"How to read and parse JSON data with Python","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-read-and-parse-json-data-with-python\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-read-and-parse-json-data-with-python\/","description":"<p>JSON, or JavaScript Object Notation, is a popular data interchange format that has become a staple in modern web development. If you're a programmer, chances are you've come across JSON in one form or another. It's widely used in REST APIs, single-page applications, and other modern web technologies to transmit data between a server and a client, or between different parts of a client-side application. JSON is lightweight, easy to read, and simple to use, making it an ideal choice for developers looking to transmit data quickly and efficiently.<\/p>"},{"title":"How to scrape channel data from YouTube","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-youtube\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-youtube\/","description":"<p>If you are an internet user, it is safe to assume that you are no stranger to YouTube. It is the hub for videos on the internet, and even back in 2020, 500 hours of videos were being uploaded to YouTube every minute! This has led to the accumulation of a ton of useful data on the platform. You can extract and make use of some of this data via the <a href=\"https:\/\/developers.google.com\/youtube\/v3\" target=\"_blank\" >official YouTube API<\/a>, but it is rate-limited and doesn't contain all the data viewable on the website. In this tutorial, you will learn how you can scrape YouTube data using Selenium. 
This tutorial will specifically focus on extracting information about videos uploaded by a channel, but the techniques are easily transferable to extracting search results and individual video data.<\/p>"},{"title":"How to Scrape Craigslist: Step-by-Step Tutorial","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-craigslist\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-craigslist\/","description":"<p>Have you ever tried learning how to scrape Craigslist and run into a wall of CAPTCHAs and IP blocks? Trust me, my first <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> attempt was just as rocky.<\/p>\n<p>Craigslist is a gold mine of data. It contains everything from job ads, housing, and items for sale to various services. But it's not an easy nut to crack for beginners in scraping.<\/p>\n<p>Just like in any other web scraping project, you won't get anywhere without proxy rotation, JavaScript rendering, and solving CAPTCHAs. Fortunately, ScrapingBee handles all of it on autopilot. I think of it as an automated scraping assistant that handles all the technicalities.<\/p>"},{"title":"How to scrape data from a website to Excel","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-in-excel\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-in-excel\/","description":"<p>Collecting data from websites and organizing it into a structured format like Excel can be super handy. Maybe you're building reports, doing research, or just want a neat spreadsheet with all the info you need. But copying and pasting manually? That's a time sink no one enjoys. In this guide, we'll discuss a few ways to scrape data from websites and save it directly into Excel.<\/p>\n<p>Together, we'll talk about methods for both non-techies and devs, using everything from built-in Excel tools to coding your own solutions with Python. 
By the end, you'll have a clear picture of which method fits your needs the best.<\/p>"},{"title":"Pyppeteer: the Puppeteer for Python Developers","link":"https:\/\/www.scrapingbee.com\/blog\/pyppeteer\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/pyppeteer\/","description":"<p><strong>Pyppeteer<\/strong> is a handy way to let a browser do the repetitive work for you. The web is packed with useful data, but collecting it manually takes forever. Web scraping speeds things up by letting your code gather information on its own, and browser automation goes further by handling things like clicking, scrolling, and navigating just like a real user.<\/p>\n<p>Python already has plenty of scraping tools, but sometimes you need the power of a real browser without the extra weight or complexity. Pyppeteer fills that gap. It gives you a straightforward way to control a headless (or full) Chrome instance from Python, making it easier to scrape dynamic sites, load JavaScript-heavy pages, and automate tasks that simple HTTP requests can't handle.<\/p>"},{"title":"The Java Web Scraping Handbook","link":"https:\/\/www.scrapingbee.com\/java-webscraping-book\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/java-webscraping-book\/","description":"<p>This guide was originally written in 2018, and published here: <a href=\"https:\/\/www.javawebscrapinghandbook.com\/\" target=\"_blank\" >https:\/\/www.javawebscrapinghandbook.com\/<\/a>. We've decided to republish it for free on our website. 
A PDF version is also available.<\/p>\n<p><em>You don't have to give us your email to download the eBook, because like you, we hate that<\/em>: <a href=\"https:\/\/www.scrapingbee.com\/download\/webscrapinghandbook.pdf\" >DIRECT PDF VERSION<\/a>.<\/p>\n<p>Feel free to distribute it, but <em><strong>please include a link to the original content (this page)<\/strong><\/em>.<\/p>\n<hr>\n<p>Web scraping or crawling is the act of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you want. It can be done manually, but generally this term refers to the automated process of downloading the HTML content of a page, parsing\/extracting the data, and saving it into a database for further analysis or use.<\/p>"},{"title":"Using jQuery to Parse HTML and Extract Data","link":"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/","description":"<p>Your web page may sometimes need to use information from other web pages that do not provide an API. For instance, you may need to fetch stock price information from a web page in real time and display it in a widget of your web page. However, some of the stock price aggregation websites don\u2019t provide APIs.<\/p>\n<p>In such cases, you need to retrieve the source HTML of the web page and manually find the information you need. 
This process of retrieving and manually parsing HTML to find specific information is known as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Web_scraping\" target=\"_blank\" >web scraping<\/a>.<\/p>"},{"title":"Web crawling with Python made easy: From setup to first scrape","link":"https:\/\/www.scrapingbee.com\/blog\/crawling-python\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/crawling-python\/","description":"<p><strong>Web crawling with Python<\/strong> sounds fancy, but it's really just teaching your computer how to browse the web for you. Instead of clicking links and copying data by hand, you write a script that does it automatically: visiting pages, collecting info, and moving on to the next one.<\/p>\n<p>In this guide, we'll go step by step through the whole process. We'll start from a tiny script using <code>requests<\/code> and <code>BeautifulSoup<\/code>, then level up to a scalable crawler built with Scrapy. You'll also see how to clean your data, follow links safely, and use ScrapingBee to handle tricky sites with JavaScript or anti-bot rules.<\/p>"},{"title":"XPath vs CSS selectors","link":"https:\/\/www.scrapingbee.com\/blog\/xpath-vs-css-selector\/","pubDate":"Sat, 17 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/xpath-vs-css-selector\/","description":"<h2 id=\"introduction\">Introduction<\/h2>\n<p>If you have already browsed our <a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >web scraping blog<\/a> a bit, you will probably have already come across our <a href=\"https:\/\/www.scrapingbee.com\/blog\/practical-xpath-for-web-scraping\/\" >introduction to XPath expressions<\/a>, as well as our article on <a href=\"https:\/\/www.scrapingbee.com\/blog\/using-css-selectors-for-web-scraping\/\" >using CSS selectors for web scraping<\/a> - if you haven't yet, highly recommended \ud83d\udc4d. 
Quite a few good reads.<\/p>\n<p>So you may already have a good idea of what they do and how they are used, but what might be missing - to complete the picture - is how they compare to each other. That's exactly what we are going to do in today's article.<\/p>"},{"title":"Getting Started with Apache Nutch","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-apache-nutch\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-apache-nutch\/","description":"<p>Web crawling is often confused with <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a>, which is simply extracting specific data from web pages. A <a href=\"https:\/\/www.scrapingbee.com\/blog\/crawling-python\/\" target=\"_blank\" >web crawler<\/a> is an automated program that helps you find and catalog relevant data sources.<\/p>\n<p>Typically, a crawler first makes requests to a list of known web addresses and, from their content, identifies other relevant links. It adds these new URLs to a queue, iteratively takes them out, and repeats the process until the queue is empty. The crawler stores the extracted data\u2014like web page content, meta tags, and links\u2014in a database.<\/p>"},{"title":"How to Bypass CreepJS and Spoof Browser Fingerprinting","link":"https:\/\/www.scrapingbee.com\/blog\/creepjs-browser-fingerprinting\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/creepjs-browser-fingerprinting\/","description":"<p><a href=\"https:\/\/github.com\/abrahamjuliot\/creepjs\" target=\"_blank\" >CreepJS<\/a> is an open-source project designed to demonstrate vulnerabilities and leaks in extensions or browsers that users use to avoid being fingerprinted. 
It\u2019s one of the newest projects in the browser fingerprinting scene, and it uses an advanced combination of techniques such as JavaScript tampering detection and finding inconsistencies between the detected user agent and the expected feature set.<\/p>\n<p>In this tutorial, we\u2019ll see how the most popular headless browsers stack up against each other in an all-out battle to pass CreepJS\u2019s \u201cHeadless\u201d and \u201cStealth\u201d detection scores.<\/p>"},{"title":"How to parse HTML in Python: A step-by-step guide for beginners","link":"https:\/\/www.scrapingbee.com\/blog\/python-html-parsers\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-html-parsers\/","description":"<p>If you've ever tried to pull data from a website (prices, titles, reviews, links, whatever) you've probably hit that wall called <strong>how to parse HTML in Python<\/strong>. The web runs on HTML, and turning messy markup into clean, structured data is one of those rites of passage every dev goes through sooner or later.<\/p>\n<p>This guide walks you through the whole thing, step by step: fetching pages, parsing them properly, and doing it in a way that won't make websites hate you. We'll start simple, then jump into a real-world setup using ScrapingBee, which quietly handles the messy stuff like JavaScript rendering, IP rotation, and anti-bot headaches.<\/p>"},{"title":"How to Parse HTML in Ruby with Nokogiri?","link":"https:\/\/www.scrapingbee.com\/blog\/parse-html-nokogiri\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/parse-html-nokogiri\/","description":"<p>APIs are the cornerstone of the modern internet as they enable different services to communicate with each other. With APIs, you can gather information from different sources and use different services. However, not all services provide an API for you to consume. 
Even if an API is offered, it might be limited in comparison to a service\u2019s web application(s). Thankfully, you can use web scraping to overcome these limitations. <em>Web scraping<\/em> refers to the practice of extracting data from the HTML source of the web page. That is, instead of communicating with a server through APIs, web scraping lets you extract information directly from the web page itself.<\/p>"},{"title":"How to scrape all text from a website for LLM training","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-all-text-from-a-website-for-llm-ai-training\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-all-text-from-a-website-for-llm-ai-training\/","description":"<p>Artificial Intelligence (AI) is rapidly becoming a part of everyday life, and with it, the demand for training custom models has increased. Many people these days would like to train their very own... AI, not dragon, duh! One crucial step in training any large language model (LLM) is gathering a significant amount of text data. In this article, I'll show you how to collect text data from all pages of a website using web scraping techniques. We'll build a custom Python script to automate this process, making it easy to gather the data you need for your model training.<\/p>"},{"title":"How to Scrape Wikipedia with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-wikipedia\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-wikipedia\/","description":"<p>Ever wanted to extract valuable insights and data from one of the largest encyclopedias online? Then it is time to learn how to scrape Wikipedia pages! 
As one of the biggest treasuries of structured content, it is constantly reviewed and fact-checked by fellow users, or at least provides valuable insights and links to sources.<\/p>\n<p>Wikipedia has structured content, but scraping can be tricky due to rate limiting, which restricts repeated connection requests to websites. Fortunately, our powerful tools can overcome these hurdles, ensuring efficient data extraction in a clean HTML or JSON format.<\/p>"},{"title":"How to submit a form with Puppeteer?","link":"https:\/\/www.scrapingbee.com\/blog\/submit-form-puppeteer\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/submit-form-puppeteer\/","description":"<p>In this article, we will take a look at how to automate form submission using Puppeteer. <a href=\"https:\/\/pptr.dev\/\" target=\"_blank\" >Puppeteer<\/a> is an open-source Node library that provides a high-level API to control Chrome or Chromium-based browsers over the <a href=\"https:\/\/chromedevtools.github.io\/devtools-protocol\/\" target=\"_blank\" >DevTools Protocol<\/a>. Every task that you can perform with a Chrome browser can be automated with Puppeteer. This makes Puppeteer an ideal tool for web scraping and test automation. In this article, we will go over everything you need to know about automating form submission with Puppeteer. We will discuss<\/p>"},{"title":"OCaml Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/ocaml-web-scraping\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ocaml-web-scraping\/","description":"<p><a href=\"https:\/\/ocaml.org\/\" target=\"_blank\" >OCaml<\/a> is a modern, type-safe, and expressive functional programming language. 
Even though it's less commonly used than popular languages like Python or Java, you can create powerful applications like <a href=\"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping\/\" target=\"_blank\" >web scrapers<\/a> with it.<\/p>\n<p>In this article, you'll learn how to scrape static and dynamic websites with OCaml.<\/p>\n<p>To follow along, you'll need to have OCaml installed on your computer, OPAM initialized, and Dune installed. All of these steps are explained in the <a href=\"https:\/\/ocaml.org\/install\" target=\"_blank\" >official installation instructions<\/a>, so go ahead and set up the development environment before you continue.<\/p>"},{"title":"Web Scraping with Goutte: Step-by-Step Guide 2026","link":"https:\/\/www.scrapingbee.com\/blog\/laravel-web-scraper\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/laravel-web-scraper\/","description":"<p>If you\u2019re diving into web scraping with PHP, chances are you\u2019ve come across Goutte, a lightweight, elegant library built on Symfony components. Even in 2026, Goutte remains a solid choice for scraping simple, static websites, especially when paired with frameworks like Laravel.<\/p>\n<p>In this guide, I\u2019ll walk you through setting up Goutte, building basic scrapers, and understanding its limitations. 
Plus, I\u2019ll show you how to extend Goutte\u2019s power with ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >scraper API<\/a>, a modern API that handles JavaScript rendering and scales your scraping projects effortlessly.<\/p>"},{"title":"Web Scraping with Groovy","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-groovy\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-groovy\/","description":"<p><a href=\"https:\/\/groovy-lang.org\" target=\"_blank\" >Groovy<\/a> has been around for quite a while and has established itself as a reliable scripting language for tasks where you'd like to use the full power of Java and the JVM, but without all its verbosity.<\/p>\n<p>While typical use cases are often build pipelines or automated testing, it works equally well for anything related to data extraction and web scraping. And that's precisely what we are going to check out in this article. <strong>Let's fasten our seatbelts and dive right into web scraping and handling HTTP requests with Groovy.<\/strong><\/p>"},{"title":"Web scraping with JavaScript and Node.js","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/","description":"<p>JavaScript is everywhere these days, and <strong>web scraping in Node.js and JavaScript<\/strong> has become way easier thanks to how far the whole ecosystem has come. 
With Node giving JS a fast, server-side runtime, you can pull data from websites just as easily as you build web or mobile apps.<\/p>\n<p>In this article, we'll walk through how the Node.js toolbox lets you scrape the web efficiently and handle most real-world scraping needs without breaking a sweat.<\/p>"},{"title":"What is data parsing?","link":"https:\/\/www.scrapingbee.com\/blog\/data-parsing\/","pubDate":"Fri, 16 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/data-parsing\/","description":"<p>Data parsing is the process of taking data in one format and transforming it into another format. You'll find parsers used everywhere. They are commonly used in compilers when we need to parse computer code and generate machine code.<\/p>\n<p>This happens all the time when developers write code that gets run on hardware. Parsers are also present in SQL engines. SQL engines parse a SQL query, execute it, and return the results.<\/p>"},{"title":"'JMAP (YC S10) Linux Inside is hiring': the quest for the best Hacker News title","link":"https:\/\/www.scrapingbee.com\/blog\/quest-best-hacker-news-title\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/quest-best-hacker-news-title\/","description":"<h2 id=\"introduction\">Introduction<\/h2>\n<p>For those of you who don't know, <a href=\"https:\/\/news.ycombinator.com\" target=\"_blank\" >Hacker News<\/a> is a successful social news website focusing on computer science and entrepreneurship visited by more than 10m people per month (source: SimilarWeb).<\/p>\n<p>Founded by Paul Graham, it works similarly to Reddit: users submit content, which can be upvoted by the community.\nThe most upvoted content, mostly links, then reaches the front page, resulting in tens of thousands of visits for the lucky website.<\/p>"},{"title":"10 Tips on How to make Python's Beautiful Soup faster when 
scraping","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-make-pythons-beautiful-soup-faster-performance\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-make-pythons-beautiful-soup-faster-performance\/","description":"<p>Beautiful Soup is super easy to use for parsing HTML and is hugely popular. However, if you're extracting a gigantic amount of data from tons of scraped pages, it can slow to a crawl if not properly optimized.<\/p>\n<p>In this tutorial, I'll show you 10 expert-level tips and tricks for transforming Beautiful Soup into a blazing-fast data-extracting beast and how to optimize your scraping process to be as fast as lightning.<\/p>"},{"title":"Best Antidetect Browsers Listed: Top Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/anti-detect-browser\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/anti-detect-browser\/","description":"<p>In 2026, the landscape of online detection and tracking has become more sophisticated than ever. Whether you're managing multiple social media accounts, running affiliate marketing campaigns, or automating e-commerce operations, the need for tools that help you stay under the radar has skyrocketed.<\/p>\n<p>This is why anti-detect browsers are so important. These specialized browsers help mask your digital fingerprint, allowing you to operate multiple accounts without getting flagged or blocked.<\/p>"},{"title":"Create a sitemap link extractor using ScrapingBee in N8N","link":"https:\/\/www.scrapingbee.com\/blog\/create-a-sitemap-link-extractor-using-scrapingbee-in-n8n\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/create-a-sitemap-link-extractor-using-scrapingbee-in-n8n\/","description":"<p>I want to scrape a website, but wait, how do I get the links?<\/p>\n<p>Good question! 
That's exactly what we are going to answer in this blog post.<\/p>\n<p>While there are multiple options for this, we are going with an easy route, that is, extracting links from the sitemap!<\/p>\n<p>Most websites on the internet provide all of their links in a sitemap.xml or similar file. The reason they create this is to make it easier for search engines to find the website links.<\/p>"},{"title":"Free AI Powered Proxy Scraper for Getting Fresh Public 
Proxies","link":"https:\/\/www.scrapingbee.com\/blog\/free-ai-powered-proxy-scraper-for-getting-fresh-public-proxies\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/free-ai-powered-proxy-scraper-for-getting-fresh-public-proxies\/","description":"<p>Proxies are your ultimate cheat code, helping you bypass the anti-scraping bosses guarding valuable data behind firewalls and restrictions. This guide shows you how to obtain free proxies with an <a href=\"https:\/\/www.scrapingbee.com\/features\/ai-web-scraping-api\/\" target=\"_blank\" >AI-powered scraper API<\/a>, saving you time and money while leveling up your scraping game like a pro.<\/p>\n<p>Free proxies are listed by several sources on the internet, and they usually allow us to filter by protocol type, country, and other parameters. <a href=\"https:\/\/www.scrapingbee.com\/blog\/best-free-proxy-list-web-scraping\/\" target=\"_blank\" >In a previous blog post, we looked at some of these sources and tested them for various quality parameters.<\/a> (In the context of proxies, quality would refer to whether the proxy actually works or not, and also the time it takes to complete a request.) In this tutorial we'll show you how to scrape fresh public proxies from any source and evaluate them to figure out which ones are working.<\/p>"},{"title":"How to Build a Fast Scraping Bot: 2x Speed with Python Threading","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-bot\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-bot\/","description":"<p>Looking for a way to build a fast scraping bot that actually meets speed expectations? You\u2019re about to discover a proven method that doubles your scraping output while staying under the radar of anti-bot systems.<\/p>\n<p>Most developers face slow, sequential scraping that takes forever to gather meaningful data. 
But here\u2019s the truth: with proper threading implementation and ScrapingBee\u2019s reliable API, you can turn your sluggish scraper into a high-performance data collection machine. In this guide, I\u2019ll walk you through building a resilient scraping bot using Python threading techniques that I\u2019ve personally tested on various websites.<\/p>"},{"title":"How to bypass reCAPTCHA & hCaptcha when web scraping","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-recaptcha-and-hcaptcha-when-web-scraping\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-recaptcha-and-hcaptcha-when-web-scraping\/","description":"<h2 id=\"introduction\">Introduction<\/h2>\n<p>CAPTCHA - <strong>C<\/strong>ompletely <strong>A<\/strong>utomated <strong>P<\/strong>ublic <strong>T<\/strong>uring test to tell <strong>C<\/strong>omputers and <strong>H<\/strong>umans <strong>A<\/strong>part! All these little tasks and riddles you need to solve before a site lets you proceed to the actual content.<\/p>\n<blockquote>\n<p>\ud83d\udca1 Want to skip ahead and try to avoid CAPTCHAs?<\/p>\n<p>At ScrapingBee, it is our goal to provide you with the right tools to avoid triggering CAPTCHAs in the first place. Our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> has been carefully tuned so that your requests are unlikely to get stopped by a CAPTCHA, give it a go.<\/p>"},{"title":"How To Set Up A Rotating Proxy in Selenium with Python","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-rotating-proxy-in-selenium-with-python\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-rotating-proxy-in-selenium-with-python\/","description":"<p><a href=\"https:\/\/pypi.org\/project\/selenium\/\" target=\"_blank\" >Selenium<\/a> is a popular browser automation library that allows you to control headless browsers programmatically. 
However, even with Selenium, your script can still be identified as a bot and your IP address can be blocked. This is where Selenium proxies come in.<\/p>\n<p>A proxy acts as a middleman between the client and server. When a client makes a request through a proxy, the proxy forwards it to the server. This makes detecting and blocking your IP harder for the target site.<\/p>"},{"title":"Ruby HTML and XML Parsers","link":"https:\/\/www.scrapingbee.com\/blog\/ruby-html-parser\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ruby-html-parser\/","description":"<p>Extracting data from the web\u2014that is, web scraping\u2014typically requires reading and processing content from HTML and XML documents. <em>Parsers<\/em> are software tools that facilitate this scraping of web pages.<\/p>\n<p>The Ruby developer community offers some fantastic HTML and XML parsers that can serve all your web scraping needs\u2014there are a lot of options out there. In choosing which to go with, you might consider the following criteria:<\/p>"},{"title":"Study of Amazon\u2019s Best Selling & Most Read Book Charts Since 2017","link":"https:\/\/www.scrapingbee.com\/blog\/study-of-amazons-best-selling-and-most-read-book-charts-since-2017\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/study-of-amazons-best-selling-and-most-read-book-charts-since-2017\/","description":"<p>Amazon is most well known as an online shopping website, and among the tech folks for Amazon Web Services. However, it initially started as an online bookstore. They are also well known for the Kindle eBook and the Audiobook experiences they offer.<\/p>\n<p>The extensive offerings in the literature space have given Amazon so much data about reading patterns on a global scale. They present this data by publishing 4 charts every week. 
These 4 charts are the most read and the most sold books in fiction and non-fiction categories in the USA.<\/p>"},{"title":"The best Python HTTP clients for web scraping","link":"https:\/\/www.scrapingbee.com\/blog\/best-python-http-clients\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-python-http-clients\/","description":"<p>Alright, let's set the stage. When you start looking for the <strong>best Python HTTP clients<\/strong> for web scraping, you quickly realize the ecosystem is absolutely overflowing. A quick Github search pulls up more than <em>1,800 results<\/em>, which is enough to make anyone go: &quot;bro, what the hell am I even looking at?&quot;<\/p>\n<p>And yeah, choosing the right one depends on your setup more than people admit. Scraping on a single machine? Whole cluster of hungry workers? Keeping things dead simple? Or chasing raw speed like your scraper is training for the Olympics? A tiny web app pinging a microservice once in a while needs a totally different tool than a script hammering endpoints all day long. Add to that the classic concern: &quot;will this library still exist six months from now, or will it vanish like half of my side projects?&quot;<\/p>"},{"title":"Top 15 Scraper Sites to Enhance Your Data Collection Skills","link":"https:\/\/www.scrapingbee.com\/blog\/scraper-sites\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraper-sites\/","description":"<p>If you\u2019re ready to dip your feet into web scraping, you probably need some of the best websites to practise web scraping. You're in luck. These are specially designed web scraping websites that let you hone your data extraction skills without worrying about legal issues or accidentally hammering a live site. 
Think of them as your personal playgrounds for learning how to scrape efficiently and ethically.<\/p>\n<p>I remember when I first started scraping, I was nervous about breaking something or getting blocked. If you're like me, you can alleviate your worries with these test sites, which will give you the confidence to experiment with different techniques. And when you\u2019re ready to take things up a notch, platforms like <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee<\/a> let you test and scale your scrapers in real-world conditions, complete with free credits to get you started.<\/p>"},{"title":"Web Scraping with Visual Basic","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-visual-basic\/","pubDate":"Thu, 15 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-visual-basic\/","description":"<p>In this tutorial, you will <a href=\"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping\/\" target=\"_blank\" >learn how to scrape websites<\/a> using Visual Basic.<\/p>\n<p>Don't worry\u2014you won't be using any actual scrapers or metal tools. You'll just be using some good old-fashioned code. But you might be surprised at just how messy code can get when you're dealing with web scraping!<\/p>\n<p>You will start by scraping a static HTML page with an HTTP client library and parsing the result with an HTML parsing library. Then, you will move on to scraping dynamic websites using Puppeteer, a headless browser library. The tutorial also covers basic web scraping techniques, such as using CSS selectors to extract data from HTML pages.<\/p>"},{"title":"Are Product Hunt's featured products still online today?","link":"https:\/\/www.scrapingbee.com\/blog\/producthunt-cemetery\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/producthunt-cemetery\/","description":"<p>Releasing any new product these days is a competitive business. 
Mountains of new products appear daily, complete with well-produced intro videos, with every new competitor bearing a striking resemblance to one another. But how many of the products of the past stood out from the crowd and remain online today?<\/p>\n<p>In this article, I'll be showing how to query the Product Hunt API to collect data. We collected information from all the featured products from Product Hunt's 8-year history to determine how many of them still exist online or have disappeared into the tech wilderness. Along the way, we'll also discover other interesting insights into the dataset.<\/p>"},{"title":"Best Real Estate APIs for Developers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-real-estate-apis-for-developers\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-real-estate-apis-for-developers\/","description":"<p>We all know how crucial it is to have our fingers on the pulse of the property market, right? That\u2019s where real estate APIs come in. The best real estate APIs have become essential tools, providing structured access to property listings, valuations, rental analytics, neighborhood insights, and more.<\/p>\n<p>But here\u2019s the thing: APIs aren\u2019t always perfect. Sometimes they fall short on coverage, hit you with tough rate limits, or struggle to keep up with the lightning-fast pace of the market.<\/p>"},{"title":"Crawlee for Python Tutorial with Examples","link":"https:\/\/www.scrapingbee.com\/blog\/crawlee-for-python-tutorial-with-examples\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/crawlee-for-python-tutorial-with-examples\/","description":"<p>Crawlee is a brand new, free &amp; open-source (FOSS) web scraping library built by the folks at APIFY. While it is available for both Node.js and Python, we'll be looking at the Python library in this brief guide. 
It's barely been a few weeks since its release and the library has already amassed about 2800 stars on GitHub! Let's see what it's all about and why it got all those stars.<\/p>"},{"title":"Google Ads Competitor Analysis: 4 Battle-Tested Methods","link":"https:\/\/www.scrapingbee.com\/blog\/google-ads-competitor-analysis-system\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/google-ads-competitor-analysis-system\/","description":"<p>You're reviewing your <a href=\"https:\/\/ads.google.com\/home\/\" target=\"_blank\" >Google Ads dashboard<\/a> on a Monday morning, coffee in hand, when you notice your cost-per-click has mysteriously skyrocketed over the weekend. Your best-performing keywords are suddenly bleeding money, and your once-reliable ad positions are slipping. Sound familiar?<\/p>\n<p>In my years of experience with PPC campaigns and developing web <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >scraping<\/a> solutions, I've learned that in the high-stakes world of Google Ads, flying blind to your competitors' moves isn't just risky \u2013 it's expensive.<\/p>"},{"title":"How to Build Unbreakable Anti-Scraping Protection in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/anti-scraping\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/anti-scraping\/","description":"<p>The digital battlefield has never been more intense. With automated bots now accounting for approximately 50% of all internet traffic, building robust anti-scraping protection has become a critical business imperative. 
Whether you\u2019re protecting proprietary data, maintaining competitive advantages, or simply ensuring your servers don\u2019t buckle under excessive requests from web scraper operations, the stakes have never been higher.<\/p>\n<p>In my experience working with both scraping and protection systems, I\u2019ve witnessed firsthand how anti-scraping systems struggle to defend against intruders. Modern automated bots are sophisticated, using residential proxies, browser automation, and AI-powered evasion techniques that can mimic human users with startling accuracy. Only a handful of services, such as ScrapingBee, can navigate the scraping process ethically and respectfully.<\/p>"},{"title":"How to download an image with Python?","link":"https:\/\/www.scrapingbee.com\/blog\/download-image-python\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/download-image-python\/","description":"<p>If you've ever tried to download an image from a URL with Python, you already know the theory looks stupidly simple: call <code>requests.get()<\/code> and boom \u2014 image saved. Except that's not how the real world usually works. Sites block bots, images hide behind JavaScript, redirects go in circles, and bulk downloads crumble if you're not streaming, retrying, or handling files properly.<\/p>\n<p>This guide takes the actually useful route: how to stream images safely, name files without creating a junkyard, avoid duplicates, scale to thousands of downloads, and bring in ScrapingBee when a site decides to get spicy. By the end, you'll have a toolkit that works on real websites, not toy examples.<\/p>"},{"title":"How to scrape data from Twitter.com","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-twitter\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-twitter\/","description":"<p>Twitter is a gold mine for data. 
It started as a micro-blogging website and has quickly grown to become the favorite hangout spot for millions of people. Twitter provides access to most of its data via its official API but sometimes that is not enough.<\/p>\n<p>Web scraping provides some advantages over using the official API. For example, Twitter's API is rate-limited and you need to wait for a while before Twitter approves your application request and lets you access its data but this is not the case with web scraping.<\/p>"},{"title":"How to Scrape Etsy: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-etsy\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-etsy\/","description":"<p>In this guide, I'll teach you how to scrape Etsy, one of the most popular marketplaces for handmade and vintage items. If you've ever tried scraping Etsy before, you know it's not exactly a walk in the park. The website's anti-bot protections, such as CAPTCHA, IP address flagging, and constant updates, make web scraping Etsy product data a challenge.<\/p>\n<p>That\u2019s why ScrapingBee's Etsy scraper is the best tool to get the job done. It's a reliable web scraper that helps you capture real-time data from Etsy listings. It's built to handle all complex parts with JavaScript rendering and proxy rotation. With our API at hand, you can focus on extracting the data you need: Etsy product titles, prices, shop names, and more.<\/p>"},{"title":"No-code web scraping","link":"https:\/\/www.scrapingbee.com\/blog\/no-code-web-scraping\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/no-code-web-scraping\/","description":"<p>You can create software without code.<\/p>\n<p><strong>It\u2019s crazy, right?<\/strong><\/p>\n<p>There are many tools that you can use to build fully functional software. They can do anything you want. 
Without code.<\/p>\n<p>You might be thinking to yourself, what if I need something complex, like a <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraper<\/a>? That's too much, right?<\/p>\n<p>To create a web scraper, you need to create a code block to <strong>load the page<\/strong>. Then, you need another module <strong>to parse it<\/strong>. Next, you build another block to deal with this <strong>information and run actions<\/strong>. Also, you have to find ways to <strong>deal with IP blocks<\/strong>. To make matters worse, you might need <strong>to interact with the target page<\/strong>. Clicking buttons, waiting for elements, taking screenshots.<\/p>"},{"title":"The Best Guide to Using Helium Scraper for Efficient Data Extraction","link":"https:\/\/www.scrapingbee.com\/blog\/helium-scraper\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/helium-scraper\/","description":"<p>When it comes to web scraping, choosing the right tool can make all the difference between a smooth project and a frustrating ordeal. Two popular options that often come up in scraping conversations are Helium Scraper and ScrapingBee.<\/p>\n<p>Helium Scraper is a desktop, no-code scraper designed for small tasks, perfect if you want something visual and straightforward. On the other hand, ScrapingBee is a cloud-based <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> built for scalable, automated scraping, ideal for developers, enthusiasts, and enterprises.<\/p>"},{"title":"Web Scraping with Elixir","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-elixir\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-elixir\/","description":"<p>Web scraping is the process of extracting data from a website. 
Scraping can be a powerful tool in a developer's arsenal when they're looking at problems like automation or investigation, or when they need to collect data from public websites that lack an API or provide limited access to the data.<\/p>\n<p>People and businesses from a myriad of different backgrounds use web scraping, and it's more common than people realize. In fact, if you've ever copy-pasted code from a website, you've performed the same function as a web scraper\u2014albeit in a more limited fashion.<\/p>"},{"title":"Web Scraping with Html Agility Pack","link":"https:\/\/www.scrapingbee.com\/blog\/html-agility-pack\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/html-agility-pack\/","description":"<p>For any project that pulls content from the web in C# and parses it to a usable format, you will most likely find the HTML Agility Pack. The Agility Pack is standard for <a href=\"https:\/\/www.scrapingbee.com\/blog\/csharp-html-parser\/\" target=\"_blank\" >parsing HTML content in C#<\/a>, because it has several methods and properties that conveniently work with the DOM. 
Instead of requiring you to write your own parsing engine, the HTML Agility Pack gives you everything you need to find specific DOM elements, traverse through child and parent nodes, and retrieve text and properties (e.g., HREF links) within specified elements.<\/p>"},{"title":"Web Scraping With Linux And Bash","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-linux-and-bash\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-linux-and-bash\/","description":"<p>Please brace yourselves: we'll be going deep into the world of Unix command lines and shells today, as we find out how to use Bash for scraping websites.<\/p>\n<p><em>Let's fasten our seatbelts and jump right in<\/em> \ud83c\udfc1<\/p>\n\n<img src=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-linux-and-bash\/cover.png\" width=\"1200\" height=\"628\" alt='cover image'>\n\n<br>\n\n<h2 id=\"why-scraping-with-bash\">Why Scraping With Bash?<\/h2>\n<p>If you've already read a few of our other articles (e.g. <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/\" >web scraping in Python<\/a> or <a href=\"https:\/\/www.scrapingbee.com\/blog\/introduction-to-chrome-headless\/\" >using Chrome from Java<\/a>), you'll probably already be familiar with the level of convenience those high-level languages provide when it comes to crawling and scraping the web. And, while there are plenty of examples of full-fledged applications written in Bash (e.g. an entire <a href=\"http:\/\/nanoblogger.sourceforge.net\/\" target=\"_blank\" >web CMS<\/a>, an <a href=\"https:\/\/lists.gnu.org\/archive\/html\/bug-bash\/2001-02\/msg00054.html\" target=\"_blank\" >Intel assembler<\/a>, a <a href=\"https:\/\/testssl.sh\/\" target=\"_blank\" >TLS validator<\/a>, a full <a href=\"https:\/\/github.com\/dzove855\/Bash-web-server\" target=\"_blank\" >web server<\/a>), few people will argue that Bash scripts are the <em>ideal<\/em> environment for large, complex programs. So the question of why somebody would use Bash for scraping is a justified one.<\/p>"},{"title":"Web Scraping with Perl","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-perl\/","pubDate":"Wed, 14 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-perl\/","description":"<p>Web scraping is a technique for retrieving data from web pages. While one could certainly load any site in their browser and copy-paste the relevant data manually, this hardly scales, and so web scraping is a task destined for automation. 
If you are curious why one would <a href=\"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping\/#web-scraping-use-cases\" >scrape the web<\/a>, you'll find a myriad of reasons for that:<\/p>\n<ul>\n<li>Generating leads for marketing<\/li>\n<li>Monitoring prices on a page (and purchasing when the price drops)<\/li>\n<li>Academic research<\/li>\n<li><a href=\"https:\/\/en.wikipedia.org\/wiki\/Arbitrage_betting\" target=\"_blank\" >Arbitrage betting<\/a><\/li>\n<\/ul>\n<p>Perl is universally considered the &quot;Swiss Army knife of programming&quot;, and there is a good reason for that, as it particularly excels in text processing and handling of textual input of any sort. This makes it a perfect companion for web scraping, which is inherently text-centric.<\/p>"},{"title":"AI and the Art of Reddit Humor: Mapping Which Countries Joke the Most","link":"https:\/\/www.scrapingbee.com\/blog\/global-subreddit-humor-analysis-with-ai\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/global-subreddit-humor-analysis-with-ai\/","description":"<p>Making jokes on the internet is a fine art, and Reddit users globally are working diligently to keep the dad jokes coming, because the only thing better than winning an internet argument is winning an internet upvote contest with a punchline your dad would be proud of.<\/p>\n<p>In fact, Reddit's vast reservoir of dad jokes may just be the secret ingredient that helped it reach a staggering $6.4 billion valuation at its recent IPO. Who knew that jokes your dad repeats at every family gathering could be worth their weight in Reddit Gold? 
But which country attempts to make the highest proportion of jokes in their comment sections?<\/p>"},{"title":"Axios set headers: The complete guide for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/axios-headers\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/axios-headers\/","description":"<p>Any time you start wiring up API calls, headers become part of the game right away. Things like auth tokens, content types, or custom metadata all need a place to live, and Axios gives you a clean way to manage them. That's where <strong>Axios set headers<\/strong> patterns come in: simple tools that help you keep requests organized without repeating yourself.<\/p>\n<p>This guide walks through the approaches devs actually use in 2026: per-request headers, global defaults, interceptors, dynamic values, and the troubleshooting steps that save you from chasing weird bugs at 2 a.m.<\/p>"},{"title":"How to Build a VBA Web Scraper in Excel: 2026 Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/vba-web-scraping\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/vba-web-scraping\/","description":"<p>Looking for how to build a VBA web scraper in Excel? If you're just starting out, there are quite a few things you need to learn. The process has evolved significantly in 2026, especially with Internet Explorer\u2019s complete deprecation.<\/p>\n<p>In this guide, I\u2019ll walk you through creating a modern, reliable Excel VBA scraper that leverages an application programming interface instead of brittle browser automation. 
This method will save you maintenance headaches and let you\u00a0perform web scraping\u00a0directly from an\u00a0Excel workbook.<\/p>"},{"title":"How to bypass cloudflare antibot protection at scale in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-cloudflare-antibot-protection-at-scale\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-bypass-cloudflare-antibot-protection-at-scale\/","description":"<p>Over\u00a0<a href=\"https:\/\/backlinko.com\/cloudflare-users#cloudfare-key-stats\" target=\"_blank\" >7.59 million<\/a>\u00a0active websites use Cloudflare. The website you intend to scrape might be protected by it. Websites protected by services like Cloudflare can be challenging to scrape due to the various anti-bot measures they implement. If you've tried scraping such websites, you're likely already aware of the difficulty of bypassing Cloudflare's bot detection system.<\/p>\n<p>Bypassing Cloudflare becomes a near-necessity for large-scale projects or scraping popular websites. There are various methods to bypass Cloudflare, each with its pros and cons. In this guide, we'll explore each method in detail, allowing you to choose the one that best suits your needs.<\/p>"},{"title":"How to Scrape IMDb: Step-by-Step with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-imdb\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-imdb\/","description":"<p>If you want to learn how to scrape IMDb data, you\u2019re in the right place. This step-by-step tutorial shows you how to extract data, including movie details, ratings, actors, and review dates, using a Python script. You\u2019ll see how to set up the required libraries, process the HTML content, and store your results in a CSV file for further analysis using ScrapingBee\u2019s API.<\/p>\n<p>Why ScrapingBee? 
Here's the thing \u2013 if you want to scrape IMDb data, you need an infrastructure of proxies, JavaScript rendering, and other tools to avoid IP blocks. Scraping this website is particularly challenging due to its strict anti-scraping measures. But setting up everything manually costs time and resources.<\/p>"},{"title":"How to Scrape TikTok: Scrape Profile Stats and Videos","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-tiktok\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-tiktok\/","description":"<p>Are you a data analyst thirsty for social media insights and trends? A Python developer looking for a practical social media scraping project? Maybe you're a social media manager tracking metrics or a content creator wanting to download and analyze your TikTok data? If any of these describe you, you're in the right place!<\/p>\n<p><a href=\"https:\/\/www.tiktok.com\/\" target=\"_blank\" >TikTok<\/a>, the social media juggernaut, has taken the world by storm. TikTok's global success is reflected in its numbers:<\/p>"},{"title":"How to use a proxy with node-fetch?","link":"https:\/\/www.scrapingbee.com\/blog\/proxy-node-fetch\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/proxy-node-fetch\/","description":"<p>If you're trying to set up a <strong>Node fetch proxy<\/strong> for scraping or high-volume crawling, you'll quickly notice that neither native <code>fetch<\/code> nor <code>node-fetch<\/code> has built-in proxy configuration (like a <code>proxy<\/code> option or automatic <code>HTTP(S)_PROXY<\/code> support). With node-fetch you need to wire an Agent (e.g. 
<code>HttpsProxyAgent<\/code>); with native fetch you need an Undici dispatcher or, on Node 24+, <code>NODE_USE_ENV_PROXY<\/code>.<\/p>\n<p><a href=\"https:\/\/www.scrapingbee.com\/blog\/node-fetch\/\" target=\"_blank\" >Node-fetch<\/a> was originally built to bring the browser's <code>fetch<\/code> API into Node. Even though modern Node now ships with its own <code>fetch<\/code>, the idea stays the same: give devs a simple, flexible way to fire off async HTTP requests on the server.<\/p>"},{"title":"How to use asyncio to scrape websites with Python","link":"https:\/\/www.scrapingbee.com\/blog\/async-scraping-in-python\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/async-scraping-in-python\/","description":"<p>In this article, we'll take a look at how you can use Python and its coroutines, with their <code>async<\/code>\/<code>await<\/code> syntax, to efficiently scrape websites, without having to go all-in on threads \ud83e\uddf5 and semaphores \ud83d\udea6. 
For this purpose, we'll check out <a href=\"https:\/\/docs.python.org\/3\/library\/asyncio.html\" target=\"_blank\" >asyncio<\/a>, along with the asynchronous HTTP library <a href=\"https:\/\/docs.aiohttp.org\" target=\"_blank\" >aiohttp<\/a>.<\/p>\n<h2 id=\"what-is-asyncio\">What is asyncio?<\/h2>\n<p><a href=\"https:\/\/docs.python.org\/3\/library\/asyncio.html\" target=\"_blank\" >asyncio<\/a> is part of Python's standard library (yay, no additional dependency to manage \ud83e\udd73), which enables the implementation of concurrency using the same asynchronous patterns you may already know from JavaScript and other languages: <code>async<\/code> and <code>await<\/code>.<\/p>"},{"title":"Node-unblocker for Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/node-unblocker\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/node-unblocker\/","description":"<p>Web proxies help you keep your privacy and get around various restrictions while browsing the web. They hide your details, such as the request origin or IP address, and with additional software can even bypass things like rate limits.<\/p>\n<p><a href=\"https:\/\/github.com\/nfriedly\/node-unblocker\" target=\"_blank\" >node-unblocker<\/a> is one such web proxy that comes in the form of a Node.js library. You can use it for web scraping and accessing geo-restricted content, as well as other functions.<\/p>\n<p>In this article, you\u2019ll learn how to implement and use node-unblocker. You\u2019ll also see its pros, cons, and limitations as compared to a managed service like ScrapingBee.<\/p>"},{"title":"The Ultimate Guide to Web Scraping HTML for Beginners and Pros","link":"https:\/\/www.scrapingbee.com\/blog\/html-web-scraper\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/html-web-scraper\/","description":"<p>Are you wondering who HTML web scraping works for? 
You're in the right place, as I'm about to give you a thorough explanation.<\/p>\n<p>Trust me, it\u2019s a game-changer for developers, data scientists, and businesses alike. HTML (HyperText Markup Language) is the backbone of every webpage you visit. It organizes content \u2013 from headings and paragraphs to images and links \u2013 into a format browsers can understand and display. Because of this universal structure, HTML is an excellent target for scraping. It\u2019s consistent, accessible, and filled with the data you want to extract.<\/p>"},{"title":"Top 5 SEO APIs in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/top-seo-apis\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/top-seo-apis\/","description":"<p>Search engine optimization (SEO) is an ever-evolving field that demands accurate, real-time data to make informed decisions. Whether you are a seasoned SEO professional or just starting your SEO journey, having access to reliable SEO data is crucial for improving your website's visibility and driving organic traffic.<\/p>\n<p>This is where SEO APIs come into play. They provide seamless access to search engine results page (SERP) data, keyword rankings, and other essential metrics, empowering you to optimize your SEO strategy efficiently.<\/p>"},{"title":"What Is a Transparent Proxy?","link":"https:\/\/www.scrapingbee.com\/blog\/what-is-a-transparent-proxy\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-is-a-transparent-proxy\/","description":"<p>Whether you're an individual user seeking improved online privacy or a network administrator striving to optimize network performance and security for your organization, understanding the nuances of web proxies is crucial. 
Web proxies are web servers that act as a gateway between a client application and the server it needs to communicate with.<\/p>\n<p>One such proxy that plays a vital role in network management and cybersecurity is a transparent proxy. Transparent proxies are used to set up content filtering and caching, protect from common cybersecurity attacks such as DDoS, and facilitate network traffic management.<\/p>"},{"title":"What is HTTP?","link":"https:\/\/www.scrapingbee.com\/blog\/what-is-http\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-is-http\/","description":"<p>Your browser uses it, as does your REST API. It connects you to your favorite restaurant whenever you order food online. It's built into your IoT gadget and allows you to unlock doors and adjust your living room temperature, when you are on the other side of the planet. And it's even used to occasionally tunnel other protocols - <strong>HTTP<\/strong><\/p>\n<p>But what exactly is HTTP? What does it do and how does it work? If you already read some of our other articles (e.g. <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-php\/#1-http-requests\" >Web Scraping with PHP<\/a>), you'll have already come across some details, but today we really want to go in-depth into what HTTP is.<\/p>"},{"title":"What is Web Scraping? How to Scrape Data From Any Website","link":"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping-and-how-to-scrape-any-website-tutorial\/","pubDate":"Tue, 13 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping-and-how-to-scrape-any-website-tutorial\/","description":"<p><a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web Scraping<\/a> can be one of the most challenging things to do on the internet. In this tutorial we\u2019ll show you how to master Web Scraping and teach you how to extract data from any website at scale. 
We\u2019ll give you prewritten code to get you started scraping data with ease.<\/p>\n<h2 id=\"what-is-web-scraping\">What is Web Scraping?<\/h2>\n<p>Web scraping is the process of automatically extracting data from a website\u2019s HTML. This can be done at scale to visit every page on the website and download the valuable data you need, storing it in a database for later use. For example, you could regularly scrape or extract all the product prices from an e-commerce store to track changes in price so your business can change the price of your products accordingly to compete.<\/p>"},{"title":"Web scraping in C#: From basics to production-ready code (2026)","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-csharp\/","pubDate":"Mon, 12 Jan 2026 10:22:27 +0200","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-csharp\/","description":"<p>So, you wanna do <em>C# web scraping<\/em> without losing your sanity? This guide's got you! We'll go from zero to a working scraper that actually does something useful: fetching real HTML, parsing it cleanly, and saving the data to a nice CSV file.<\/p>\n<p>You'll learn how to use HtmlAgilityPack for parsing, CsvHelper for export, and ScrapingBee as your all-in-one backend that handles headless browsers, proxies, and JavaScript. Yeah, all the messy stuff nobody wants to deal with manually.<\/p>"},{"title":"Best Price Scraping Tools for 2026: Top Services Compared","link":"https:\/\/www.scrapingbee.com\/blog\/best-competitor-price-scraping-tools\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-competitor-price-scraping-tools\/","description":"<p>In today\u2019s fast-paced digital marketplace, price intelligence has become a cornerstone for businesses aiming to stay competitive. 
Accurate, up-to-date pricing data empowers companies to optimize their strategies, monitor competitors, and comply with pricing policies.<\/p>\n<p>As the demand for reliable price data grows, selecting the right price scraping tool is crucial. The best price scraping tools combine precision, speed, and resilience against anti-bot measures, enabling businesses to gather actionable insights without disruption.<\/p>"},{"title":"Getting Started with Goutte","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-goutte\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-goutte\/","description":"<p>While <a href=\"https:\/\/www.scrapingbee.com\/tutorials\/how-to-log-in-to-a-website-using-scrapingbee-with-nodejs\/\" >Node.js<\/a> and <a href=\"https:\/\/www.scrapingbee.com\/blog\/crawling-python\/\" >Python<\/a> dominate the web scraping landscape, Goutte is the go-to choice for PHP developers. It's a powerful library that provides a simple yet efficient solution to automatically extract data from websites.<\/p>\n<p>Whether you're a beginner or an experienced developer, Goutte allows you to effortlessly scrape data from websites and seamlessly display it on the frontend directly from your PHP scripts. Goutte also ensures that the scraping process doesn't compromise loading time or consume excessive backend resources such as RAM, making it an optimal choice for PHP-based scraping tasks.<\/p>"},{"title":"Getting Started with RSelenium","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-rselenium\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-rselenium\/","description":"<p>The value of unstructured data has never been more prominent than with the recent breakthrough of large language models such as <a href=\"https:\/\/www.scrapingbee.com\/features\/chatgpt\/\" target=\"_blank\" >ChatGPT<\/a> and Google Bard. 
Your organization can also capitalize on this success by building your own expert models. And what better way to collect droves of unstructured data than by scraping it?<\/p>\n<p>This article outlines how to scrape the web using R and a package known as <em>RSelenium<\/em>. RSelenium is a binding for the Selenium WebDriver, a popular web scraping tool with unmatched versatility. Selenium's interaction capabilities let you manipulate a web page before scraping its contents. This makes it one of the most popular <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> frameworks.<\/p>"},{"title":"How to make HTTP requests in Node.js with fetch API","link":"https:\/\/www.scrapingbee.com\/blog\/nodejs-fetch-api-http-requests\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/nodejs-fetch-api-http-requests\/","description":"<p>If you're looking for a clear <strong>Node.js fetch example<\/strong>, you're in the right place. Making HTTP requests is a core part of most Node.js apps, whether you're calling an API, fetching data from another service, or scraping web pages. The good news is that modern Node.js comes with a native Fetch API. For many use cases, you no longer need to install a separate HTTP client just to make requests. Fetch is built in, promise-based, and works almost the same way it does in the browser.<\/p>"},{"title":"How to Scrape TripAdvisor: Step-by-Step with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-tripadvisor\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-tripadvisor\/","description":"<p>Want to learn how to scrape TripAdvisor? Tired of overpaying for your trips? 
As one of the biggest online travel platforms, TripAdvisor has tons of valuable information that can help you save money and enjoy your time abroad.<\/p>\n<p>Scraping TripAdvisor is a great way to keep an eye on price changes, customer sentiment, and other details that can impact your trips and vacations. In this tutorial, we will explain how to extract hotel names, prices, ratings, and reviews from TripAdvisor using our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> with Python.<\/p>"},{"title":"How to use AI for automated price scraping?","link":"https:\/\/www.scrapingbee.com\/blog\/ai-price-scraping\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ai-price-scraping\/","description":"<p>In order to perform price scraping, you need to know the CSS selector or the XPath for the target element. Therefore, if you are scraping thousands of websites, you need to manually figure out the selector for each of them. And if the page changes, you need to change that as well.<\/p>\n<p>Well, not anymore.<\/p>\n<p>Today, you are going to learn how to perform automated price scraping with AI. You are going to use the power of AI to automatically get the CSS selector of the elements you want to scrape, so that you can do it at scale.<\/p>"},{"title":"Minimum Advertised Price Monitoring with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/minimum-advertised-price-monitoring-with-scrapingbee\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/minimum-advertised-price-monitoring-with-scrapingbee\/","description":"<p>To uphold their brand image and protect profits, it's crucial for manufacturers to routinely monitor the advertised prices of their products. Minimum advertised price (MAP) monitoring helps brands check whether retailers are advertising their products below the minimum price set by the brand. 
This can prevent retailers from competing on product price, which can lead to a harmful race to the bottom. MAP monitoring helps brands identify and enforce their MAP policies. For instance, if a brand sets a MAP of $100 for a new cosmetic product, MAP monitoring would enable the company to identify and take action against retailers who advertise it for less than $100.<\/p>"},{"title":"Scrapy Cloud: Build Production-Ready Web Scrapers in 30 Minutes","link":"https:\/\/www.scrapingbee.com\/blog\/scrapy-cloud\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapy-cloud\/","description":"<p>Scrapy Cloud eliminates the need for managing your own servers while providing enterprise-grade infrastructure for your web scraping projects. If you\u2019ve been running Scrapy spiders locally and dealing with server maintenance, uptime monitoring, and scaling challenges, you\u2019re about to discover a much simpler approach. The Scrapy cloud platform transforms how developers deploy and manage their scrapers. As a result, you get everything from automated scheduling to real-time monitoring in one unified dashboard.<\/p>"},{"title":"The 11 best web scraping subreddits","link":"https:\/\/www.scrapingbee.com\/blog\/11-best-subreddits-for-webscraping\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/11-best-subreddits-for-webscraping\/","description":"<p>Web scraping is an essential skill for data analysts and developers who want to extract data from websites. However, finding reliable sources to learn and discuss web scraping techniques can be challenging. 
Fortunately, several subreddits on Reddit are dedicated to web scraping, data analysis, and programming-related discussions.<\/p>\n<p>In this article, we'll explore the 11 best subreddits for web scraping and share why each of these subreddits might be useful for you on your web scraping journey.<\/p>"},{"title":"Web scraping with R: From first script to production with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-r\/","pubDate":"Mon, 12 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-r\/","description":"<p><strong>Web scraping with R<\/strong> is a practical way to collect data from websites when APIs are missing, incomplete, or locked behind login pages. With the right tools, you can move far beyond fragile one-off scripts and build scrapers that are reliable, readable, and production-ready.<\/p>\n<p>This guide walks through a modern approach to web scraping with R, using <code>rvest<\/code> and <code>httr2<\/code> for parsing and requests, and ScrapingBee to handle the hard parts like JavaScript rendering, proxies, retries, and bot protection. You'll learn how to scrape static pages, work with JSON APIs, deal with pagination and logins, and handle JavaScript-heavy sites without guessing.<\/p>"},{"title":"Scraping E-Commerce Product Data","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-e-commerce-product-data\/","pubDate":"Sun, 11 Jan 2026 10:24:37 +0100","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-e-commerce-product-data\/","description":"<p>In this tutorial, we are going to see how to extract product data from any e-commerce website with Java. 
There are lots of different use cases for product data extraction, such as:<\/p>\n<ul>\n<li>E-commerce price monitoring<\/li>\n<li>Price comparator<\/li>\n<li>Availability monitoring<\/li>\n<li>Extracting reviews<\/li>\n<li>Market research<\/li>\n<li>MAP violation<\/li>\n<\/ul>\n<p>We are going to extract these different fields: Price, Product Name, Image URL, SKU, and currency from this product page:<\/p>\n<img src=\"https:\/\/www.scrapingbee.com\/blog\/scraping-e-commerce-product-data\/Screenshot-2019-04-03-15.56.02.jpg\" width=\"986\" height=\"622\" alt='The North Face back pack'>\n<br>\n<img src=\"https:\/\/www.scrapingbee.com\/blog\/scraping-e-commerce-product-data\/cover.png\" width=\"1417\" height=\"707\" alt='cover image'>\n<br>\n<h2 id=\"what-you-will-need\">What you will need<\/h2>\n<p>We will use HtmlUnit to perform the HTTP request and parse the DOM; add this dependency to your pom.xml.<\/p>"},{"title":"What is Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping\/","pubDate":"Sun, 11 Jan 2026 09:24:27 
+0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping\/","description":"<h2 id=\"what-is-web-scraping\">What is Web Scraping?<\/h2>\n<p><a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web scraping<\/a> has many names: web crawling, data extraction, web harvesting, and a few more.<\/p>\n<p>While there are subtle nuances between these terms, the overall idea is the same: <em>to gather data from a website, transform that data to a custom format, and persist it for later use<\/em>.<\/p>\n<p>Search engines are a great example of both web crawling and web scraping. They are continuously scouting the web with the aim of creating a &quot;library&quot; of sites and their content, so that when a user searches for a particular query, they can easily and quickly provide a list of all sites on that topic. Just imagine a web without search engines \ud83d\ude28.<\/p>"},{"title":"How to put scraped website data into Google Sheets","link":"https:\/\/www.scrapingbee.com\/blog\/scrape-content-google-sheet\/","pubDate":"Sun, 11 Jan 2026 08:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrape-content-google-sheet\/","description":"<p>The process of scraping at scale can be challenging. You have to handle JavaScript rendering, <a href=\"https:\/\/www.scrapingbee.com\/blog\/introduction-to-chrome-headless\/\" >chrome headless<\/a>, captchas, and proxy configuration. Our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >scraping tool<\/a> offers all the above in one API.<\/p>\n<p>Paired with <a href=\"https:\/\/www.make.com\/en\" target=\"_blank\" >Make<\/a> (formerly known as Integromat), we will build a no-code workflow to perform any number of actions with the scraped data. 
<a href=\"https:\/\/www.make.com\/en\" target=\"_blank\" >Make<\/a> allows you to design, build, and automate anything\u2014from tasks and workflows to apps and systems\u2014without coding.<\/p>"},{"title":"Python extract text from HTML: Library guide for developers","link":"https:\/\/www.scrapingbee.com\/blog\/parsel-python\/","pubDate":"Sun, 11 Jan 2026 08:10:27 +0200","guid":"https:\/\/www.scrapingbee.com\/blog\/parsel-python\/","description":"<p>If you need to <strong>extract text from HTML in Python<\/strong>, this guide walks you through it step by step, without overcomplicating things. You'll learn what text extraction actually means, which Python libraries make it easy, and how to deal with real-world HTML that's messy, noisy, and inconsistent.<\/p>\n<p>We'll start simple with the basics, then move into practical examples, cleanup strategies, and a small end-to-end pipeline. By the end, you'll know how to turn raw HTML into clean, usable text you can store, analyze, or feed into other systems.<\/p>"},{"title":"Advanced Web Scraping: Hidden Techniques Pro Developers Actually Use","link":"https:\/\/www.scrapingbee.com\/blog\/advanced-web-scraping\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/advanced-web-scraping\/","description":"<p>Advanced web scraping isn\u2019t just about parsing HTML anymore. While beginners struggle with basic requests and BeautifulSoup, professional developers are solving complex scenarios that would make most scrapers fail instantly. 
We\u2019re talking about sites that load content through multiple AJAX requests, and hide data behind layers of JavaScript rendering.<\/p>\n<p>In my experience building scrapers for enterprise clients, I\u2019ve learned that the difference between amateur and professional web scraping lies in understanding three core challenges: scaling requests without getting blocked, handling pagination that deliberately tries to stop you, and extracting data from JavaScript-heavy pages.<\/p>"},{"title":"Airbnb web scraping with ScrapingBee: 2026 step-by-step guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-airbnb-data\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-web-scrape-airbnb-data\/","description":"<p><strong>Airbnb web scraping<\/strong> sounds scary at first, but it's actually pretty chill once you know what you're doing. In this guide, we'll walk through a real, working way to scrape Airbnb listings using ScrapingBee, without guessing, hacks, or magic steps.<\/p>\n<p>This is a practical, code-first tutorial. We'll start from a real Airbnb search results page and show how to extract structured listing data you can actually use. Descriptions, prices, ratings, and all the usual stuff you care about. We'll use Python, keep the setup simple, and focus on getting clean JSON output at the end. The same approach can be reused for other Airbnb searches with minimal changes, so once you get it, you're set.<\/p>"},{"title":"Best 10 Java Web Scraping Libraries","link":"https:\/\/www.scrapingbee.com\/blog\/best-java-web-scraping-libraries\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-java-web-scraping-libraries\/","description":"<p>In this article, I will show you the most popular Java web scraping libraries and help you choose the right one. Web scraping is the process of extracting data from websites. 
At first sight, you might think that all you need is a standard HTTP client and basic programming skills, right?<\/p>\n<p>In theory, yes, but quickly, you will face challenges like session handling, cookies, dynamically loaded content and JavaScript execution, and even anti-scraping measures (for example, CAPTCHA, IP blocking, and rate limiting).<\/p>"},{"title":"Best eBay Research Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/must-have-ebay-research-tools\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/must-have-ebay-research-tools\/","description":"<p>Product research is the bedrock of a profitable eBay business. With increased competition and the rise of AI-driven storefronts, sellers can no longer rely on intuition to pick winning items. Success now requires a data-backed approach to identify high-demand niches, monitor competitor pricing, and optimize listing visibility.<\/p>\n<p>This guide compares the best eBay product research tools available today, ranging from comprehensive analytics dashboards and keyword planners to specialized dropshipping automation and <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >flexible scraping APIs<\/a> like ScrapingBee.<\/p>"},{"title":"Best Google Maps Scraper Tools in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-google-maps-scraper\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-google-maps-scraper\/","description":"<p>When extracting valuable business data from Google Maps, finding the best Google Maps scraper is crucial. Whether you\u2019re a developer, marketer, or data analyst, you want a tool that is reliable, flexible, and capable of handling the complexities of Google Maps scraping in 2026.<\/p>\n<p>If you want the very best tool, you've come to the right place. 
In this article, I will go through the top scrapers available, explain what makes them great, and ultimately show you which one is the best Google Maps scraper.<\/p>"},{"title":"C# HTML parser guide: HtmlAgilityPack vs AngleSharp vs alternatives","link":"https:\/\/www.scrapingbee.com\/blog\/csharp-html-parser\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/csharp-html-parser\/","description":"<p>A <strong>C# HTML parser<\/strong> is a library that turns raw HTML into a structured DOM you can query. If you're scraping websites, monitoring content, or building internal tools, parsing HTML is unavoidable. The real question is which parser to use and how to use it without turning your setup into a mess.<\/p>\n<p>In this guide, we'll walk through the most common C# HTML parsers, explain where each one fits, and show how they work in a practical scraping workflow. The focus is on real-world usage, not theory. You'll see when a lightweight parser is enough, when a more browser-like DOM helps, and when full browser automation is overkill.<\/p>"},{"title":"Cloudscraper Python guide: Scrape Cloudflare sites step by step","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-websites-with-cloudscraper-python-example\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-websites-with-cloudscraper-python-example\/","description":"<p><strong>Cloudscraper Python<\/strong> is a popular package for scraping websites protected by Cloudflare without spinning up a full browser. It helps you bypass basic JavaScript challenges, handle cookies automatically, and get real HTML instead of those annoying block or &quot;checking your browser&quot; pages.<\/p>\n<p>In this guide, we'll break down how to set Cloudscraper up the right way, what it actually does under the hood, and where its hard limits are. 
You'll also learn when Cloudscraper is totally fine to use, and when it's smarter to switch to heavier, more reliable tools for production-grade scraping.<\/p>"},{"title":"Guide to Scraping E-commerce Websites","link":"https:\/\/www.scrapingbee.com\/blog\/guide-to-scraping-e-commerce-websites\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/guide-to-scraping-e-commerce-websites\/","description":"<p>Scraping e-commerce websites has become increasingly important for companies to gain a competitive edge in the digital marketplace. It provides access to vast amounts of product data quickly and efficiently. These sites often feature a multitude of products, prices, and customer reviews that can be difficult to review manually. When the data extraction process is automated, businesses can save time and resources while obtaining comprehensive and up-to-date information about their competitors' offerings, pricing strategies, and customer sentiment.<\/p>"},{"title":"How to make API calls using Python","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-make-python-api-calls\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-make-python-api-calls\/","description":"<p>This tutorial will show you how to make HTTP API calls using Python. There are many ways to skin a cat and there are multiple methods for making API calls in Python, but today we'll be demonstrating the <code>requests<\/code> library, making API calls to the hugely popular <a href=\"https:\/\/www.scrapingbee.com\/features\/chatgpt\/\" target=\"_blank\" >OpenAI ChatGPT API<\/a>.<\/p>\n<p>We'll give you a demo of the more pragmatic approach and experiment with their dedicated Software Development Kit (SDK) so you can easily integrate AI into your project. 
We'll also explain how to make API requests to our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web Scraping API<\/a> which will give you the power to pull data from any website into your project.<\/p>"},{"title":"How to scrape emails from a website with Python and ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-emails-from-any-website-for-sales-prospecting\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-emails-from-any-website-for-sales-prospecting\/","description":"<p>If you've ever tried to <strong>scrape emails from website<\/strong> pages by hand, you know how messy it can get. Some sites hide emails in <code>mailto:<\/code> links, others bury them in JavaScript, and a few try to obfuscate them entirely. Still, email remains one of the most reliable ways to reach partners, leads, or customers, and having a clean, targeted list can make a huge difference.<\/p>\n<p>The good news: scraping emails doesn't have to be painful. With a bit of Python and ScrapingBee handling the heavy lifting (HTML fetching, JS rendering, anti-bot stuff), you can pull contact info from real pages without juggling proxies or browser automation. 
And if coding isn't your thing, ScrapingBee also offers no-code and low-code options to get the job done.<\/p>"},{"title":"How to Scrape Job Postings with a Free AI Job Board Scraper","link":"https:\/\/www.scrapingbee.com\/blog\/build-job-board-web-scraping\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/build-job-board-web-scraping\/","description":"<p>The job market is a fiercely competitive place, and getting an edge in your search can mean the difference between success and failure, so many tech-savvy job seekers turn to web-scraping job listings to get ahead of the competition, enabling them to see new relevant jobs as soon as they hit the market.<\/p>\n<p>Scraping job listings can be an invaluable tool for finding your next role, and in this tutorial, we\u2019ll teach you how to use our AI-powered Web Scraping API to harvest job vacancies from any job board with ease.<\/p>"},{"title":"How to scrape websites with Google Sheets","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-websites-with-google-sheets\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-websites-with-google-sheets\/","description":"<h2 id=\"using-google-sheets-for-scraping\">Using Google Sheets for Scraping<\/h2>\n<p>Web scraping, the process of extracting data from websites, has evolved into an indispensable tool for all kinds of industries, from market research to content aggregation. While programming languages like Python are often the go-to choice for scraping, a surprisingly efficient and accessible alternative is Google Sheets.<\/p>\n<p>Google Sheets is primarily known as a versatile spreadsheet application for creating, editing, and organizing data. However, it also offers some powerful web scraping capabilities that make it an attractive option, especially for individuals and organizations with minimal coding experience. 
With functions such as <a href=\"https:\/\/support.google.com\/docs\/answer\/3093342?hl=en&amp;ref_topic=9199554&amp;sjid=7580732861875045213-AP\" target=\"_blank\" >IMPORTXML<\/a> and <a href=\"https:\/\/support.google.com\/docs\/answer\/3093339?sjid=7580732861875045213-AP\" target=\"_blank\" >IMPORTHTML<\/a> that allow you to extract data from websites without writing any code, you can use Google Sheets as a web scraping tool.<\/p>"},{"title":"How to send a POST with Python Requests?","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-send-post-python-requests\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-send-post-python-requests\/","description":"<p>When you're working with APIs or automating web-related tasks, sooner or later you'll need to send data instead of just fetching it. That's where a <strong>POST request in Python<\/strong> comes in. It's the basic move for things like logging into a service, submitting a web form, or sending JSON to an API endpoint.<\/p>\n<p>Using the <code>requests<\/code> library keeps things clean and human-friendly. No browser automation, no Selenium gymnastics, no pretending to click buttons. You just send a POST request in Python, wait for the response, and continue on. It's readable, dependable, and more or less the default way most developers handle HTTP in Python these days.<\/p>"},{"title":"Infinite Scroll with Puppeteer","link":"https:\/\/www.scrapingbee.com\/blog\/infinite-scroll-puppeteer\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/infinite-scroll-puppeteer\/","description":"<p><strong><a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web scraping<\/a> is automating the process of data collection from the web.<\/strong> This usually means deploying a \u201ccrawler\u201d that automatically searches the web and scrapes data from selected pages. 
Data collection through scraping can be much faster, eliminating the need for manual data-gathering, <em>and may even be mandatory if the website provides no API<\/em>. Scraping methods change based on the website's data display mechanisms.<\/p>\n<p>One way to display content is through a one-page website, also known as a single-page application. Single-page applications (SPAs) have become a trend, and with the implementation of infinite scrolling techniques, programmers can develop SPAs that allow users to scroll <em>forever<\/em>. If you are an avid social media user, you have most likely experienced this feature before on platforms like Instagram, Twitter, Facebook, Pinterest, etc.<\/p>"},{"title":"Java headless browser guide: Run websites without a UI","link":"https:\/\/www.scrapingbee.com\/blog\/introduction-to-chrome-headless\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/introduction-to-chrome-headless\/","description":"<p>A <strong>Java headless browser<\/strong> lets you run and control real websites from Java without opening a visible browser window. It solves a problem many developers hit when simple HTTP requests stop working because pages rely on JavaScript, logins, or client-side rendering.<\/p>\n<p>In this guide, you will learn when a headless browser makes sense and when it does not. We will walk through the main tools available in Java, show how to set up a project, and build a working script step by step. 
You will also see how to handle common real-world challenges like authentication, single page applications, and AJAX-heavy pages.<\/p>"},{"title":"Kotlin web scraping: Learn how to extract data step by step","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-kotlin\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-kotlin\/","description":"<p><strong>Kotlin web scraping<\/strong> is a practical way to extract data from websites using a modern, JVM-based language. Developers choose Kotlin because it combines clean syntax, strong typing, and full access to the Java ecosystem, making scraping code easier to write and safer to maintain.<\/p>\n<p>In this guide, you will learn how web scraping works in Kotlin from the ground up. We will cover the tools you need, how to fetch and parse HTML, and how to extract real data using clear, step-by-step examples. By the end of the article, you will know how to build a working Kotlin scraper, understand when simple HTTP requests are enough, and recognize when more advanced solutions are needed for JavaScript-heavy or protected sites.<\/p>"},{"title":"Playwright web scraping: How to make your scripts faster","link":"https:\/\/www.scrapingbee.com\/blog\/playwright-web-scraping\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/playwright-web-scraping\/","description":"<p><strong>Playwright web scraping<\/strong> can be fast, reliable, and surprisingly simple if you know where the time actually goes. This guide breaks down the practical techniques that make Playwright scripts run quicker without turning them into fragile hacks.<\/p>\n<p>We'll cover setup choices, browser modes, navigation timing, resource blocking, parallel execution, and basic anti-bot strategies. Everything is focused on real performance wins, not theory. 
If you already use Playwright and want it to feel snappier in production, this article walks you through exactly how to do that.<\/p>"},{"title":"Puppeteer download file: 4 proven ways to save files in Node.js","link":"https:\/\/www.scrapingbee.com\/blog\/download-file-puppeteer\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/download-file-puppeteer\/","description":"<p>A <strong>Puppeteer download file<\/strong> task sounds simple until it breaks in real life. Some sites trigger real browser downloads. Others hide files behind JavaScript, redirects, or dynamic buttons. In many cases, your script clicks &quot;Download&quot; and exits before anything is saved. As soon as you move beyond toy examples, downloading files with Puppeteer becomes surprisingly tricky.<\/p>\n<p>This guide walks through four proven ways to handle a Puppeteer download file in Node.js. Each method solves a different problem, from simple button clicks to scalable, production-ready downloads. By the end, you'll know which pattern to use and why.<\/p>"},{"title":"Rust web scraping: Complete beginner guide","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-rust\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-rust\/","description":"<p><strong>Rust web scraping<\/strong> is about programmatically collecting data from websites using Rust's speed, safety, and async tooling. It matters because more products, prices, and public data live on the web, and developers need reliable ways to extract that data without fragile scripts or slow runtimes.<\/p>\n<p>In this guide, you'll learn how to scrape websites with Rust step by step. We'll start with a minimal setup for static pages, show how to parse and extract structured data, and then move into real-world cases like JavaScript-heavy sites and bot-protected marketplaces. 
You'll also see when it makes sense to switch from low-level scraping to a <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web Scraping API<\/a>, and how Rust fits cleanly into that workflow.<\/p>"},{"title":"Web Scraping in C++ with libxml2 and libcurl","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-c++\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-c++\/","description":"<p>Web scraping is an important part of automated data extraction from web content. While languages like Python are commonly used, C++ offers significant advantages in performance and control. With its low-level memory management, speed, and ability to handle large-scale data efficiently, it is an excellent choice for web scraping tasks that demand high performance.<\/p>\n<p>In this article, we shall take a look at the advantages of developing our own custom web scraper in C++ and what its speed, resource efficiency, and scalability for complex scraping operations can bring to the table. You\u2019ll learn how to implement a web scraper with the <code>libcurl<\/code> and <code>libxml2<\/code> libraries.<\/p>"},{"title":"Web Scraping in Golang Tutorial With Quick Start Examples","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-go\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-go\/","description":"<p>In this article, you will learn how to create a simple web scraper using <a href=\"https:\/\/golang.org\/\" target=\"_blank\" >Go<\/a>.<\/p>\n<p>Robert Griesemer, Rob Pike, and Ken Thompson created the Golang programming language at Google, and it has been in the market since 2009. Go, also known as Golang, has many brilliant features. Getting started with Go is fast and straightforward. 
As a result, this comparatively newer language is gaining a lot of traction in the developer world.<\/p>"},{"title":"Web scraping Java: From setup to production scrapers","link":"https:\/\/www.scrapingbee.com\/blog\/introduction-to-web-scraping-with-java\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/introduction-to-web-scraping-with-java\/","description":"<p><strong>Web scraping Java<\/strong> is (probably) harder than it should be. Making one HTTP request is easy. Building a scraper that survives pagination, JavaScript rendering, parallel requests, and blocking is where most Java projects fall apart.<\/p>\n<p>In this tutorial, you'll build a reliable scraper with Java 21, Jsoup, and ScrapingBee. We'll cover static scraping, pagination, parallel crawling, and the cases where Selenium still makes sense. And you'll do it without running your own proxies, CAPTCHAs, or headless browsers.<\/p>"},{"title":"Web Scraping with PHP Tutorial with Example Scripts (2026)","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-php\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-php\/","description":"<p>You might have seen one of our other tutorials on how to scrape websites, for example with <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-ruby\/\" target=\"_blank\" >Ruby<\/a>, <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" target=\"_blank\" >JavaScript<\/a> or <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/\" target=\"_blank\" >Python<\/a>, and wondered: what about <a href=\"https:\/\/w3techs.com\/technologies\/overview\/programming_language\" target=\"_blank\" >the most widely used server-side programming language for websites<\/a>, which, at the same time, is <a href=\"https:\/\/insights.stackoverflow.com\/survey\/2020#technology-most-loved-dreaded-and-wanted-languages-dreaded\" target=\"_blank\" 
>one of the most dreaded<\/a>? Wonder no more - today it's time for <strong>PHP<\/strong> \ud83e\udd73!<\/p>\n<p>Believe it or not, PHP and web scraping have much in common: just like PHP, web scraping can be used either in a quick and dirty way or in a more elaborate fashion and supported with the help of additional tools and services.<\/p>"},{"title":"What are ISP proxies?","link":"https:\/\/www.scrapingbee.com\/blog\/isp-proxy\/","pubDate":"Sun, 11 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/isp-proxy\/","description":"<p>Proxies, intermediary servers that route your internet traffic, usually fall into three categories: datacenter, residential, and ISP. By definition, ISP proxies are affiliated with an internet service provider, but in fact, it\u2019s easier to see them as a combination of datacenter and residential proxies.<\/p>\n<p>Let\u2019s take a closer look at ISP proxies and see how they\u2019re particularly useful for web scraping.<\/p>\n<img src=\"https:\/\/www.scrapingbee.com\/blog\/isp-proxy\/cover.png\" width=\"1200\" height=\"628\" alt='cover image'>\n<br>\n<h2 id=\"what-are-isp-proxies\">What are ISP Proxies?<\/h2>\n<p>ISP proxies are residential proxies hosted in a data center. With ISP proxies, you get the benefits of data center network speed and the great reputation of residential IPs.<\/p>"},{"title":"5 Best Amazon Scraping Tools For Reliable Product Data","link":"https:\/\/www.scrapingbee.com\/blog\/amazon-scraper\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/amazon-scraper\/","description":"<p>When it comes to scraping data from the Amazon website, choosing the right tool can make all the difference. Whether you want to extract specific data like product descriptions, prices, or customer reviews, or you need to gather data for competitor analysis, the right Amazon scraper can save you time and effort by automating your scraping tasks with just a few clicks.<\/p>\n<p>In this article, we\u2019ll explore the top five Amazon scraping tools, highlighting their features, strengths, and how they stack up against each other. My goal is to help you make an informed decision and introduce you to\u00a0ScrapingBee. This solution tops my list with plenty of additional features that make it a great Amazon product scraper.<\/p>"},{"title":"How to Build a Powerful Web Scraper in PowerShell (2026 Guide)","link":"https:\/\/www.scrapingbee.com\/blog\/powershell-web-scraping\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/powershell-web-scraping\/","description":"<p>Building a web scraper in PowerShell is not as hard as it may sound. As a configuration and automation engine, Windows PowerShell has evolved far beyond simple system administration. 
In 2026, PowerShell Core, a cross-platform edition with object-oriented pipeline support, offers robust web scraping capabilities that rival any modern web scraping tool.<\/p>\n<p>This ultimate guide will show you how to scrape data from any web page in a structured, efficient, and reliable way. You\u2019ll learn how to scrape web pages, make an API request with the Invoke-WebRequest cmdlet, and export CSV files, all using simple, lightweight commands.<\/p>"},{"title":"How to execute JavaScript with Scrapy?","link":"https:\/\/www.scrapingbee.com\/blog\/scrapy-javascript\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapy-javascript\/","description":"<p>Most modern websites use a client-side JavaScript framework such as React, Vue or Angular. Scraping data from a dynamic website without server-side rendering often requires executing JavaScript code.<\/p>\n<p>I\u2019ve scraped hundreds of sites, and I always use Scrapy. Scrapy is a popular Python web scraping framework. Compared to other Python scraping libraries, such as Beautiful Soup, Scrapy forces you to structure your code based on some best practices. In exchange, Scrapy takes care of concurrency, collecting stats, caching, handling retry logic, and more.<\/p>"},{"title":"Ultimate Git and GitHub Tutorial with Examples","link":"https:\/\/www.scrapingbee.com\/blog\/ultimate-git-and-github-commands-tutorial-with-examples\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ultimate-git-and-github-commands-tutorial-with-examples\/","description":"<p>In software development, <strong>Git and GitHub<\/strong> have become essential tools for managing and collaborating on code. In this guide, we'll learn how to use Git, a powerful version control system, and GitHub, the leading platform for hosting and sharing Git repositories.<\/p>\n<p>We will start by discussing Git and its most important terms. 
We'll cover basic Git commands and approaches and then move on to GitHub. Finally, we'll explore commands to work with GitHub repositories and answer some common questions. By the end of this article, you'll be familiar with both Git and GitHub and all the standard approaches. So, let's get started!<\/p>"},{"title":"Web Scraping with node-fetch","link":"https:\/\/www.scrapingbee.com\/blog\/node-fetch\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/node-fetch\/","description":"<p>The introduction of the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/API\/Fetch_API\" target=\"_blank\" >Fetch API<\/a> changed how JavaScript developers make HTTP calls. This means that developers no longer have to download third-party packages just to make an HTTP request. That was great news for frontend developers, but since <code>fetch<\/code> could only be used in the browser, backend developers still had to rely on third-party packages. That changed when <code>node-fetch<\/code> came along, aiming to provide the same fetch API that browsers support. In this article, we will take a look at how <code>node-fetch<\/code> can be used to help you scrape the web!<\/p>"},{"title":"XPath\/CSS Cheat Sheet","link":"https:\/\/www.scrapingbee.com\/blog\/xpath-css-cheat-sheet\/","pubDate":"Sat, 10 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/xpath-css-cheat-sheet\/","description":"<p>This cheat sheet provides a comprehensive overview of XPath and CSS selectors. 
It includes the most commonly used selectors and functions, along with examples to help you understand how they work.<\/p>\n<p>This cheat sheet is available to download as a <a href=\"cheatsheet.pdf\" >PDF file<\/a>.<\/p>\n<blockquote>\n<p>Sign up for <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" target=\"_blank\" >1000 free web scraping API credits<\/a> and try these selectors for free.<\/p>\n<\/blockquote>\n<h2 id=\"how-to-copy-an-xpath-selector-from-chrome-dev-tools\">How to copy an XPath selector from Chrome Dev Tools<\/h2>\n<ol>\n<li>Open Chrome Dev Tools (press F12 key or right-click on the webpage and select &quot;Inspect&quot;)<\/li>\n<li>Use the element selector tool to highlight the element you want to scrape<\/li>\n<li>Right-click the highlighted element in the Dev Tools panel<\/li>\n<li>Select &quot;Copy&quot; and then &quot;Copy XPath&quot;<\/li>\n<li>Paste the XPath expression into the code<\/li>\n<\/ol>\n<p><img src=\"copying-xpath-from-chrome-dev-tools.gif\" alt=\"Using Chrome developer tools to copy Target XPath\"><\/p>"},{"title":"5 Best Web Scraping Tools For Beginners in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-web-scraping-tools-for-beginners\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-web-scraping-tools-for-beginners\/","description":"<p>Web scraping is the automated process of extracting data from websites and turning it into a structured format like a spreadsheet or a database. Unless you're a developer, you might not want to learn how to code the process from scratch. That's where the best web scraping tools come into play.<\/p>\n<p>Yet, that doesn't mean they don't require coding at all. 
The choice comes down to a trade-off between &quot;no-code&quot; visual tools that let you click what you want and\u00a0<a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> solutions like ScrapingBee that handle the heavy technical lifting behind the scenes.<\/p>"},{"title":"5 Best Webmaster Unblockers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-web-unblockers\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-web-unblockers\/","description":"<p>These days, having a reliable web unblocker is essential for anyone managing websites, conducting SEO audits, or performing competitive research. Websites often implement sophisticated anti-bot systems and digital barriers like browser fingerprinting, geo restrictions, and cookie management. They do this to protect their public web data, avoid server overload, and safeguard privacy.<\/p>\n<p>These measures can block or limit access to valuable web data, making it challenging to collect information. That\u2019s why dependable webmaster unblockers are crucial. They provide seamless access to blocked content by intelligently bypassing geo-restrictions and unblocking websites without compromising data integrity or validation.<\/p>"},{"title":"ChatGPT Scraping - How to Vibe Scrape with ChatGPT","link":"https:\/\/www.scrapingbee.com\/blog\/chatgpt-scraping\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/chatgpt-scraping\/","description":"<p>LLMs such as ChatGPT have changed how developers write, review, and test code. The biggest testament to this is the rise of the term &quot;Vibe coding&quot;, which was coined by <a href=\"https:\/\/x.com\/karpathy\/status\/1886192184808149383\" target=\"_blank\" >Andrej Karpathy in an X post<\/a>. 
To quote the post:<\/p>\n<blockquote>\n<p>There's a new kind of coding I call &quot;vibe coding&quot;, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like &quot;decrease the padding on the sidebar by half&quot; because I'm too lazy to find it. I &quot;Accept All&quot; always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.\n~ Andrej Karpathy on X<\/p>"},{"title":"cURL JavaScript Guide: How to convert commands to JS","link":"https:\/\/www.scrapingbee.com\/blog\/a-javascript-developers-guide-to-curl\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/a-javascript-developers-guide-to-curl\/","description":"<p>JavaScript doesn't run <code>curl<\/code> commands directly, but converting so-called <em>cURL JavaScript snippets<\/em> into real code is easier than it looks. 
This guide walks you through the whole process: how cURL works, how to translate its flags into <code>fetch<\/code> or Axios, how to grab <code>curl<\/code> commands from your browser, and how to turn them into clean, modern JavaScript you can drop straight into your project.<\/p>\n<p>We'll keep everything simple and practical: short examples, clear steps, and tooling you can use right away.<\/p>"},{"title":"How to use a proxy with HttpClient in C#","link":"https:\/\/www.scrapingbee.com\/blog\/csharp-httpclient-proxy\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/csharp-httpclient-proxy\/","description":"<p>In this article, we'll walk through how to use a C# HttpClient proxy. HttpClient is built into .NET and supports async by default, so it's the standard way to send requests through a proxy.<\/p>\n<p>Developers often use proxies to stay anonymous, avoid IP blocks, or just control where the traffic goes. Whatever your reason, by the end of this article you'll know how to work with both authenticated and unauthenticated proxies in HttpClient.<\/p>"},{"title":"HTML Parsing in Java with JSoup","link":"https:\/\/www.scrapingbee.com\/blog\/java-parse-html-jsoup\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/java-parse-html-jsoup\/","description":"<p>It's a fine Sunday morning, and suddenly an idea for your next big project hits you: &quot;How about I take the data provided by company X and build a frontend for it?&quot; You jump into coding and realize that company X doesn't provide an API for their data. 
Their website is the only source for their data.<\/p>\n<p>It's time to resort to good old web scraping, the automated process to parse and extract data from the HTML source code of a website.<\/p>"},{"title":"Scrapy vs Selenium: Which one to choose","link":"https:\/\/www.scrapingbee.com\/blog\/scrapy-vs-selenium\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapy-vs-selenium\/","description":"<p>The Scrapy vs Selenium debate has been ongoing in the web scraping community for years. Both tools have carved out their own territories in the world of data extraction and web automation, but choosing between them can feel like picking between a race car and a Swiss Army knife, they\u2019re both excellent, just for different reasons.<\/p>\n<p>If you\u2019ve ever found yourself staring at a website wondering how to extract its data efficiently, you\u2019ve probably encountered these two powerhouses. Scrapy stands as the world\u2019s most popular open-source web scraping framework, while Selenium has established itself as the go-to solution for browser automation and testing. But which one should you reach for when your next project demands results?<\/p>"},{"title":"Serverless Web Scraping With Aws Lambda and Java","link":"https:\/\/www.scrapingbee.com\/blog\/serverless-web-scraping-with-aws-lambda-and-java\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/serverless-web-scraping-with-aws-lambda-and-java\/","description":"<p>Serverless is a term referring to the execution of code inside ephemeral containers (Function As A Service, or FaaS). 
It is a hot topic in 2019; after the \u201cmicro-service\u201d hype, here come the \u201cnano-services\u201d!<\/p>\n<p>Cloud functions can be triggered by different things such as:<\/p>\n<ul>\n<li>An HTTP call to a REST API<\/li>\n<li>A job in a message queue<\/li>\n<li>A log<\/li>\n<li>An IoT event<\/li>\n<\/ul>\n<p>Cloud functions are a really good fit for web scraping tasks for many reasons. Web scraping is I\/O-bound: most of the time is spent waiting for HTTP responses, so we don\u2019t need high-end CPU servers. Cloud functions are cheap (the first 1M requests are free, then $0.20 per million requests) and easy to set up. Cloud functions are a good fit for parallel scraping: we can run hundreds or thousands of functions at the same time for large-scale scraping.<\/p>"},{"title":"The Best JavaScript Web Scraping Libraries","link":"https:\/\/www.scrapingbee.com\/blog\/best-javascript-web-scraping-libraries\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-javascript-web-scraping-libraries\/","description":"<p>Ever need to pull data from websites \u2013 things like product details, news articles, or even just prices? Web scraping is your go-to, and luckily, JavaScript offers some nice tools for the job. Whether you're facing a simple HTML page or a dynamic interactive site, there's a library out there that can handle it.<\/p>\n<p>In this guide we'll dive into the best JavaScript web scraping tools that people are actually using in 2026. For each one, you'll get: a brief overview, a code snippet to get you started, as well as pros and cons.<\/p>"},{"title":"The Best Ruby HTTP clients","link":"https:\/\/www.scrapingbee.com\/blog\/best-ruby-http-clients\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-ruby-http-clients\/","description":"<p>How does one choose the perfect HTTP Client? The Ruby ecosystem offers a wealth of gems to make an HTTP request. 
Some are pure Ruby, some are based on Ruby's native <code>Net::HTTP<\/code>, and some are wrappers for existing libraries or Ruby bindings for libcurl. In this article, I will present the most popular gems by providing a short description and code snippets of making a request to the <a href=\"https:\/\/icanhazdadjoke.com\/\" target=\"_blank\" >Dad Jokes API<\/a>. The gems will be provided in the order from the most-downloaded one to the least. To conclude I will compare them all in a table format and provide a quick summary, as well as guidance on which gem to choose.<\/p>"},{"title":"Using Watir to automate web browsers with Ruby","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-watir-ruby\/","pubDate":"Fri, 09 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-watir-ruby\/","description":"<p>For years, it\u2019s been possible to automate simple tasks on a computer when those tasks have been executed using the command line. This is known as <em>scripting<\/em>. A bigger challenge, however, is to control the browser since a GUI introduces a lot more variability in how elements act.<\/p>\n<p><em>Browser automation<\/em> describes the process of programmatically performing certain actions in the browser (or handing these actions over to robots) that might otherwise be quite tedious or repetitive to be performed manually by a human.<\/p>"},{"title":"5 Best Free Web Scraping Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-free-web-scraping-tools\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-free-web-scraping-tools\/","description":"<p>Whether you are a solo entrepreneur tracking competitor prices, a researcher gathering sentiment for an academic paper, or a developer building a new AI-driven application, the need to extract data from the web has never been higher. 
However, investing in a high-end scraping stack before you\u2019ve even validated your project can feel like a massive financial risk.<\/p>\n<p>This is where the search for the best free web scraping tools begins. Free web scrapers offer an excellent way to test your ideas, learn the ropes of data extraction, and build small-scale automation without a budget. However, it is essential to set realistic expectations from the start. &quot;Free&quot; almost always comes with caveats: limited page counts, restricted features, or a lack of managed infrastructure like proxies and CAPTCHA solvers. Many of the most popular tools on the market today are actually &quot;free-to-start&quot; trials or browser extensions with local execution limits.<\/p>"},{"title":"BeautifulSoup tutorial: Scraping web pages with Python","link":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-beautiful-soup\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-beautiful-soup\/","description":"<p>The internet is an endless source of data, and for many data-driven tasks, accessing this information is critical. Thus, the demand for web scraping has risen exponentially in recent years, becoming an important tool for data analysts, machine learning developers, and businesses alike. Also, Python has become the most popular programming language for this purpose.<\/p>\n<p>In this detailed tutorial,\u00a0you'll learn how to access the data using popular libraries such as Requests and Beautiful Soup with CSS selectors.<\/p>"},{"title":"Charles proxy for web scraping","link":"https:\/\/www.scrapingbee.com\/blog\/charles-proxy\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/charles-proxy\/","description":"<p>Charles proxy is an HTTP debugging proxy that can inspect network calls and debug SSL traffic. With Charles, you are able to inspect requests\/responses, headers and cookies. 
Today we will see how to set up Charles, and how we can use Charles proxy for web scraping. We will focus on extracting data from Javascript-heavy web pages and mobile applications. Charles sits between your applications and the internet:<\/p>\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n<div class=\"img\" style=\"background: url(data:image\/jpeg;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAIAAADwazoUAAABOklEQVR4nGyR3Y7jIAyFbf6hUqSq7\/9&#43;vahaKWkKjvlZbbybiWbmuwJzjvEB83g8vPcAgIi9d2ttCKG1dr\/ftdaIWEqZpul6vQLAGCPnXGsVGS7LklJCRABorfXeEVEpxcy99zGGMUZrrZRCRCKy1mqte&#43;\/btuE8z0QEAOu6xhinaZrnOcYYQvh7jJhSIqL3&#43;x1jZGYiajuXywXXdU0pwU6ttbUmKWRIiQP\/yTk75xCxtcbMhpmXZTnUIYRDerYJzjkikrr3HqW9BGbms1mKWutj&#43;3w&#43;JZH4v8wAsG1brdU5Z4wBAN6xOwBQStFay\/rfaGezTE5EOWfvvVLKWltKsdbijjQ9MN9SIWIIYYzxer167\/Ir3vvb7fbzCb7ffE4rEYjIOaeU&#43;qn5pSTonc\/nY6391QkAfwIAAP\/\/0pq\/xRrmHWcAAAAASUVORK5CYII=); background-size: cover\">\n <svg width=\"759\" height=\"419\" aria-hidden=\"true\" style=\"background-color:white\"><\/svg>\n <img\n class=\"lazyload\"\n data-sizes=\"auto\"\n data-srcset=', \/blog\/charles-proxy\/charles_drawing.png 759 '\n data-src=\"https:\/\/www.scrapingbee.com\/blog\/charles-proxy\/charles_drawing.png\"\n width=\"759\" height=\"419\"\n alt='Charles proxy drawing'>\n <noscript>\n <img\n loading=\"lazy\"\n \n srcset=', \/blog\/charles-proxy\/charles_drawing.png 759'\n src=\"https:\/\/www.scrapingbee.com\/blog\/charles-proxy\/charles_drawing.png\"\n width=\"759\" height=\"419\"\n alt='Charles proxy drawing'>\n <\/noscript>\n<\/div>\n\n<br>\n\n<p>Charles is like the Chrome dev tools on steroids. 
It has many incredible features:<\/p>"},{"title":"Getting Started with MechanicalSoup","link":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-mechanicalsoup\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/getting-started-with-mechanicalsoup\/","description":"<p>Python is a popular choice for web-scraping projects, owing to how easy the language makes scripting and its wide range of scraping libraries and frameworks. <a href=\"https:\/\/mechanicalsoup.readthedocs.io\/\" target=\"_blank\" >MechanicalSoup<\/a> is one such library that can help you set up web scraping in Python quite easily.<\/p>\n<p>This Python browser automation library allows you to simulate user actions in a browser, such as the following:<\/p>\n<ul>\n<li>Filling out forms<\/li>\n<li>Submitting data<\/li>\n<li>Clicking buttons<\/li>\n<li>Navigating through pages<\/li>\n<\/ul>\n<p>One of the key features of MechanicalSoup is that its stateful browser retains and tracks state between requests. This helps simplify browser automation scripts in complex use cases, such as handling forms and dynamic content. MechanicalSoup also comes prebundled with <a href=\"https:\/\/pypi.org\/project\/beautifulsoup4\/\" target=\"_blank\" >Beautiful Soup<\/a>, a popular Python library for parsing and manipulating web page content. Using MechanicalSoup and Beautiful Soup, you can write complex scraping scripts easily.<\/p>"},{"title":"How to extract data from a website? Ultimate guide to pull data from any website","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-extract-data-from-a-website\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-extract-data-from-a-website\/","description":"<p>The web is becoming an incredible data source. More and more data is available online, from user-generated content on social media and forums, E-commerce websites, real-estate websites or media outlets... 
Many businesses are built on this web data or depend heavily on it.<\/p>\n<p>Manually extracting data from a website and copy\/pasting it to a spreadsheet is an error-prone and time-consuming process. If you need to scrape millions of pages, it's not possible to do it manually, so you should automate it.<\/p>"},{"title":"How to Master Web Scraping Pagination: Hidden Techniques Experts Use","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-pagination\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-pagination\/","description":"<p>Mastering web scraping pagination is the difference between collecting just a handful of records and extracting complete datasets that drive real business value. Whether you\u2019re dealing with e-commerce product listings, job boards, or news sites, pagination presents unique challenges that separate amateur scrapers from professional data extraction systems.<\/p>\n<p>In this guide, you\u2019ll discover the hidden techniques that experts use to handle different types of pagination in web scraping projects, from static next buttons to infinite scroll implementations. I\u2019ll show you working Python examples and explain how ScrapingBee simplifies pagination scraping for dynamic sites that would otherwise require complex browser automation.<\/p>"},{"title":"Mapping the Funniest US States on Reddit using AI","link":"https:\/\/www.scrapingbee.com\/blog\/funniest-us-states-on-reddit\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/funniest-us-states-on-reddit\/","description":"<p>Reddit is a unique social media platform that works on upvotes rather than likes and followers. Needless to say, jokes are very important contributors to Reddit's upvote economy. 
To add to this, most users use the platform anonymously and miss no opportunity to crack a dad joke whenever they can.<\/p>\n<p>In a previous article, we analyzed and ranked country <a href=\"https:\/\/www.scrapingbee.com\/blog\/global-subreddit-humor-analysis-with-ai\/\" target=\"_blank\" >subreddits for humorous comments<\/a>. The USA was one of the top countries in terms of the percentage of attempted jokes. In this article, we drill down further and repeat the same analysis across the states of the USA. For each state, we obtained all the comments from the top 50 threads of this year. Then we ran the top-level comments through AI (Mistral 7B) to classify them as &quot;joke&quot; or &quot;not joke&quot;, with the thread topic in context.<\/p>"},{"title":"Mastering the Python curl request: A practical guide for developers","link":"https:\/\/www.scrapingbee.com\/blog\/python-curl\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-curl\/","description":"<p>Mastering the Python curl request is one of the fastest ways to turn API docs or browser network calls into working code. 
Instead of rewriting everything by hand, you can drop curl straight into Python, or translate it into Requests or PycURL for cleaner, long-term projects.<\/p>\n<p>In this guide, we'll show practical ways to run curl in Python, when to use each method (subprocess, PycURL, Requests), and how ScrapingBee improves reliability with proxies and optional JavaScript rendering, so you can ship scrapers that actually work.<\/p>"},{"title":"Scraping with Nodriver: Step by Step Tutorial with Examples","link":"https:\/\/www.scrapingbee.com\/blog\/nodriver-tutorial\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/nodriver-tutorial\/","description":"<p>If you've used Python <a href=\"https:\/\/www.scrapingbee.com\/blog\/selenium-python\/\" target=\"_blank\" >Selenium for web scraping<\/a>, you're familiar with its ability to extract data from websites. However, the default webdriver (ChromeDriver) often struggles to bypass anti-bot mechanisms. As a solution, you can use <a href=\"https:\/\/www.scrapingbee.com\/blog\/undetected-chromedriver-python-tutorial-avoiding-bot-detection\/\" target=\"_blank\" >undetected_chromedriver<\/a> to bypass some of today's most sophisticated anti-bot systems, including those from Cloudflare and Akamai.<\/p>\n<p>However, it's important to note that undetected_chromedriver has limitations against advanced anti-bot systems. This is where <strong>Nodriver<\/strong>, its official successor, comes in.<\/p>"},{"title":"Top 5 Best News API solutions in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/top-best-news-apis-for-you\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/top-best-news-apis-for-you\/","description":"<p>These days, having access to fresh and historical news data from diverse news sources is crucial for developers, businesses, and media professionals alike. 
Whether you are building a real-time news dashboard, monitoring media trends, or conducting research, choosing the right News API can make all the difference.<\/p>\n<p>In this article, I will help you quickly identify the best news APIs available in 2026. Among the options, ScrapingBee\u2019s <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/news-results-api\/\" target=\"_blank\" >News Results API<\/a> stands out as a top choice for its reliability, free plan availability, and developer-friendly approach. But if you want to weigh all the options, I will compare their key features and decide which one best fits your use case.<\/p>"},{"title":"Web Scraping vs API: What\u2019s the Difference?","link":"https:\/\/www.scrapingbee.com\/blog\/api-vs-web-scraping\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/api-vs-web-scraping\/","description":"<p>Ever found yourself staring at a website, desperately wanting to extract all that data, but wondering whether you should build a scraper or get an API? The web scraping vs API debate is one of the most common questions in data extraction. Honestly, it\u2019s a fair question that deserves a proper answer.<\/p>\n<p>Both approaches have their place in the modern data landscape, but understanding the difference between web scraping and API methods can save you time, money, and countless headaches. In this article I'll help find the best approach for you.<\/p>"},{"title":"What is a characteristic of the REST API? Full guide for beginners","link":"https:\/\/www.scrapingbee.com\/blog\/six-characteristics-of-rest-api\/","pubDate":"Thu, 08 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/six-characteristics-of-rest-api\/","description":"<p>If you've ever looked up <strong>what is a characteristic of the REST API<\/strong>, you've probably seen answers that are either too shallow or way too academic. Let's keep it simple.<\/p>\n<p><em>REST<\/em> came from Dr. 
Roy Fielding's 2000 dissertation. It's been around for decades and still powers a huge part of the web. The funny part is that many developers use REST all the time but can't quite list the core characteristics that make a REST API actually RESTful. It's a common gap.<\/p>"},{"title":"5 Best Article Scrapers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-article-scraper\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-article-scraper\/","description":"<p>Looking for the best article scraper in 2026? You've come to the right place. I've personally tested dozens of web scrapers, both free and paid options. Here's what I realized: web scraping is more relevant than ever before.<\/p>\n<p>In today\u2019s fast-paced digital world, the ability to extract data efficiently from web pages is crucial for businesses, researchers, and developers alike. Whether you want to scrape data from news websites, job postings, or multiple pages of complex websites, having the right article scraper can save you time and effort.<\/p>"},{"title":"Best Language for Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/best-language-for-web-scraping\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-language-for-web-scraping\/","description":"<p>Ever stared at a data-rich website and wondered how to pull it out cleanly and fast? To accomplish this mission, you need to pick the best language for web scraping. But the process can feel a bit confusing. Python\u2019s hype, JavaScript\u2019s ubiquity, and a dozen other languages make it hard to pick the right one.<\/p>\n<p>After years of building scrapers, I\u2019ve watched teams burn time by matching the wrong tool to the job. Today\u2019s web is trickier: JavaScript-heavy UIs, dynamic rendering, rate limits, and sophisticated anti-bot systems. 
Your stack needs to handle headless browsers, async flows, and resilience without turning maintenance into a grind.<\/p>"},{"title":"Best Social Media Scraping Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/top-social-media-scraper-apis\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/top-social-media-scraper-apis\/","description":"<p>In 2026, social media data has moved far beyond simple &quot;vanity metrics.&quot; It is now the primary fuel for high-performance AI models, real-time market sentiment analysis, and predictive brand monitoring. As platforms implement increasingly sophisticated anti-bot measures, the need for robust social media scrapers has never been higher. Whether you are a developer building a custom analytics pipeline or a researcher tracking global trends, choosing the right social media scraping tools is the difference between getting blocked and getting insights.<\/p>"},{"title":"How to bypass error 1005 'access denied, you have been banned' when scraping","link":"https:\/\/www.scrapingbee.com\/blog\/bypass-error-1005-access-denied-you-have-been-banned\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/bypass-error-1005-access-denied-you-have-been-banned\/","description":"<p>When scraping websites protected by Cloudflare, encountering Error 1005 \u2014 &quot;Access Denied, You Have Been Banned&quot; \u2014 is a common challenge. This error signifies that your IP address has been blocked, usually due to Cloudflare's security mechanisms that aim to prevent scraping and malicious activities. 
However, there are various techniques you can use to bypass this error and continue your scraping operations.<\/p>\n<p>In this guide, we'll focus on specific strategies and tools to bypass Cloudflare Error 1005, helping you to scrape websites efficiently without getting blocked.<\/p>"},{"title":"How to Easily Scrape Shopify Stores With AI","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-easily-scrape-shopify-stores-with-ai\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-easily-scrape-shopify-stores-with-ai\/","description":"<p>Scraping Shopify stores can be a challenging task because each store uses a unique theme and layout, making traditional scrapers with rigid selectors unreliable. That\u2019s why we'll be showing you how to leverage an <a href=\"https:\/\/www.scrapingbee.com\/features\/ai-web-scraping-api\/\" target=\"_blank\" >AI-powered web scraper<\/a> that easily adapts to any page structure, effortlessly extracting Shopify e-commerce data no matter how the store is designed.<\/p>\n<p>In this tutorial, we\u2019ll be using our Python <a href=\"https:\/\/www.scrapingbee.com\/documentation\/#getting-started\" target=\"_blank\" >Scrapingbee client<\/a> to scrape one of the most successful Shopify stores on the planet; <a href=\"http:\/\/gymshark.com\" target=\"_blank\" >gymshark.com<\/a>, to obtain all the product page URLs and the corresponding product details from each product page. We\u2019ve previously written blogs about scraping product listing pages <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/#scraping-a-single-product\" target=\"_blank\" >using Scrapy<\/a> or <a href=\"https:\/\/www.scrapingbee.com\/blog\/scraping-e-commerce-product-data\/\" target=\"_blank\" >using schema.org metadata<\/a>. 
We\u2019ll also be using <a href=\"https:\/\/www.scrapingbee.com\/documentation\/#ai_query\" target=\"_blank\" >our AI query feature<\/a> to extract structured data from each product page without parsing any HTML. Please note that we\u2019re using Python only for demonstration purposes; this technique and our API will work with any programming language.<\/p>"},{"title":"How to Scrape Amazon Prices with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-prices\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-amazon-prices\/","description":"<p>Learning how to scrape Amazon prices is a great way to access real-time product data for market research, competitor analysis, and price tracking. However, as the biggest retailer in the world, Amazon imposes many scraping restrictions to keep automated connections away from its sensitive price intelligence.<\/p>\n<p>Amazon's pages use dynamic JavaScript elements, aggressive anti-bot systems, and geo-based restrictions that make it difficult to extract price data. This tutorial will show you how to extract Amazon product prices with Python and our powerful API, because not every web scraper can handle data from Amazon. And if you also need to collect customer feedback alongside pricing, our <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/amazon-review-api\/\" target=\"_blank\" >Amazon Review Scraper API<\/a> provides an easy way to extract review data at scale.<\/p>"},{"title":"Stop Getting Blocked: Master Web Scraping Headers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-headers\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-headers\/","description":"<p>Web scraping headers are the key to successful data extraction. 
In my experience, mastering these HTTP headers is often what separates successful scraping projects from those that get blocked after a few requests.<\/p>\n<p>In this guide, I will walk you through using optimized headers in your Python web scraping projects to reduce blocks and make your requests look like genuine browser traffic. It's a skill that\u2019s more crucial than ever in 2026\u2019s increasingly sophisticated web environment. As you\u2019ll see, the most common HTTP headers aren\u2019t just \u201cnice to have\u201d; they\u2019re the foundation of reliable data collection from web pages and HTTPS websites. Let's dive right in.<\/p>"},{"title":"Using the Cheerio NPM Package for Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/","description":"<p>Have you ever manually copied data from a table on a website into an Excel spreadsheet so you could analyze it? If you have, then you know how tedious a process it can be. Fortunately, there's a tool that allows you to easily scrape data from web pages using Node.js. You can use <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> to collect data from just about any HTML. You can pull data out of HTML strings or crawl a website to collect product data.<\/p>"},{"title":"Web Scraping vs Web Crawling: Ultimate Guide","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-vs-crawling\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-vs-crawling\/","description":"<p>There are many ways that businesses and individuals can gather information about their customers, and web crawling and web scraping are some of the most common approaches. 
You'll hear these terms used interchangeably, but they are <em>not<\/em> the same thing.<\/p>\n<p>In this article, we'll go over the differences between web scraping and web crawling and how they relate to each other. We will also cover some use cases for both approaches and tools you can use.<\/p>"},{"title":"Web Scraping with Ruby","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-ruby\/","pubDate":"Wed, 07 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-ruby\/","description":"<p>In this tutorial we're diving into the world of web scraping with Ruby. We'll explore powerful Gems like Faraday for HTTP requests, Nokogiri for parsing HTML, and browser automation with Selenium and Capybara. Along the way, we'll scrape real websites with some example scripts, handle dynamic Javascript content and even run headless browsers in parallel.<\/p>\n<p>By the end of this tutorial, you'll be equipped with the knowledge and practical patterns needed to start scraping data from websites \u2014 whether for fun, research, or building something cool.<\/p>"},{"title":"5 Best eBay Price Trackers in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/ebay-price-tracker\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ebay-price-tracker\/","description":"<p>In the fast-moving eBay e-commerce environment, staying ahead often means tracking pricing data. This process allows you to find the\u00a0best deals, monitor\u00a0competitive pricing, or optimize your\u00a0sales performance.<\/p>\n<p>But here's the catch: you need a reliable eBay price tracker to get ahead. Don't know what that is? Don't worry, in this article, I'll explain everything you need to know about the best eBay price trackers. Spoiler alert: after running some tests, I realized that ScrapingBee is the best tool available in 2026. This API-driven solution leads the pack with complete, accurate, and scalable price tracking. 
Want to know what the other options are? Keep reading!<\/p>"},{"title":"An Automatic Bill Downloader in Java","link":"https:\/\/www.scrapingbee.com\/blog\/an-automatic-bill-downloader-in-java\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/an-automatic-bill-downloader-in-java\/","description":"<p>In this article, I am going to show how to download bills (or any other file) from a website with HtmlUnit.<\/p>\n<p>I suggest you read these articles first: an introduction to <a href=\"https:\/\/www.scrapingbee.com\/blog\/introduction-to-web-scraping-with-java\/\" >how to do web scraping with Java<\/a> and <a href=\"https:\/\/www.scrapingbee.com\/blog\/how-to-log-in-to-almost-any-websites\/\" >Autologin<\/a>.<\/p>\n<p>Since I am hosting this blog on <a href=\"https:\/\/m.do.co\/c\/0e940b26444e\" target=\"_blank\" >Digital Ocean<\/a> ($10 in credit if you sign up via this link), I will show you how to write a bot to automatically download every bill you have.<\/p>"},{"title":"Best E-Commerce Web Scraping Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-e-commerce-product-scrapers-for-enterprise\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-e-commerce-product-scrapers-for-enterprise\/","description":"<p>In the hyper-competitive landscape, data is the primary engine of e-commerce growth. Real-time access to product listings, competitor pricing, and inventory levels has shifted from a &quot;nice-to-have&quot; to a critical operational requirement.<\/p>\n<p>For brands and retailers, the ability to monitor thousands of SKUs across multiple global marketplaces is only possible through high-scale <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping APIs<\/a>. 
These tools allow businesses to bypass the manual labor of data collection by automating the extraction process, turning messy HTML into structured, actionable insights.<\/p>"},{"title":"How To Build a Real Estate Web Scraper","link":"https:\/\/www.scrapingbee.com\/blog\/real-estate-web-scraping\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/real-estate-web-scraping\/","description":"<p>The real estate market moves fast. Property listings appear and disappear within hours, prices fluctuate based on market conditions, and tracking availability across multiple platforms manually becomes an impossible task. For developers, investors, and real estate agents who need to stay ahead of market trends, building a real estate web scraper offers the solution to automate data collection from sites like Redfin, Idealista, or <a href=\"http:\/\/Apartments.com\" target=\"_blank\" >Apartments.com<\/a>. Instead of spending hours on manual data entry, you can focus on analyzing insights and making informed decisions based on fresh, accurate market data.<\/p>"},{"title":"How to Scrape Financial Statements with Python: A Practical Guide for Beginners","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-for-financial-statements-with-python\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-for-financial-statements-with-python\/","description":"<p>If you're an investor, analyst, or developer working in the finance industry, you should know how to scrape financial statements with Python. It's a great way to monitor the current stock price, keep the pulse on market trends, and make informed financial decisions. 
After all, financial markets are prone to fluctuations, so you simply can't waste time gathering financial data manually.<\/p>\n<p>In this practical guide, we\u2019ll walk you through everything you need to know about web scraping for financial statements with Python, from basic setup to advanced automation techniques. We\u2019ll cover the essential tools, legal considerations, and step-by-step implementation that transforms raw SEC filings into structured, analyzable data.<\/p>"},{"title":"How to Scrape Yahoo: Step-by-Step Tutorial","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-yahoo\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-yahoo\/","description":"<p>Scraping Yahoo search results and finance data is a powerful way to collect real-time insights on market trends, stock performance, and company profiles. With ScrapingBee, you can extract this information easily \u2014 even from JavaScript-heavy pages that typically block traditional scrapers.<\/p>\n<p>Yahoo\u2019s dynamic content and anti-bot protections make it difficult to scrape using basic tools. But ScrapingBee handles these challenges out of the box. Our API automatically renders JavaScript, rotates proxies, and bypasses bot detection to deliver clean, structured data from both Yahoo Search and Yahoo Finance.<\/p>"},{"title":"How to use undetected_chromedriver (plus working alternatives)","link":"https:\/\/www.scrapingbee.com\/blog\/undetected-chromedriver-python-tutorial-avoiding-bot-detection\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/undetected-chromedriver-python-tutorial-avoiding-bot-detection\/","description":"<p>If you've used <a href=\"https:\/\/www.scrapingbee.com\/blog\/selenium-python\/\" target=\"_blank\" >Python Selenium for web scraping<\/a>, you're familiar with its ability to extract data from websites. 
However, the default webdriver (ChromeDriver) often struggles to bypass the anti-bot mechanisms websites use to detect and block scrapers. With undetected_chromedriver, you can bypass some of today's most sophisticated anti-bot mechanisms, including those from Cloudflare, Akamai, and DataDome.<\/p>\n<p>In this blog post, we\u2019ll guide you on how to make your Selenium web scraper less detectable using undetected_chromedriver. We\u2019ll cover its usage with proxies and user agents to enhance its effectiveness and troubleshoot common errors. Furthermore, we\u2019ll discuss the limitations of undetected_chromedriver and suggest better alternatives.<\/p>"},{"title":"Scrapy Playwright Tutorial: How to Scrape Dynamic Websites","link":"https:\/\/www.scrapingbee.com\/blog\/scrapy-playwright-tutorial\/","pubDate":"Tue, 06 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scrapy-playwright-tutorial\/","description":"<p>Playwright for Scrapy enables you to scrape JavaScript-heavy dynamic websites at scale, with advanced web scraping features out of the box.<\/p>\n<p>In this tutorial, we\u2019ll show you the ins and outs of scraping using this popular browser automation library that was originally developed by Microsoft, combining it with Scrapy to extract the content you need with ease.<\/p>\n<p>We\u2019ll cover jobs to be done, such as setting up your <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/\" target=\"_blank\" >Python<\/a> environment, inputting and submitting form data, all the way through to dealing with infinite scroll and scraping multiple pages.<\/p>"},{"title":"5 Best AI Web Scraping Tools in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-ai-web-scrapers\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-ai-web-scrapers\/","description":"<p>In an era where AI dominates almost everything, AI-powered web scrapers are hardly surprising. 
These cutting-edge tools excel at navigating complex websites, gathering structured data for SEO analysis, and monitoring multiple URLs simultaneously. By automating the data collection process, AI scrapers overcome traditional challenges, making web data extraction more efficient and accessible than ever before.<\/p>\n<p>A perfect example of such tools is ScrapingBee, which has adopted this technological advancement. As a result, it's now a reliable, innovative solution that simplifies web scraping tasks through its intelligent AI features. What sets this scraping tool apart? It has to be its user-friendly API and versatile no-code platform options, democratizing web scraping for a diverse user base.<\/p>"},{"title":"A Guide To Web Scraping For Data Journalism","link":"https:\/\/www.scrapingbee.com\/blog\/a-guide-to-web-scraping-for-data-journalism\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/a-guide-to-web-scraping-for-data-journalism\/","description":"<p>Web scraping may not sound much like a traditional journalistic practice but, in fact, it is a valuable tool that can allow journalists to turn almost any website into a powerful source of data from which they can build and illustrate their stories. Demand for these kinds of skills is on the increase, and this guide will explain some of the different techniques that can be used to gather data through web scraping and how it can be used to fuel incisive data journalism.<\/p>"},{"title":"Best Shopify Web Scraping Tools for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-shopify-web-scraping-tools\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-shopify-web-scraping-tools\/","description":"<p>Shopify continues to dominate the market, powering millions of stores from boutique artisans to global giants. 
For retailers, brands, and market analysts, having a pulse on this ecosystem is no longer optional; it's a requirement for survival. Whether you are monitoring a competitor's flash sales, tracking inventory shifts, or performing large-scale market research, you need structured, real-time data.<\/p>\n<p>But here's the challenge: Shopify stores have become increasingly sophisticated in their anti-bot defenses. Simple scripts that worked years ago now face instant IP bans, CAPTCHAs, and complex JavaScript rendering hurdles. This has led to a surge in specialized Shopify web scraping tools designed to bypass these barriers.<\/p>"},{"title":"Best User Agent List for Scraping & How to Rotate Them Effectively","link":"https:\/\/www.scrapingbee.com\/blog\/list-of-user-agents-for-scraping\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/list-of-user-agents-for-scraping\/","description":"<p>User agents are the browser identifiers that ride along with every HTTP request. In scraping, rotating realistic user agents helps reduce soft-blocks and CAPTCHA while improving reliability across diverse targets.<\/p>\n<p>In this guide, I'll walk you through an updated\u00a02026 list of user agents for web scraping, show rotation patterns that actually work, and explain how a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Scraping API<\/a>\u00a0like\u00a0ScrapingBee\u00a0automates the whole job. By the end, you\u2019ll know the best user agents for web scraping, how to manage them manually, and when to let an API handle them for you.<\/p>"},{"title":"How to scrape data from idealista","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-idealista\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-idealista\/","description":"<p>Idealista is a very famous listing website that lists millions of properties for sale and\/or rent. 
It is available in Spain, Portugal, and Italy. Such property listing websites are among the best ways to do market research, analyze market trends, and find a suitable place to buy. In this article, you will learn how to scrape data from idealista. The website uses anti-scraping techniques, and you will learn how to circumvent them as well.<\/p>"},{"title":"How to Scrape Yellow Pages with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-yellow-pages\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-yellow-pages\/","description":"<p>Learning how to scrape Yellow Pages can unlock access to a rich database of business listings. With minimal technical knowledge, our approach to scraping HTML content extracts data that you can use for lead generation, market research, or local SEO.<\/p>\n<p>Like most online platforms rich with useful data, Yellow Pages presents JavaScript-rendered content and anti-scraping measures, which often stop traditional scraping efforts. Our HTML API is built to export data while automatically handling restrictions by loading dynamic content and implementing smart proxy rotation to ensure consistent access with minimal coding skills.<\/p>"},{"title":"Top Web Scraping Challenges in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-challenges\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-challenges\/","description":"<p>Top web scraping challenges have evolved dramatically from the simple days of parsing static HTML. I\u2019ve been building scrapers for years, and let me tell you \u2013 even simple tasks have turned into a complex chess match between developers and websites. 
From sophisticated CAPTCHAs to JavaScript, the obstacles continue to multiply.<\/p>\n<p>In this article, I\u2019ll break down the major hurdles you\u2019ll face when scraping data in 2026 and show you how ScrapingBee can help you jump over these barriers without breaking a sweat. Whether you\u2019re dealing with IP blocks, dynamic content, or legal concerns, there\u2019s a solution that doesn\u2019t involve spending weeks building complex infrastructure.<\/p>"},{"title":"What are datacenter proxies?","link":"https:\/\/www.scrapingbee.com\/blog\/datacenter-proxies\/","pubDate":"Mon, 05 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/datacenter-proxies\/","description":"<p>A datacenter proxy is a proxy service that offers quick internet access and a better user experience. As they\u2019re not affiliated with an ISP, they hide the user\u2019s real IP address, which means the website won\u2019t be able to identify it, enabling the user to access the website anonymously. That\u2019s beneficial in a number of scenarios, like accessing all the information on a website hosted in a country whose servers may hide certain information, getting around a server block, or when you need high bandwidth without network lag.<\/p>"},{"title":"Best Bing Search Scraper Tools & APIs for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-bing-search-api-alternatives\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-bing-search-api-alternatives\/","description":"<p>The landscape of web data extraction has shifted significantly over the last year. As of August 11, 2025, Microsoft officially retired the standalone Bing Search API, leaving many development teams searching for reliable ways to access search engine result page (SERP) data. 
In 2026, the standard has moved away from restrictive official endpoints toward specialized\u00a0Bing search APIs\u00a0and scraping tools.<\/p>\n<p>In this article, I'll explore what changed, how modern Bing scrapers function, and most importantly, how to select the\u00a0best Bing search APIs\u00a0for your specific tech stack. Whether you are feeding an AI-driven RAG (Retrieval-Augmented Generation) pipeline or building a high-scale SEO monitoring tool, you will learn the trade-offs between various providers. I will take a detailed look at several industry leaders, with a particular focus on ScrapingBee, a developer-friendly\u00a0<a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a> that serves as a powerful alternative to the legacy official system.<\/p>"},{"title":"How to scrape data from realtor.com","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-realtor\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-realtor\/","description":"<p>Realtor is the second-biggest real estate listing website in the US and contains millions of properties. You will be missing out on saving money if you don't do market research on Realtor before your next property purchase. To make use of the treasure trove of data available on Realtor, it is necessary to scrape it. This tutorial will show you exactly how you can do that while bypassing the bot detection used by realtor.com.<\/p>"},{"title":"How to Scrape Google Finance Using Python and ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-finance\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-finance\/","description":"<p>Learning how to scrape Google Finance gives you access to real-time stock prices, company performance metrics, and other financial indicators. 
However, scraping stock information isn\u2019t always simple, especially on platforms that receive so much traffic. Other obstacles include dynamic JavaScript elements, frequent layout changes, and IP restrictions, which make it difficult for automated scrapers to extract consistent data. If you also need to scrape broader Google SERP data, our <a href=\"https:\/\/www.scrapingbee.com\/features\/google\/\" target=\"_blank\" >Google Search Results API<\/a> provides the same reliability for search results extraction as it does for financial pages.<\/p>"},{"title":"How to use a Proxy with Ruby and Faraday","link":"https:\/\/www.scrapingbee.com\/blog\/ruby-faraday-proxy\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/ruby-faraday-proxy\/","description":"<h2 id=\"why-use-faraday\">Why use Faraday?<\/h2>\n<p><a href=\"https:\/\/lostisland.github.io\/faraday\/\" target=\"_blank\" >Faraday<\/a> is a very famous and mature HTTP client library for Ruby. It uses an adapter-based approach, which means you can swap out the underlying HTTP request library without modifying the overarching Faraday code. By default, Faraday uses the <a href=\"https:\/\/ruby-doc.org\/stdlib-3.1.2\/libdoc\/net\/http\/rdoc\/Net\/HTTP.html\" target=\"_blank\" ><code>Net::HTTP<\/code><\/a> adapter, but you can switch it out with <a href=\"https:\/\/github.com\/geemus\/excon\" target=\"_blank\" ><code>Excon<\/code><\/a>, <a href=\"https:\/\/github.com\/typhoeus\/typhoeus\" target=\"_blank\" ><code>Typhoeus<\/code><\/a>, <a href=\"http:\/\/toland.github.io\/patron\/\" target=\"_blank\" ><code>Patron<\/code><\/a>, or <a href=\"https:\/\/github.com\/igrigorik\/em-http-request\" target=\"_blank\" ><code>EventMachine<\/code><\/a> without modifying more than a line or two of configuration code. 
This makes Faraday extremely flexible and relatively future-proof.<\/p>"},{"title":"Mastering Web Scraping Machine Learning: Techniques and Best Practices","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-machine-learning\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-machine-learning\/","description":"<p>Machine learning models are only as good as the data they\u2019re trained on, and that\u2019s where things get interesting. While public datasets serve as a starting point, they often lack the granularity, customization, and real-time updates that modern AI applications demand. This is where web scraping for machine learning becomes your secret weapon.<\/p>\n<p>The intersection of web scraping and machine learning opens up endless possibilities for data scientists and developers. Instead of being limited to static datasets, you can collect fresh, domain-specific information directly from the web, whether you\u2019re building sentiment models, price predictors, or recommendation systems. Web scraping fuels intelligent applications.<\/p>"},{"title":"No-code competitor monitoring with ScrapingBee and Integromat","link":"https:\/\/www.scrapingbee.com\/blog\/no-code-competitor-monitoring\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/no-code-competitor-monitoring\/","description":"<p>Competitor analysis is a vital task in big or small companies. It allows you to confirm market needs by looking at what competitors are offering. At the same time, it allows you to build better products and impress potential customers by fixing what is wrong with the current options.<\/p>\n<p>Of course, a company should focus on its own products. But you can\u2019t just ignore what is happening out there. 
You can find amazing insights with data gathered from competitors, suppliers, and customers.<\/p>"},{"title":"Web Scraping Best Practices in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-best-practices\/","pubDate":"Sun, 04 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-best-practices\/","description":"<p><a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Web scraping<\/a> is the automated process of retrieving data from websites and transforming raw HTML or other web data into structured formats for analysis or use. Whether you are working on a small web scraping project or managing large-scale data collection activities, choosing the right web scraping tool and following best practices is essential.<\/p>\n<p>In this article, I'll walk you through the best practices for web scraping. This guide covers everything from choosing the right tools and handling dynamic content to respecting website owners and legal considerations. I also explore how to avoid common pitfalls, such as making too many requests, getting detected as bot traffic, and slow performance. By the end, you will understand how to build successful web scrapers that reliably and ethically provide structured data.<\/p>"},{"title":"5 Best Price Monitoring Tools in 2026","link":"https:\/\/www.scrapingbee.com\/blog\/price-monitoring-tool\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/price-monitoring-tool\/","description":"<p>Imagine having a secret weapon that gives you X-ray vision into your competitors\u2019 pricing strategies. That\u2019s exactly what price monitoring tools do for businesses like yours.<\/p>\n<p>These nifty solutions are your eyes and ears in the market, helping you stay one step ahead of the competition. 
They\u2019re like a team of pricing experts working 24\/7, giving you real-time insights into price changes, stock levels, and market trends.<\/p>\n<p>With these price trackers in your arsenal, you can make smart, data-driven decisions that boost your profits and keep customers coming back. And if you\u2019re looking for a tool that does it all, ScrapingBee is the Swiss Army knife of price monitoring. It seamlessly integrates with your existing systems and automates the tedious stuff, so you can focus on growing your business. But if you want to look at all the options first, keep reading!<\/p>"},{"title":"Best YouTube Scrapers for 2026","link":"https:\/\/www.scrapingbee.com\/blog\/best-youtube-scraper\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-youtube-scraper\/","description":"<p>YouTube has solidified its position not just as a video hosting site, but as the world's most critical repository of human knowledge, cultural trends, and consumer sentiment. Whether you are training a Large Language Model (LLM), monitoring competitors, or analyzing the &quot;creator economy,&quot; the data found on YouTube is gold.<\/p>\n<p>However, the &quot;gold&quot; is locked behind some of the most sophisticated anti-bot systems on the planet. Gone are the days when a simple Python requests script could fetch a page. Today, you need to navigate headless browsers, rotating residential proxies, and dynamic JavaScript rendering that can change its DOM structure in the blink of an eye.<\/p>"},{"title":"Effortless Guide to Scraping JavaScript Rendered Web Pages with Python","link":"https:\/\/www.scrapingbee.com\/blog\/scraping-javascript-rendered-web-pages\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/scraping-javascript-rendered-web-pages\/","description":"<p>Let\u2019s talk about one of the trickiest challenges in the Python web scraping world: scraping JavaScript-rendered web pages. 
They\u2019re nothing like those ancient static HTML pages. This modern web twist ensures that the content is displayed dynamically, long after the initial page has loaded. This means the data you want might not be present in the raw HTML returned by a simple HTTP request.<\/p>\n<p>But don\u2019t worry, this is where dynamic content scraping comes into play. We need tools that can roll up their sleeves, execute JavaScript, and patiently wait for the page to fully render before we grab the data.<\/p>"},{"title":"Haskell Web Scraping","link":"https:\/\/www.scrapingbee.com\/blog\/haskell-web-scraping\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/haskell-web-scraping\/","description":"<p>Even though <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> is commonly done with languages like Python and JavaScript, a statically typed functional programming language like Haskell can provide extra benefits. Types make sure that your scripts do what you want them to do and that the data scraped conforms to your requirements.<\/p>\n<p>In this article, you'll learn how to do web scraping in Haskell with libraries such as <a href=\"https:\/\/hackage.haskell.org\/package\/scalpel\" target=\"_blank\" >Scalpel<\/a> and <a href=\"https:\/\/hackage.haskell.org\/package\/webdriver\" target=\"_blank\" >webdriver<\/a>.<\/p>\n<h2 id=\"basic-scraping\">Basic Scraping<\/h2>\n<p>Scraping a static website can be done with any language that has libraries for an HTTP client and HTML parsing. Haskell is no different. It even has a dedicated high-level scraping library called <a href=\"https:\/\/hackage.haskell.org\/package\/scalpel\" target=\"_blank\" >Scalpel<\/a>, which puts it above similar languages like Rust.<\/p>"},{"title":"How to Scrape Pinterest: Full Tutorial with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-pinterest\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-pinterest\/","description":"<p>In this tutorial, I\u2019ll show you how to scrape Pinterest using ScrapingBee\u2019s API. Whether you want to scrape Pinterest data for trending images, individual pins, Pinterest profiles, or entire boards, this guide explains how to build a <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraper<\/a> that works.<\/p>\n<p>Scraping Pinterest can be tough. Its anti-bot protection often trips up typical web scrapers. That's why I prefer using ScrapingBee. With this tool, you won't need to run a headless browser or wait for page elements to load manually. 
You just plug in your API key, decide what data to collect, and extract Pinterest data with ease.<\/p>"},{"title":"How to Scrape With Camoufox to Bypass Antibot Technology","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-with-camoufox-to-bypass-antibot-technology\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-with-camoufox-to-bypass-antibot-technology\/","description":"<p>In a previous blog, <a href=\"https:\/\/www.scrapingbee.com\/blog\/creepjs-browser-fingerprinting\/\" target=\"_blank\" >we evaluated popular browser automation frameworks and patches developed for them to bypass CreepJS<\/a>, which is a browser fingerprinting tool that can detect headless browsers and stealth plugins. Of all the tools we tried, we found that <a href=\"https:\/\/camoufox.com\/\" target=\"_blank\" >Camoufox<\/a> scored the best, being indistinguishable from a real, human-operated browser. In this blog, we\u2019ll see what it is, how it works, and try using it for some <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> tasks.<\/p>"},{"title":"How to Set Up a Proxy Server with Apache","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-proxy-server-with-apache\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-set-up-a-proxy-server-with-apache\/","description":"<p>A proxy server is an intermediate server between a client and another server. The client sends the requests to the proxy server, which then passes them to the destination server. 
The destination server sends the response to the proxy server, which forwards it to the client.<\/p>\n<p>In the world of web scraping, using a proxy server is common for the following reasons:<\/p>\n<ul>\n<li><strong>Privacy:<\/strong> A proxy server hides the IP address of the scraper, providing a layer of privacy.<\/li>\n<li><strong>Avoiding IP bans:<\/strong> A proxy server can be used to circumvent IP bans. If the target website blocks the IP address of the proxy server, you can simply use a different proxy server.<\/li>\n<li><strong>Circumventing geoblocking:<\/strong> By connecting to a proxy server situated in a certain region, you can circumvent geoblocking. For instance, if the content you want is available only in the US, you can connect to a proxy server in the US and scrape as much as you want.<\/li>\n<\/ul>\n<p>In this article, you'll learn how to set up your own proxy server and use it to scrape websites. There are many ways to create a DIY proxy server, such as using <a href=\"https:\/\/httpd.apache.org\/\" target=\"_blank\" >Apache<\/a> or <a href=\"https:\/\/www.nginx.com\/\" target=\"_blank\" >Nginx<\/a> as proxy servers or using dedicated proxy tools like <a href=\"https:\/\/www.squid-cache.org\/\" target=\"_blank\" >Squid<\/a>. Here, you'll use Apache.<\/p>"},{"title":"How to set up Axios proxy: A step-by-step guide for Node.js","link":"https:\/\/www.scrapingbee.com\/blog\/nodejs-axios-proxy\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/nodejs-axios-proxy\/","description":"<p>If you've ever tried to send requests through a proxy in Node.js, chances are you've searched for <strong>how to set up an Axios proxy<\/strong>. 
Whether you're scraping the web, checking geo-restricted content, or just hiding your real IP, proxies are a common part of the toolkit.<\/p>\n<p>This guide walks through the essentials of using Axios with proxies:<\/p>\n<ul>\n<li>setting up a basic proxy,<\/li>\n<li>adding username\/password authentication,<\/li>\n<li>rotating proxies to avoid bans,<\/li>\n<li>working with SOCKS5,<\/li>\n<li>plus a few fixes for common errors.<\/li>\n<\/ul>\n<p>We'll also cover where a service like ScrapingBee can save you time if you don't want to manage proxies yourself.<\/p>"},{"title":"Is Web Scraping Legal? Key Insights and Guidelines You Need to Know","link":"https:\/\/www.scrapingbee.com\/blog\/is-web-scraping-legal\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/is-web-scraping-legal\/","description":"<p>Web scraping raises a lot of questions, but \u201cis web scraping legal\u201d is the one I hear the most. The legality of web scraping depends on three critical factors: what data you\u2019re collecting, how you\u2019re collecting it, and where you\u2019re operating. Think of it like driving a car: the act itself isn\u2019t illegal, but speeding, running red lights, or driving without a license can land you in serious trouble.<\/p>\n<p>This guide breaks down the complex world of web scraping legality across different jurisdictions. We\u2019ll explore key laws including privacy regulations, copyright protections, terms of service agreements, and anti-hacking statutes. 
You\u2019ll also discover ethical best practices that keep your data collection projects on the right side of the law.<\/p>"},{"title":"Search Engine Scraping Tutorial With ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/search-engine-scraping\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/search-engine-scraping\/","description":"<p>Search engine scraping has become an essential method for many businesses, digital marketers, and researchers to gather information. It is an excellent data extraction method when you need to analyze a large number of competitor websites. With web scraping, you can extract information on market trends and make informed decisions on pricing strategies using the data extracted from SERPs.<\/p>\n<p>In this tutorial, I\u2019ll show you how to perform search engine scraping safely and efficiently using ScrapingBee\u2019s web data extraction tool. You\u2019ll learn how to extract structured data from major search engines like Google and Bing without worrying about getting blocked, managing proxies, or dealing with CAPTCHAs. Let's dive in!<\/p>"},{"title":"Send stock prices update to Slack with Make and ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/no-code-stock-price-slack\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/no-code-stock-price-slack\/","description":"<p>It is unlikely that you will always be on top of your investments if you do not study your stock's price movements. 
The good news is that there are plenty of online resources available to you that allow you to monitor the financial health of a company whose shares you own, and to evaluate the stock's performance.<\/p>\n<p><a href=\"https:\/\/finance.yahoo.com\/\" target=\"_blank\" >Yahoo Finance<\/a> supplies an up-to-date news feed of financial news from some of the most trusted sources online, as well as offering a comprehensive look at stocks and funds.<\/p>"},{"title":"urllib3 vs. Requests: Which HTTP Client is Best for Python?","link":"https:\/\/www.scrapingbee.com\/blog\/urllib3-vs-requests\/","pubDate":"Sat, 03 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/urllib3-vs-requests\/","description":"<p>Python is one of the most widely used programming languages for web scraping, and a large chunk of any web scraping task is sending HTTP requests. urllib3 and Requests are the most commonly used packages for this purpose. Naturally, the next question is which one do you use?<\/p>\n<p>In this blog, we briefly introduce both packages, highlighting the differences between urllib3 and Requests, and discuss which one of them is best suited for different scenarios.<\/p>"},{"title":"A Web Scraper\u2019s Guide to Robots.txt","link":"https:\/\/www.scrapingbee.com\/blog\/robots-txt-web-scraping\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/robots-txt-web-scraping\/","description":"<p>Everything has rules, and the main rulebook for <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping<\/a> is the robots.txt file. Think of it as the foundation for how web crawlers and scrapers interact with websites. It guides you through the ethical intricacies of automated data extraction and specifies your responsibilities.<\/p>\n<p>In this guide, I\u2019ll walk you through everything you need to know about robots.txt. You'll learn about its purpose, syntax, and why compliance matters. 
I'll also explain how tools like ScrapingBee can help you stay on the right side of the web scraping game.<\/p>"},{"title":"How to Scrape Baidu: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-baidu\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-baidu\/","description":"<p>Want to learn how to scrape Baidu? As China\u2019s largest search engine, Baidu is an attractive target for web scraping because it is similar to Google in function but tailored for local regulations. For those wanting to tap into China's digital ecosystem, it is the best source of public data that displays relevant, location-based search trends, plus everything you need to conduct market research.<\/p>\n<p>This guide will teach you how to extract information from Baidu HTML code with the most beginner-friendly solution \u2013 our <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >Scraping API<\/a> and Python SDK. Dynamically loaded pages load structured data with the help of JavaScript, while rate-limiting and bot detection tools try to prevent automated data parsing on the platform.<\/p>"},{"title":"How to Scrape Glassdoor: Job Titles, Salaries, and Company Ratings","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-glassdoor\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-glassdoor\/","description":"<p>Trying to learn how to scrape Glassdoor data? You're in the right place. In this guide, I\u2019ll show you exactly how to extract job title descriptions, salaries, and company information using ScrapingBee\u2019s powerful API.<\/p>\n<p>You may already know this \u2013 Glassdoor is a goldmine of information, but scraping it can be a challenging task. The site utilizes dynamic content loading and sophisticated bot protection. As a result, the Glassdoor website is out of reach for an average web scraper. 
I\u2019ve spent countless hours battling these defenses with custom solutions, to no avail.<\/p>"},{"title":"Mastering AWS Web Scraping: Your Guide to Efficient Data Collection","link":"https:\/\/www.scrapingbee.com\/blog\/aws-web-scraping\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/aws-web-scraping\/","description":"<p>If you're diving into AWS web scraping, you probably already know it can get complicated fast. Managing proxies, handling CAPTCHAs, and rendering JavaScript-heavy pages on your own AWS infrastructure is no small feat.<\/p>\n<p>That's where ScrapingBee comes in: a reliable, efficient alternative to juggling complex, self-managed AWS scraping stacks.<\/p>\n<p>In this guide, I will teach you how to automate and scale your scraping projects using AWS Lambda web scraping combined with <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee\u2019s API<\/a>, making your life easier and your scrapers more robust.<\/p>"},{"title":"Python Web Scraping Stock Price With ScrapingBee","link":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-stock-price\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-stock-price\/","description":"<p>Python web scraping stock price techniques have become essential for traders and financial analysts who need near real-time market data analysis without paying thousands for premium API access.<\/p>\n<p>Becoming a pro at scraping stock market data allows you to build a personal investment dashboard for real-time stock data monitoring. It also helps you extract data for market research or develop a trading algorithm. 
Whatever you decide to use all the data for, having direct access to stock prices gives you an edge.<\/p>"},{"title":"The 6 Best mobile and 4G proxy providers for web scraping","link":"https:\/\/www.scrapingbee.com\/blog\/best-mobile-4g-proxy-provider-webscraping\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/best-mobile-4g-proxy-provider-webscraping\/","description":"<p>In this article, we will look at the six best mobile and 4G proxy providers for web scraping. We will not only look at the different features they offer but also perform a real-world test that includes the performance, speed, and success and error rate on some of the most popular websites: Instagram, Google, <a href=\"https:\/\/www.scrapingbee.com\/features\/amazon\/\" target=\"_blank\" >Amazon<\/a> and the top 1,000 Alexa rank (the list of the most visited domains in the world).<\/p>"},{"title":"Topic Analysis of US State Subreddits Using gpt-4o-mini","link":"https:\/\/www.scrapingbee.com\/blog\/topic-analysis-of-us-state-subreddits-using-ai\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/topic-analysis-of-us-state-subreddits-using-ai\/","description":"<p>Ever wondered what people across the United States are talking about online? Reddit, often dubbed &quot;the front page of the internet,&quot; offers a treasure trove of conversations, and each state has its own dedicated subreddit reflecting local interests. But what exactly are these state-based communities discussing the most?<\/p>\n<p>In total, we looked at 50,947 threads from the different states of the USA. We used the \u201cyear\u201d filter and the \u201ctop\u201d sort on Reddit. We first made a word cloud consisting of the commonly occurring words in the thread topics. Based on this preliminary analysis, we made 8 categories, including an \u201cothers\u201d category which we excluded from visualizations. 
We asked gpt-4o-mini to go over each topic and classify it into one of those categories. The 8 categories we used are as follows:<\/p>"},{"title":"Using cURL with a proxy","link":"https:\/\/www.scrapingbee.com\/blog\/curl-proxy\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/curl-proxy\/","description":"<p>If you've ever needed to route your requests through another server, using <strong>cURL with a proxy<\/strong> is one of the easiest ways to do it. A proxy sits between you and the destination, forwarding your requests and sending the responses back like a chill middle-man that doesn't ask questions.<\/p>\n<p>Sometimes you need this because a service shows different data depending on where you appear to be coming from: geo-restricted content, prices shown in the &quot;wrong&quot; currency, or straight-up blocked regions. Hitting the site directly won't cut it, but sending the same request through a proxy in the right location gets you exactly the data you need.<\/p>"},{"title":"What is a Headless Browser: Top 8 Options for 2026 [Pros vs. Cons]","link":"https:\/\/www.scrapingbee.com\/blog\/what-is-a-headless-browser-best-solutions-for-web-scraping-at-scale\/","pubDate":"Fri, 02 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/what-is-a-headless-browser-best-solutions-for-web-scraping-at-scale\/","description":"<p>Imagine a world where web browsers work tirelessly behind the scenes, navigating websites, filling forms, and capturing data without ever showing a single pixel on a screen. I welcome you to the realm of headless browsers - the unsung heroes of web automation and testing!<\/p>\n<p>In today's digital landscape, where web applications grow increasingly complex and data-driven decision-making reigns supreme, headless browsers have emerged as indispensable tools for developers, quality assurance (QA) engineers, and data enthusiasts alike. 
They're the Swiss Army knives of the web, capable of slicing through mundane tasks, carving out efficiencies, and sculpting robust testing environments.<\/p>"},{"title":"Best Screen Scraper Tools for Data Extraction","link":"https:\/\/www.scrapingbee.com\/blog\/screen-scrapers\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/screen-scrapers\/","description":"<p>Can't get your hands on an API, so you're looking for the best screen scraper tools for data extraction? Screen scrapers are a great option when you need to capture the information you see on a webpage. Think of it as taking a snapshot of the data your browser renders, but automated and at scale.<\/p>\n<p>Reliable screen scraper tools automate the tasks, handling everything from proxy rotation to JavaScript rendering so you don\u2019t have to sweat configuring the technical details.<\/p>"},{"title":"How to Scrape Bing with ScrapingBee: Step-by-Step Guide","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-bing\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-bing\/","description":"<p>Learning how to scrape Bing search results can feel like navigating a minefield of anti-bot measures and IP blocks. Microsoft's Bing search engine has sophisticated protection systems to detect traditional scraping attempts faster than you can debug your first request failure.<\/p>\n<p>That\u2019s exactly why I use ScrapingBee. Instead of wrestling with proxy rotations, JavaScript rendering, and constantly changing anti-bot methods, this web scraper handles all the complexity. 
It allows you to scrape search results data without any technical issues.<\/p>"},{"title":"How to Scrape Booking.com: Step-by-Step Tutorial","link":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-booking-com\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-booking-com\/","description":"<p>Booking.com is one of the biggest travel platforms, and a go-to choice for millions of users planning their trips and vacations. By accessing the platform using automated tools, we can collect hotel data, including names, ratings, prices, and locations, for research or comparison purposes.<\/p>\n<p>However, the platform\u2019s strict anti-bot systems make direct extractions nearly impossible. Fortunately, our API and Python tooling eliminate these challenges by providing automatic JavaScript execution, proxy rotation, and CAPTCHA-resistant browsing.<\/p>"},{"title":"How to Web Scrape Walmart.com","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-walmart\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-walmart\/","description":"<h2 id=\"introduction\">Introduction<\/h2>\n<p>In this article, you will learn how to <a href=\"https:\/\/www.scrapingbee.com\/features\/walmart\/\" target=\"_blank\" >scrape product information from Walmart<\/a>, the world's largest company by revenue (US $570 billion), and the world's largest private employer with 2.2 million employees.<\/p>\n<p>You might want to scrape the product pages on Walmart to monitor stock levels for a particular item or to track product prices. This can be useful when a product is sold out on the website and you want to make sure you are notified as soon as the stock is replenished.<br><br>In this article, you will learn:<\/p>"},{"title":"Puppeteer Stealth Tutorial; How to Set Up & Use (+ Working Alternatives)","link":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-stealth-tutorial-with-examples\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/puppeteer-stealth-tutorial-with-examples\/","description":"<p>Puppeteer is a robust headless browser library created mainly to automate user interactions. However, it can be easily detected and blocked by anti-scraping measures due to its lack of built-in stealth capabilities. 
This is where Puppeteer Extra comes in, offering plugins like Stealth to address this limitation.<\/p>\n<p>This tutorial will explore how to use Puppeteer Stealth to evade detection while scraping websites effectively. We also cover solutions and alternatives for bypassing the latest cutting-edge anti-bot tech, which Puppeteer Stealth sometimes struggles to evade.<\/p>"},{"title":"Shades of Success: The Trending E-commerce Colours of 2026","link":"https:\/\/www.scrapingbee.com\/blog\/shades-of-success-e-commerce-trending-colours\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/shades-of-success-e-commerce-trending-colours\/","description":"<p>As consumers, we love nothing more than jumping aboard a new micro trend or aesthetic, and platforms such as Pinterest and TikTok have made it easier than ever before to keep up with all the latest trends.<\/p>\n<p>Colour is at the heart of every fashion, interior and style trend, but in the fast-paced world of 2026, colour is so much more than pastel tones and monochrome palettes. It's no surprise that the likes of Dulux and Pantone release an annual 'colour of the year'.<\/p>"},{"title":"The Best Techniques for Effective Regex Scraping in Web Development","link":"https:\/\/www.scrapingbee.com\/blog\/regex-scraping\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/regex-scraping\/","description":"<p>Web scraping with Regular Expressions (regex) is a powerful technique that lets you extract specific patterns of text from web pages. Regex enables pattern-based text extraction, allowing you to pinpoint exactly what you want from the often messy HTML code behind websites. 
While regex scraping can be incredibly precise for targeted tasks, it\u2019s important to understand its limitations and how it stacks up against more automated solutions like ScrapingBee's <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >web scraping API<\/a>.<\/p>"},{"title":"Web Scraping without getting blocked (2026 Solutions)","link":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-without-getting-blocked\/","pubDate":"Thu, 01 Jan 2026 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/blog\/web-scraping-without-getting-blocked\/","description":"<p><strong>Web scraping<\/strong>, or <strong>crawling<\/strong>, is the process of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you need.<\/p>\n<blockquote>\n<p><em>&quot;But why don't you use the API for this?&quot;<\/em><\/p>\n<\/blockquote>\n<p>Not every website offers an API, and those that do might not expose all the information you need. Therefore, scraping often becomes the only viable solution to extract website data.<\/p>\n<p>There are numerous use cases for web scraping:<\/p>"},{"title":"Can I use XPath selectors in DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/can-i-use-xpath-selectors-in-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/can-i-use-xpath-selectors-in-dom-crawler\/","description":"<p>Yes, you can use XPath selectors in <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a>. 
Here is some sample code that uses <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/overview.html\" target=\"_blank\" >Guzzle<\/a> to load the <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee website<\/a> and then uses DOM Crawler's <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filterXPath<\/code> method<\/a> to extract and print the text content of the <code>h1<\/code> tag:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Create a client to make the HTTP request\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">\\GuzzleHttp\\Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/www.scrapingbee.com\/&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> (<span style=\"color:#a6e22e\">string<\/span>) $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span 
style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find the first h1 element on the page\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$h1 <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filterXPath<\/span>(<span style=\"color:#e6db74\">&#39;\/\/h1[1]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Get the text content of the h1 element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$text <span style=\"color:#f92672\">=<\/span> $h1<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">text<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Print the text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">echo<\/span> $text; \n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output: \n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;Tired of getting blocked while scraping the web?&#34;\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>If you do not want to use Guzzle, take a 
look at this sample code that directly passes in an HTML string:<\/p>"},{"title":"Does Guzzle use cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/does-guzzle-use-curl\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/does-guzzle-use-curl\/","description":"<p>Yes, <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/index.html\" target=\"_blank\" >Guzzle<\/a> uses cURL as one of the underlying HTTP transport adapters. However, Guzzle supports multiple adapters, including cURL, PHP stream, and sockets, which can be used interchangeably depending on your needs. By default, Guzzle uses cURL as the preferred adapter, as it provides a robust and feature-rich API for sending HTTP requests and handling responses. However, Guzzle also provides an abstraction layer that allows developers to switch between adapters seamlessly, without having to modify their application code.<\/p>"},{"title":"Handle Guzzle exception and get HTTP body?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/handle-guzzle-exception-and-get-http-body\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/handle-guzzle-exception-and-get-http-body\/","description":"<p>You can easily handle <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/quickstart.html#exceptions\" target=\"_blank\" >Guzzle exceptions<\/a> and get the HTTP body of the response (if it has any) by catching <code>RequestException<\/code>. This is a higher-level exception that covers <code>BadResponseException<\/code>, <code>TooManyRedirectsException<\/code>, and a few related exceptions.<\/p>\n<p>Here is how the exceptions in Guzzle depend on each other:<\/p>\n<pre tabindex=\"0\"><code>. 
\\RuntimeException\n\u2514\u2500\u2500 TransferException (implements GuzzleException)\n \u251c\u2500\u2500 ConnectException (implements NetworkExceptionInterface)\n \u2514\u2500\u2500 RequestException\n \u251c\u2500\u2500 BadResponseException\n \u2502 \u251c\u2500\u2500 ServerException\n \u2502 \u2514\u2500\u2500 ClientException\n \u2514\u2500\u2500 TooManyRedirectsException\n<\/code><\/pre><p>Here is an example of how to handle the <code>RequestException<\/code> in Guzzle and get the HTTP body (if there is one):<\/p>"},{"title":"How do I do HTTP basic authentication with Guzzle?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-do-i-do-http-basic-authentication-with-guzzle\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-do-i-do-http-basic-authentication-with-guzzle\/","description":"<p>You can easily do HTTP basic authentication with <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/index.html\" target=\"_blank\" >Guzzle<\/a> by passing in an <code>auth<\/code> array with the username and password as part of the options while creating the <code>Client<\/code> object. 
Guzzle will make sure to use these authentication credentials with all the follow-up requests made by the <code>$client<\/code>.<\/p>\n<p>Here is some sample code that uses an authentication endpoint at <a href=\"https:\/\/httpbin.org\/\" target=\"_blank\" >HTTP Bin<\/a> to demonstrate this:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Client<\/span>([\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;auth&#39;<\/span> <span style=\"color:#f92672\">=&gt;<\/span> [<span style=\"color:#e6db74\">&#39;user&#39;<\/span>, <span style=\"color:#e6db74\">&#39;passwd&#39;<\/span>]\n<\/span><\/span><span style=\"display:flex;\"><span>]);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/httpbin.org\/basic-auth\/user\/passwd&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$body <span style=\"color:#f92672\">=<\/span> $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">echo<\/span> $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getStatusCode<\/span>() <span style=\"color:#f92672\">.<\/span> <span 
style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">echo<\/span> $body;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ 200\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ {\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;authenticated&#34;: true,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;user&#34;: &#34;user&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ }\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Alternatively, you can specify the <code>auth<\/code> credentials on a per-request basis as well:<\/p>"},{"title":"How do you handle client error in Guzzle?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-do-you-handle-client-error-in-guzzle\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-do-you-handle-client-error-in-guzzle\/","description":"<p>You can easily handle client errors in <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/index.html\" target=\"_blank\" >Guzzle<\/a> by catching the thrown <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/quickstart.html#exceptions\" target=\"_blank\" >exceptions<\/a>. 
You can either catch the <code>RequestException<\/code> and it should cover most of the exceptions or you can catch the more specific <code>ClientException<\/code> which covers only the client exceptions such as 4xx status codes.<\/p>\n<p>Here is an example of some code that results in a <code>404 Not Found<\/code> exception that is handled by catching <code>ClientException<\/code> in Guzzle:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Exception\\ClientException<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">try<\/span> {\n<\/span><\/span><span style=\"display:flex;\"><span> $response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/httpbin.org\/status\/404&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Process response normally...\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>} <span style=\"color:#66d9ef\">catch<\/span> (<span style=\"color:#a6e22e\">ClientException<\/span> $e) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ An exception 
was raised but there is an HTTP response body\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#75715e\">\/\/ with the exception (in case of 404 and similar errors)\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> $response <span style=\"color:#f92672\">=<\/span> $e<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getResponse<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> $responseBodyAsString <span style=\"color:#f92672\">=<\/span> $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>()<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getContents<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getStatusCode<\/span>() <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $responseBodyAsString;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ 404\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>You can read more about various exceptions thrown by Guzzle in the <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/quickstart.html#exceptions\" target=\"_blank\" >official docs<\/a>.<\/p>"},{"title":"How to find all links using DOM Crawler and PHP?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-all-links-using-dom-crawler-and-php\/","pubDate":"Fri, 24 Feb 2023 09:10:27 
+0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-all-links-using-dom-crawler-and-php\/","description":"<p>You can find all links using <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> and PHP by making use of either the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filter<\/code> or the <code>filterXPath<\/code> method<\/a>. Below, you can find two code samples that demonstrate how to use either of these methods. The code uses <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/overview.html\" target=\"_blank\" >Guzzle<\/a> to load the <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee website<\/a> so you may want to install that as well using Composer.<\/p>\n<p>This example code uses the <code>filter<\/code> method:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Create a client to make the HTTP request\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">\\GuzzleHttp\\Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span 
style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/www.scrapingbee.com\/&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> (<span style=\"color:#a6e22e\">string<\/span>) $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all links on the page\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$links <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;a&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the links and print their href attributes\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($links <span style=\"color:#66d9ef\">as<\/span> $link) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $link<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getAttribute<\/span>(<span style=\"color:#e6db74\">&#39;href&#39;<\/span>) <span style=\"color:#f92672\">.<\/span> <span 
style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ https:\/\/dashboard.scrapingbee.com\/account\/login\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ https:\/\/dashboard.scrapingbee.com\/account\/register\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/#pricing\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/#faq\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/blog\/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ #\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/features\/screenshot\/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ \/features\/google\/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ ...\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>This example code uses <code>filterXPath<\/code> method:<\/p>"},{"title":"How to find elements without specific attributes in DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-elements-without-specific-attributes-in-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-elements-without-specific-attributes-in-dom-crawler\/","description":"<p>You have two options to find elements without specific attributes in 
<a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a>. The first option uses the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filterXPath<\/code> method<\/a> with an XPath selector that includes a negative predicate. And the second option uses the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filter<\/code> method<\/a> with the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/CSS\/:not\" target=\"_blank\" ><code>:not<\/code> CSS pseudo-class<\/a> and the attribute selector.<\/p>\n<p>Here is some sample code that showcases the <code>filterXPath<\/code> options and finds all <code>img<\/code> tags that do not have an <code>alt<\/code> attribute:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&lt;&lt;&lt;<\/span><span style=\"color:#e6db74\">EOD<\/span><span style=\"color:#e6db74\">\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;!DOCTYPE html&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;html&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;head&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\t&lt;title&gt;Example Page&lt;\/title&gt;\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;\/head&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;body&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\t&lt;h1&gt;Hello, world!&lt;\/h1&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\t&lt;p&gt;This is an example page.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\t&lt;img src=&#34;logo.png&#34; \/&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;img src=&#34;header.png&#34; alt=&#34;header&#34;\/&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;img src=&#34;yasoob.png&#34; alt=&#34;Photo of Yasoob&#34;\/&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;\/body&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;\/html&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"><\/span><span style=\"color:#e6db74\">EOD<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all img elements without an alt attribute\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$imagesWithoutAlt <span style=\"color:#f92672\">=<\/span> 
$crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filterXPath<\/span>(<span style=\"color:#e6db74\">&#39;\/\/img[not(@alt)]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the images and print their src attributes\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($imagesWithoutAlt <span style=\"color:#66d9ef\">as<\/span> $image) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $image<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getAttribute<\/span>(<span style=\"color:#e6db74\">&#39;src&#39;<\/span>) <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ logo.png\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>Here is some sample code that uses the <code>filter<\/code> method with <code>:not<\/code> CSS pseudo-class instead:<\/p>"},{"title":"How to find HTML elements by attribute using DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-attribute-using-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-attribute-using-dom-crawler\/","description":"<p>You can find HTML elements by attribute using <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> by utilizing the <a 
href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filterXPath<\/code> method<\/a> with an XPath selector that includes an attribute selector. Here's an example that uses the <code>filterXPath<\/code> method with an XPath selector to find all <code>input<\/code> elements on <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/login\" target=\"_blank\" >ScrapingBee's login page<\/a> that have a <code>type<\/code> attribute equal to <code>&quot;email&quot;<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Create a client to make the HTTP request\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">\\GuzzleHttp\\Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/dashboard.scrapingbee.com\/account\/login&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> (<span style=\"color:#a6e22e\">string<\/span>) $response<span style=\"color:#f92672\">-&gt;<\/span><span 
style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all input elements with a type attribute equal to &#34;email&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$textInputs <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filterXPath<\/span>(<span style=\"color:#e6db74\">&#39;\/\/input[@type=&#34;email&#34;]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the inputs and print their values\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($textInputs <span style=\"color:#66d9ef\">as<\/span> $input) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $input<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getAttribute<\/span>(<span style=\"color:#e6db74\">&#39;placeholder&#39;<\/span>) <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Enter your email\n<\/span><\/span><\/span><\/code><\/pre><\/div><p><strong>Note:<\/strong> This example uses Guzzle so you may have to install it.<\/p>"},{"title":"How to find HTML elements by class with DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-class-with-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-class-with-dom-crawler\/","description":"<p>You can find HTML elements by class with <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> by making use of the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filter<\/code> method<\/a> with a <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Glossary\/CSS_Selector\" target=\"_blank\" >CSS selector<\/a> that includes the class name. 
Here is some sample code that uses <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/overview.html\" target=\"_blank\" >Guzzle<\/a> to load <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee's homepage<\/a> and then uses the <code>filter<\/code> method to extract the tag with the class of <code>mb-[33px]<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Create a client to make the HTTP request\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">\\GuzzleHttp\\Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/scrapingbee.com\/&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> (<span style=\"color:#a6e22e\">string<\/span>) $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all elements with the class &#34;mb-[33px]&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$h1Tag <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;.mb-[33px]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the elements and print their text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($h1Tag <span style=\"color:#66d9ef\">as<\/span> $element) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $element<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">textContent<\/span> <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Tired of getting blocked while scraping the web?\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Try ScrapingBee for Free\n<\/span><\/span><\/span><\/code><\/pre><\/div><p><strong>Note:<\/strong> This example uses Guzzle so you may have to install 
it.<\/p>"},{"title":"How to find HTML elements by multiple tags with DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-multiple-tags-with-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-html-elements-by-multiple-tags-with-dom-crawler\/","description":"<p>You can find HTML elements by multiple tags with <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> by pairing the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filter<\/code> method<\/a> with a <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Glossary\/CSS_Selector\" target=\"_blank\" >CSS selector<\/a> that includes multiple tag names separated by commas. Here's an example that loads <a href=\"https:\/\/www.scrapingbee.com\/\" target=\"_blank\" >ScrapingBee's homepage<\/a> using <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/overview.html\" target=\"_blank\" >Guzzle<\/a> and then prints the text of all <code>h1<\/code> and <code>h2<\/code> tags using Dom Crawler:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Create a client to make the HTTP request\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#75715e\"><\/span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">\\GuzzleHttp\\Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/scrapingbee.com\/&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> (<span style=\"color:#a6e22e\">string<\/span>) $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all h1 and h2 headings on the page\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$headings <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;h1, h2&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the headings and print their text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($headings <span style=\"color:#66d9ef\">as<\/span> $element) 
{\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $element<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">textContent<\/span> <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Tired of getting blocked while scraping the web?\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Render your web page as if it were a real browser.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Render JavaScript to scrape any website.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Rotate proxies to bypass rate limiting.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Simple, transparent pricing.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Developers are asking...\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Who are we?\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Contact us\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Ready to get started?\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find sibling HTML nodes using DOM Crawler and PHP?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-sibling-html-nodes-using-dom-crawler-and-php\/","pubDate":"Fri, 24 Feb 2023 09:10:27 
+0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-find-sibling-html-nodes-using-dom-crawler-and-php\/","description":"<p>You can find sibling HTML nodes using <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> and PHP by utilizing the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-traversing\" target=\"_blank\" ><code>siblings<\/code> method<\/a> of a <code>Crawler<\/code> object. Here is some sample code that extracts the first <code>p<\/code> node, then extracts its siblings using the <code>siblings<\/code> method, and finally loops over these sibling nodes and prints their text content:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&lt;&lt;&lt;<\/span><span style=\"color:#e6db74\">EOD<\/span><span style=\"color:#e6db74\">\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the first paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the second paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the third paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> 
&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"><\/span><span style=\"color:#e6db74\">EOD<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find the first p element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$pElement <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;p&#39;<\/span>)<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">first<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all sibling elements of the p element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$siblings <span style=\"color:#f92672\">=<\/span> $pElement<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">siblings<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the siblings and print their text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($siblings <span style=\"color:#66d9ef\">as<\/span> $sibling) {\n<\/span><\/span><span 
style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $sibling<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">textContent<\/span> <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is the second paragraph.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is the third paragraph.\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to ignore SSL certificate error with Guzzle?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-ignore-ssl-certificate-error-with-guzzle\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-ignore-ssl-certificate-error-with-guzzle\/","description":"<p>You can easily ignore SSL certificate errors with <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/index.html\" target=\"_blank\" >Guzzle<\/a> by setting the <code>verify<\/code> option to <code>false<\/code> while creating a new Guzzle Client object.<\/p>\n<p>Here is some sample code that creates a new Guzzle client with <code>verify<\/code> set to <code>false<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$client <span style=\"color:#f92672\">=<\/span> <span 
style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Client<\/span>([<span style=\"color:#e6db74\">&#39;verify&#39;<\/span> <span style=\"color:#f92672\">=&gt;<\/span> <span style=\"color:#66d9ef\">false<\/span>]);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">get<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/example.com\/&#39;<\/span>);\n<\/span><\/span><\/code><\/pre><\/div><p>You can read more about the <code>verify<\/code> option in the <a href=\"https:\/\/docs.guzzlephp.org\/en\/5.3\/clients.html#verify\" target=\"_blank\" >official docs<\/a>.<\/p>\n<p>Do keep in mind that disabling SSL verification can compromise security and should be used with caution. It is generally recommended to only disable SSL verification for testing or development purposes and to enable it in production.<\/p>"},{"title":"How to scrape tables with DOM Crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-scrape-tables-with-dom-crawler\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-scrape-tables-with-dom-crawler\/","description":"<p>You can scrape tables with <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> by combining the regular <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Glossary\/CSS_Selector\" target=\"_blank\" >CSS selectors<\/a> with the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filter<\/code><\/a> and <code>each<\/code> methods to iterate over the rows and cells of the table.<\/p>\n<p>Here is some sample code that demonstrates how to scrape a simple HTML table using DOM Crawler:<\/p>\n<div 
class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&lt;&lt;&lt;<\/span><span style=\"color:#e6db74\">EOD<\/span><span style=\"color:#e6db74\">\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;table&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Name&lt;\/th&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Age&lt;\/th&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Occupation&lt;\/th&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Yasoob&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;35&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Software Engineer&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Pierre&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;28&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Product Manager&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/table&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"><\/span><span style=\"color:#e6db74\">EOD<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find the table element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$table <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;table&#39;<\/span>)<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">first<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the rows of the table\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$table<span style=\"color:#f92672\">-&gt;<\/span><span 
style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;tr&#39;<\/span>)<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">each<\/span>(<span style=\"color:#66d9ef\">function<\/span> ($row, $i) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Loop over the columns of the row\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> $row<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filter<\/span>(<span style=\"color:#e6db74\">&#39;td&#39;<\/span>)<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">each<\/span>(<span style=\"color:#66d9ef\">function<\/span> ($column, $j) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Print the text content of the column\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#66d9ef\">echo<\/span> $column<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">text<\/span>() <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span> });\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Yasoob\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ 35\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Software Engineer\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Pierre\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ 
28\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Product Manager\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to select values between two nodes in DOM Crawler and PHP?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-select-values-between-two-nodes-in-dom-crawler-and-php\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/dom-crawler\/how-to-select-values-between-two-nodes-in-dom-crawler-and-php\/","description":"<p>You can select values between two nodes in <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html\" target=\"_blank\" >DOM Crawler<\/a> by using the <a href=\"https:\/\/symfony.com\/doc\/current\/components\/dom_crawler.html#node-filtering\" target=\"_blank\" ><code>filterXPath<\/code> method<\/a> with an XPath expression that selects the nodes between the two nodes you want to use as anchors.<\/p>\n<p>Here is some sample code that prints the text content of all the nodes between the <code>h1<\/code> and <code>h2<\/code> nodes:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span style=\"color:#a6e22e\">Symfony\\Component\\DomCrawler\\Crawler<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$html <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&lt;&lt;&lt;<\/span><span style=\"color:#e6db74\">EOD<\/span><span style=\"color:#e6db74\">\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;h1&gt;Header 
1&lt;\/h1&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 1&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 2&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;h2&gt;Header 2&lt;\/h2&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 3&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"><\/span><span style=\"color:#e6db74\">EOD<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML document\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$crawler <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Crawler<\/span>($html);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all nodes between the h1 and h2 elements\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>$nodesBetweenHeadings <span style=\"color:#f92672\">=<\/span> $crawler<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">filterXPath<\/span>(<span style=\"color:#e6db74\">&#39;\/\/h1\/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">following-sibling::h2\/\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\tpreceding-sibling::*[\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#e6db74\">\t\tpreceding-sibling::h1\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">\t]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Loop over the nodes and print their text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">foreach<\/span> ($nodesBetweenHeadings <span style=\"color:#66d9ef\">as<\/span> $node) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">echo<\/span> $node<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">textContent<\/span> <span style=\"color:#f92672\">.<\/span> <span style=\"color:#a6e22e\">PHP_EOL<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><\/code><\/pre><\/div><p>The XPath expression used above can be read like this:<\/p>"},{"title":"How to send a POST request in JSON with Guzzle?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-send-a-post-request-in-json-with-guzzle\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-send-a-post-request-in-json-with-guzzle\/","description":"<p>You can send a POST request\u00a0with JSON data in Guzzle by passing in the JSON data as an array of key-value pairs via the <code>json<\/code> option.<\/p>\n<p>Here is some sample code that sends a request to <a href=\"https:\/\/httpbin.org\/\" target=\"_blank\" >HTTP Bin<\/a> with some sample JSON data:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-php\" data-lang=\"php\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">use<\/span> <span 
style=\"color:#a6e22e\">GuzzleHttp\\Client<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$client <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">new<\/span> <span style=\"color:#a6e22e\">Client<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>$response <span style=\"color:#f92672\">=<\/span> $client<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">post<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/httpbin.org\/post&#39;<\/span>, [\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#34;json&#34;<\/span> <span style=\"color:#f92672\">=&gt;<\/span> [\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;key1&#39;<\/span> <span style=\"color:#f92672\">=&gt;<\/span> <span style=\"color:#e6db74\">&#39;value1&#39;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;key2&#39;<\/span> <span style=\"color:#f92672\">=&gt;<\/span> <span style=\"color:#e6db74\">&#39;value2&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> ]\n<\/span><\/span><span style=\"display:flex;\"><span>]);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">echo<\/span> $response<span style=\"color:#f92672\">-&gt;<\/span><span style=\"color:#a6e22e\">getBody<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ {\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;args&#34;: {},\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#75715e\">\/\/ &#34;data&#34;: &#34;{\\&#34;key1\\&#34;:\\&#34;value1\\&#34;,\\&#34;key2\\&#34;:\\&#34;value2\\&#34;}&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;files&#34;: {},\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;form&#34;: {},\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;headers&#34;: {\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;Content-Length&#34;: &#34;33&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;Content-Type&#34;: &#34;application\/json&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;Host&#34;: &#34;httpbin.org&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;User-Agent&#34;: &#34;GuzzleHttp\/7&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;X-Amzn-Trace-Id&#34;: &#34;Root=1-63fa252d-60bf3c1b2258ff5903bdd116&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ },\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;json&#34;: {\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;key1&#34;: &#34;value1&#34;,\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;key2&#34;: &#34;value2&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ },\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;origin&#34;: &#34;119.73.117.169&#34;,\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ &#34;url&#34;: &#34;https:\/\/httpbin.org\/post&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ }\n<\/span><\/span><\/span><\/code><\/pre><\/div><p>You can read more about it in the <a href=\"http:\/\/docs.guzzlephp.org\/en\/latest\/request-options.html#json\" target=\"_blank\" >official Guzzle docs<\/a>.<\/p>"},{"title":"How to use proxy with authentication with Guzzle?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-use-proxy-with-authentication-with-guzzle\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/how-to-use-proxy-with-authentication-with-guzzle\/","description":"<p>You can use an authenticated proxy with <a href=\"https:\/\/docs.guzzlephp.org\/en\/stable\/index.html\" target=\"_blank\" >Guzzle<\/a> very easily. You just need to pass in a <code>proxy<\/code> option when either creating a new <code>Client<\/code> object or when making the actual request. 
If the proxy uses authentication, just include the authentication options as part of the proxy string.<\/p>\n<p>Here is what a proxy string with authentication parameters will look like:<\/p>\n<pre tabindex=\"0\"><code>http:\/\/username:password@proxyendpoint.com:port\n<\/code><\/pre><p>Make sure to replace the <code>username<\/code>, <code>password<\/code>, <code>proxyendpoint.com<\/code>, and <code>port<\/code> with the required values based on the proxy you are using.<\/p>"},{"title":"Is Guzzle a built-in PHP library?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/is-guzzle-a-built-in-php-library\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/is-guzzle-a-built-in-php-library\/","description":"<p>No, <a href=\"https:\/\/github.com\/guzzle\/guzzle\" target=\"_blank\" >Guzzle<\/a> is not a built-in PHP library. It is a third-party library that needs to be installed separately.<\/p>\n<p>To use Guzzle in your PHP application, you need to first install it using a package manager such as <a href=\"https:\/\/getcomposer.org\/\" target=\"_blank\" >Composer<\/a>, which is the recommended way of managing dependencies in PHP projects. Once you have installed Guzzle, you can then include it in your PHP code and use its API to send HTTP requests and handle responses.<\/p>"},{"title":"Is PHP Guzzle deprecated?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/is-php-guzzle-deprecated\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/is-php-guzzle-deprecated\/","description":"<p>No, <a href=\"https:\/\/github.com\/guzzle\/guzzle\" target=\"_blank\" >Guzzle<\/a> is not deprecated. 
It is still actively maintained and supported by the developers.<\/p>\n<p>Although there have been some changes in the PHP ecosystem in recent years, such as the introduction of the PSR-7 HTTP message interfaces, Guzzle has adapted to these changes and continues to provide a modern and flexible API for working with HTTP.<\/p>\n<p>If you don't know what Guzzle is, it is a popular PHP library for sending HTTP requests and handling responses, and it is widely used in many PHP projects. The library provides a robust and feature-rich API for interacting with HTTP services, making it an essential tool for PHP developers who need to work with web APIs or other HTTP-based services.<\/p>"},{"title":"What is Guzzle used for in PHP?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/what-is-guzzle-used-for-in-php\/","pubDate":"Fri, 24 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/guzzle\/what-is-guzzle-used-for-in-php\/","description":"<p><a href=\"https:\/\/github.com\/guzzle\/guzzle\" target=\"_blank\" >Guzzle<\/a> is a popular PHP library for sending HTTP requests and handling responses. It provides a flexible and feature-rich API for working with HTTP services and APIs, making it a valuable tool for many PHP developers.<\/p>\n<p>Here are some of the main use cases for Guzzle in PHP:<\/p>\n<ol>\n<li>Sending HTTP requests: Guzzle allows you to easily send HTTP requests using a variety of HTTP methods (GET, POST, PUT, DELETE, etc.) 
and set headers, query parameters, request bodies, and other options.<\/li>\n<li>Handling HTTP responses: Guzzle provides a powerful and flexible API for handling HTTP responses, including support for response headers, status codes, response bodies, and error handling.<\/li>\n<li>Working with web APIs: Guzzle is often used to interact with web APIs, allowing you to easily consume and manipulate data from a remote API in your PHP application.<\/li>\n<li>Testing HTTP services: Guzzle can also be used for testing HTTP services and APIs, providing a convenient and flexible way to write automated tests for your application's HTTP interactions.<\/li>\n<\/ol>\n<p>If you are working with any sort of remote HTTP requests in your PHP application, chances are that you will end up using Guzzle.<\/p>"},{"title":"Can I use XPath selectors in Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/can-i-use-xpath-selectors-in-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/can-i-use-xpath-selectors-in-cheerio\/","description":"<p>No, you can not use XPath selectors in <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a>. According to <a href=\"https:\/\/github.com\/cheeriojs\/cheerio\/issues\/152\" target=\"_blank\" >these<\/a> <a href=\"https:\/\/github.com\/cheeriojs\/cheerio\/issues\/1098\" target=\"_blank\" >GitHub issues<\/a>, there is no plan to support XPaths in Cheerio.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/github-issue.png\" alt=\"GitHub Issue\"><\/p>\n<p>However, if you simply want to work with XML documents and parse those using Cheerio, it is possible. 
Here is some sample code for parsing XML using Cheerio.<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">xml<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;bookstore&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;book category=&#34;web&#34;&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;title lang=&#34;en&#34;&gt;Practical Python Projects&lt;\/title&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;author&gt;Yasoob Khalid&lt;\/author&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;year&gt;2022&lt;\/year&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;price&gt;39.95&lt;\/price&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/book&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;book category=&#34;web&#34;&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;title lang=&#34;en&#34;&gt;Intermediate Python&lt;\/title&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#e6db74\"> &lt;author&gt;Yasoob Khalid&lt;\/author&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;year&gt;2018&lt;\/year&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;price&gt;29.99&lt;\/price&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/book&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/bookstore&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the XML document as a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">xml<\/span>, { <span style=\"color:#a6e22e\">xml<\/span><span style=\"color:#f92672\">:<\/span> <span style=\"color:#66d9ef\">true<\/span> });\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Select all book titles \n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">titles<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;book &gt; title&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Print the text content of each 
title\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">titles<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">title<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">title<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Practical Python Projects\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Intermediate Python\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find elements without specific attributes in Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-elements-without-specific-attributes-in-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-elements-without-specific-attributes-in-cheerio\/","description":"<p>You can find elements without specific attributes in <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> by using the <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Web\/CSS\/:not\" target=\"_blank\" ><code>:not<\/code> CSS pseudo-class<\/a> and the attribute selector.<\/p>\n<p>Here's an example that demonstrates how to find all <code>div<\/code> elements without a <code>class<\/code> attribute using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" 
style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div class=&#34;content&#34;&gt;This div has a class attribute&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;This div does not have a class attribute&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div class=&#34;footer&#34;&gt;This div also has a class attribute&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all div elements without a class attribute using the :not pseudo-class 
and the attribute selector\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">divsWithoutClass<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;div:not([class])&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each div element without a class attribute and print its text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">divsWithoutClass<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">div<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">div<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output: \n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This div does not have a class attribute\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find HTML elements by attribute using Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-attribute-using-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-attribute-using-cheerio\/","description":"<p>You can find HTML elements by attribute in <a 
href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> using the attribute selector.<\/p>\n<p>Here's some sample code that demonstrates how to find all div elements with a data-attribute of &quot;example&quot; using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div data-example=&#34;1&#34;&gt;This div has a data-example attribute with a value of 1&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div data-example=&#34;2&#34;&gt;This div has a data-example attribute with a value of 2&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;This div does not have a data-example attribute&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span 
style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all div elements with a data-example attribute of &#34;1&#34; using the attribute selector\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">divsWithAttribute<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;div[data-example=&#34;1&#34;]&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each div element with a data-example attribute of &#34;1&#34; and print its text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">divsWithAttribute<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">div<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">div<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output: \n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This div has a data-example attribute with a value of 1\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find HTML elements by class with 
Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-class-with-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-class-with-cheerio\/","description":"<p>You can find HTML elements by class in <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> by using the class selector.<\/p>\n<p>Here's some sample code that demonstrates how to find all <code>div<\/code> elements with a <code>class<\/code> of <code>example<\/code> using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div class=&#34;example&#34;&gt;This div has a class of example&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div class=&#34;example&#34;&gt;This div also has a class of example&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;This div does not have a class of example&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all div elements with a class of &#34;example&#34; using the class selector\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">divsWithClass<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;div.example&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each div element with a class of &#34;example&#34; and print its text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">divsWithClass<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">div<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">div<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#75715e\">\/\/ Output: \n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This div has a class of example\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This div also has a class of example\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find HTML elements by multiple tags with Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-multiple-tags-with-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-html-elements-by-multiple-tags-with-cheerio\/","description":"<p>You can find HTML elements by multiple tags in <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> by separating the tag selectors with a <code>,<\/code>.<\/p>\n<p>Here's some sample code that demonstrates how to find all <code>div<\/code> and <code>span<\/code> elements using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;This is a div element&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;span&gt;This is a span 
element&lt;\/span&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;This is another div element&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all div and span elements\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">divsAndSpans<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;div, span&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each div and span element and print its text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">divsAndSpans<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">element<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span 
style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">element<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is a div element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is a span element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is another div element\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find sibling HTML nodes using Cheerio and NodeJS?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-sibling-html-nodes-using-cheerio-and-nodejs\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-find-sibling-html-nodes-using-cheerio-and-nodejs\/","description":"<p>You can find sibling HTML nodes using <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> and Node.js by utilizing the <code>siblings<\/code> method of a Cheerio object.<\/p>\n<p>Here's some sample code that demonstrates how to find all sibling elements of a given element using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the first paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the second paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;This is the third paragraph.&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Select the second paragraph element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">secondParagraph<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;p:nth-of-type(2)&#39;<\/span>);\n<\/span><\/span><span 
style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Find all sibling elements of the second paragraph using the siblings method\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">siblingElements<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">secondParagraph<\/span>.<span style=\"color:#a6e22e\">siblings<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each sibling element and print its text content\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">siblingElements<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">element<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">element<\/span>).<span style=\"color:#a6e22e\">text<\/span>());\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is the first paragraph.\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ This is the third paragraph.\n<\/span><\/span><\/span><\/code><\/pre><\/div><p><strong>Note:<\/strong> <code>p:nth-of-type(2)<\/code> is used to select the second paragraph element. 
You can replace it with any other appropriate selector.<\/p>"},{"title":"How to scrape tables with Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-scrape-tables-with-cheerio\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-scrape-tables-with-cheerio\/","description":"<p>You can scrape tables with <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> by combining the regular <a href=\"https:\/\/developer.mozilla.org\/en-US\/docs\/Glossary\/CSS_Selector\" target=\"_blank\" >CSS selectors<\/a> with the <a href=\"https:\/\/cheerio.js.org\/docs\/basics\/traversing\/#find\" target=\"_blank\" ><code>find<\/code><\/a> and <code>each<\/code> methods to iterate over the rows and cells of the table.<\/p>\n<p>Here's some sample code that demonstrates how to scrape a simple HTML table using Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;table&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Name&lt;\/th&gt;\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Age&lt;\/th&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;th&gt;Occupation&lt;\/th&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Yasoob&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;35&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Software Engineer&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Pierre&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;28&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;td&gt;Product Manager&lt;\/td&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/tr&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/table&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span 
style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Select the table element\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">table<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;table&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Initialize an empty array to store the table data\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">tableData<\/span> <span style=\"color:#f92672\">=<\/span> [];\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Iterate over each row of the table using the find and each methods\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">table<\/span>.<span style=\"color:#a6e22e\">find<\/span>(<span style=\"color:#e6db74\">&#39;tr&#39;<\/span>).<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">row<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Initialize an empty object to store the row data\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span 
style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">rowData<\/span> <span style=\"color:#f92672\">=<\/span> {};\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Iterate over each cell of the row using the find and each methods\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">row<\/span>).<span style=\"color:#a6e22e\">find<\/span>(<span style=\"color:#e6db74\">&#39;td, th&#39;<\/span>).<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">j<\/span>, <span style=\"color:#a6e22e\">cell<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Add the cell data to the row data object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">rowData<\/span>[<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">cell<\/span>).<span style=\"color:#a6e22e\">text<\/span>()] <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">j<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span> });\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Add the row data to the table data array\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">tableData<\/span>.<span style=\"color:#a6e22e\">push<\/span>(<span style=\"color:#a6e22e\">rowData<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>});\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Print the table data\n<\/span><\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">tableData<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ [\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ { Name: 0, Age: 1, Occupation: 2 },\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ { &#39;35&#39;: 1, Yasoob: 0, &#39;Software Engineer&#39;: 2 },\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ { &#39;28&#39;: 1, Pierre: 0, &#39;Product Manager&#39;: 2 }\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ ]\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to select values between two nodes in Cheerio and Node.js?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-select-values-between-two-nodes-in-cheerio-and-nodejs\/","pubDate":"Thu, 23 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-to-select-values-between-two-nodes-in-cheerio-and-nodejs\/","description":"<p>You can select values between two nodes in <a href=\"https:\/\/cheerio.js.org\/\" target=\"_blank\" >Cheerio<\/a> and Node.js by making use of a combination of the <a href=\"https:\/\/cheerio.js.org\/docs\/basics\/traversing\/#nextuntil-and-prevuntil\" target=\"_blank\" ><code>nextUntil<\/code><\/a> and <code>map<\/code> methods to iterate over the elements between the two nodes and extract the desired values.<\/p>\n<p>Here's an example that demonstrates how to select values between two nodes in a simple HTML structure using 
Cheerio:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">html<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">`\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;h1&gt;Header 1&lt;\/h1&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 1&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 2&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;h2&gt;Header 2&lt;\/h2&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;p&gt;Paragraph 3&lt;\/p&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">`<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Load the HTML content into a Cheerio object\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span 
style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Select the first and second nodes using the CSS selector\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">startNode<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;h1&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">endNode<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;h2&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Use the nextUntil method to select all elements between the start and end nodes\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">betweenNodes<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">startNode<\/span>.<span style=\"color:#a6e22e\">nextUntil<\/span>(<span style=\"color:#a6e22e\">endNode<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Use the map method to extract the text content of the elements\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">valuesBetweenNodes<\/span> <span 
style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">betweenNodes<\/span>.<span style=\"color:#a6e22e\">map<\/span>((<span style=\"color:#a6e22e\">i<\/span>, <span style=\"color:#a6e22e\">el<\/span>) =&gt; <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">el<\/span>).<span style=\"color:#a6e22e\">text<\/span>()).<span style=\"color:#a6e22e\">get<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Print the selected values\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span><span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">valuesBetweenNodes<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ Output:\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\">\/\/ [ &#39;Paragraph 1&#39;, &#39;Paragraph 2&#39; ]\n<\/span><\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to load local files in Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-load-local-files-in-puppeteer\/","pubDate":"Wed, 15 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-load-local-files-in-puppeteer\/","description":"<p>You can load local files in Puppeteer by using the same <code>page.goto<\/code> method that you use for URLs, but you need to provide it with the file URL using the file protocol (<code>file:\/\/<\/code>). 
The file path must be an absolute path.<\/p>\n<p>Here's some example code that opens a file located at <code>\/Users\/yasoob\/Desktop\/ScrapingBee\/index.html<\/code>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">puppeteer<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;puppeteer&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">filePath<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#39;file:\/\/&#39;<\/span><span style=\"color:#f92672\">+<\/span><span style=\"color:#e6db74\">&#39;\/Users\/yasoob\/Desktop\/ScrapingBee\/index.html&#39;<\/span>;\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">async<\/span> <span style=\"color:#66d9ef\">function<\/span> <span style=\"color:#a6e22e\">loadLocalFile<\/span>() {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">browser<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">puppeteer<\/span>.<span style=\"color:#a6e22e\">launch<\/span>({\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">headless<\/span><span style=\"color:#f92672\">:<\/span> <span style=\"color:#66d9ef\">false<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> });\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">const<\/span> 
<span style=\"color:#a6e22e\">page<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">browser<\/span>.<span style=\"color:#a6e22e\">newPage<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">page<\/span>.<span style=\"color:#66d9ef\">goto<\/span>(<span style=\"color:#a6e22e\">filePath<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#a6e22e\">loadLocalFile<\/span>();\n<\/span><\/span><\/code><\/pre><\/div><p>It's important to note that loading local files in most browsers is subject to the same-origin policy, which means that the loaded file should come from the same origin as the web page running the JavaScript code. Additionally, it is important to make sure that the path being accessed is accessible by the running script. You can read more about these security implications in <a href=\"https:\/\/stackoverflow.com\/questions\/29371600\/chrome-browser-security-implications-of-allow-file-access-from-files\" target=\"_blank\" >this StackOverflow answer<\/a>.<\/p>"},{"title":"How to run Puppeteer in Jupyter notebooks?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-run-puppeteer-in-jupyter-notebooks\/","pubDate":"Wed, 15 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-run-puppeteer-in-jupyter-notebooks\/","description":"<p>You can run Puppeteer in Jupyter notebook by using a JavaScript kernel instead of the default Python one. There is the famous <a href=\"https:\/\/github.com\/n-riesco\/ijavascript\" target=\"_blank\" >IJavaScript<\/a> kernel but that does not work with Puppeteer. The reason is that Puppeteer is async and needs a kernel that supports that. 
You can instead use <a href=\"https:\/\/www.npmjs.com\/package\/ijavascript-await\" target=\"_blank\" >this patched version<\/a> of the IJavaScript kernel that adds this async support.<\/p>\n<p>Assuming that you already have <code>jupyter<\/code> installed, you can install the <a href=\"https:\/\/www.npmjs.com\/package\/ijavascript-await\" target=\"_blank\" >patched IJavaScript<\/a> kernel using npm:<\/p>"},{"title":"How to wait for page to load in Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-wait-for-page-to-load-in-puppeteer\/","pubDate":"Wed, 15 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-wait-for-page-to-load-in-puppeteer\/","description":"<p>You can wait for the page to load in Puppeteer by using the <code>waitForSelector<\/code> method. This will pause execution until a specific element shows up on the page and indicates that the page has fully loaded. This feature is extremely helpful while performing web scraping on dynamic websites.<\/p>\n<p>Here is some sample code that opens up <a href=\"https:\/\/scrapingbee.com\" target=\"_blank\" >ScrapingBee homepage<\/a> and waits for the content section to show up:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">puppeteer<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;puppeteer&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">async<\/span> <span style=\"color:#66d9ef\">function<\/span> <span style=\"color:#a6e22e\">waitForSelector<\/span>() {\n<\/span><\/span><span 
style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">browser<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">puppeteer<\/span>.<span style=\"color:#a6e22e\">launch<\/span>({\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">headless<\/span><span style=\"color:#f92672\">:<\/span> <span style=\"color:#66d9ef\">false<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> });\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">page<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">browser<\/span>.<span style=\"color:#a6e22e\">newPage<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">page<\/span>.<span style=\"color:#66d9ef\">goto<\/span>(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">await<\/span> <span style=\"color:#a6e22e\">page<\/span>.<span style=\"color:#a6e22e\">waitForSelector<\/span>(<span style=\"color:#e6db74\">&#39;#content&#39;<\/span>, { <span style=\"color:#a6e22e\">timeout<\/span><span style=\"color:#f92672\">:<\/span> <span style=\"color:#ae81ff\">5_000<\/span> });\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Do whatever you want with the page next\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span>}\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#a6e22e\">waitForSelector<\/span>();\n<\/span><\/span><\/code><\/pre><\/div><p>You can read more about the <code>waitForSelector<\/code> API in the <a href=\"https:\/\/pptr.dev\/api\/puppeteer.page.waitforselector\" target=\"_blank\" >official docs<\/a>.<\/p>"},{"title":"Who owns Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/who-owns-playwright\/","pubDate":"Wed, 15 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/who-owns-playwright\/","description":"<p>Playwright is an open-source web automation framework, which is developed and maintained by Microsoft, as well as a community of contributors from all around the world. The development of Playwright takes place on GitHub and the contributors have to sign a <a href=\"https:\/\/cla.opensource.microsoft.com\/\" target=\"_blank\" >one-time Contributor License Agreement<\/a> before making contributions to the project. The project has an Apache 2.0 License which allows users to freely use the framework in their private as well as commercial projects.<\/p>"},{"title":"Why do we need Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/why-do-we-need-playwright\/","pubDate":"Wed, 15 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/why-do-we-need-playwright\/","description":"<p>Playwright is a web automation framework that allows developers to automate web applications and browsers, much like Selenium and Puppeteer. 
It provides a powerful, flexible, and reliable way to automate end-to-end testing, browser automation, and web scraping in <a href=\"https:\/\/playwright.dev\/python\/docs\/intro\" target=\"_blank\" >Python<\/a>, <a href=\"https:\/\/playwright.dev\/dotnet\/docs\/intro\" target=\"_blank\" >.NET<\/a>, <a href=\"https:\/\/playwright.dev\/java\/docs\/intro\" target=\"_blank\" >Java<\/a>, or <a href=\"https:\/\/github.com\/microsoft\/playwright\" target=\"_blank\" >Node.js<\/a>.<\/p>\n<p>One of the major features of Playwright is its ability to support multiple web browsers, such as Chromium, Firefox, and Webkit-based Safari, out of the box, which allows developers to test their web apps on different browsers with minimal effort.<\/p>"},{"title":"How do I read a JSON in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-do-i-read-a-json-in-python\/","pubDate":"Sat, 11 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-do-i-read-a-json-in-python\/","description":"<p>To read a JSON file in Python, you can use the built-in <code>json<\/code> module. 
Here is a sample <code>file.json<\/code> file:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-json\" data-lang=\"json\"><span style=\"display:flex;\"><span>{\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;name&#34;<\/span>: <span style=\"color:#e6db74\">&#34;John Doe&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;age&#34;<\/span>: <span style=\"color:#ae81ff\">32<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;address&#34;<\/span>: {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;street&#34;<\/span>: <span style=\"color:#e6db74\">&#34;123 Main St&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;city&#34;<\/span>: <span style=\"color:#e6db74\">&#34;Anytown&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;state&#34;<\/span>: <span style=\"color:#e6db74\">&#34;CA&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> }\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><\/code><\/pre><\/div><p>And here is some sample Python code for reading this file:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> json\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> open(<span style=\"color:#e6db74\">&#39;file.json&#39;<\/span>, <span style=\"color:#e6db74\">&#39;r&#39;<\/span>) <span style=\"color:#66d9ef\">as<\/span> 
json_file:\n<\/span><\/span><span style=\"display:flex;\"><span> data <span style=\"color:#f92672\">=<\/span> json<span style=\"color:#f92672\">.<\/span>load(json_file)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>print(data[<span style=\"color:#e6db74\">&#34;name&#34;<\/span>])\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Output: John Doe<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>The <code>json.load<\/code> method reads from a file object and converts the JSON into a Python dictionary; its counterpart <code>json.loads<\/code> does the same for a JSON string. You can read more about the JSON library in the <a href=\"https:\/\/docs.python.org\/3\/library\/json.html\" target=\"_blank\" >official Python docs<\/a>.<\/p>"},{"title":"How does JSON parser work?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-does-json-parser-work\/","pubDate":"Sat, 11 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/how-does-json-parser-work\/","description":"<p>A JSON (JavaScript Object Notation) parser is a program that reads a JSON-formatted text file and converts it into a more easily usable data structure, such as a dictionary or a list in Python or an object in JavaScript.<\/p>\n<p>The parser works by tokenizing the input JSON text, breaking it up into individual elements such as keys, values, and punctuation. 
It then builds a data structure, such as a dictionary, a list, or an object, that corresponds to the structure of the input JSON.<\/p>"},{"title":"What is a JSON parser?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/what-is-a-json-parser\/","pubDate":"Sat, 11 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/json\/what-is-a-json-parser\/","description":"<p>A JSON parser is a software component or library that reads a JSON (JavaScript Object Notation) formatted text file and converts it into a more usable data structure, such as a dictionary or a list in Python, or an object in JavaScript.<\/p>\n<p>JSON is a text-based, human-readable format for representing structured data. It is commonly used for transmitting data between a server and a web application or for storing data in a file or a database. A JSON parser provides a way to read JSON text and convert it into a more usable data structure, making it easier to access and manipulate the data.<\/p>"},{"title":"Are HTTP websites safe?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/are-http-websites-safe\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/are-http-websites-safe\/","description":"<p>HTTP websites are not as secure as HTTPS websites. In HTTP, the communication between the client and server is not encrypted, so it's possible for someone to intercept and view sensitive information like passwords and credit card numbers. On the other hand, HTTPS encrypts the communication, providing a secure connection and protecting the privacy of users. It's recommended to use HTTPS for websites that handle sensitive information. 
Moreover, with the availability of free SSL certificates by <a href=\"https:\/\/letsencrypt.org\" target=\"_blank\" >Let's Encrypt<\/a>, there is very little reason to still use naked HTTP.<\/p>"},{"title":"Does WebCrawler still exist?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/does-webcrawler-still-exist\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/does-webcrawler-still-exist\/","description":"<p>WebCrawler still exists and is chugging along. According <a href=\"https:\/\/en.wikipedia.org\/wiki\/WebCrawler\" target=\"_blank\" >to Wikipedia<\/a>, the website last changed hands in 2016 and the homepage was redesigned in 2018.<\/p>\n<p>Since then it has operated under the same company: System1.<\/p>\n<p>It is not as popular as it used to be; however, you can still search for information on the platform and get relevant results.<\/p>\n<p>According to SimilarWeb, WebCrawler has only 240,000 monthly visitors, which does not even place it among the top 100,000 websites in the world.<\/p>"},{"title":"How do I hide my IP address for free?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/how-do-i-hide-my-ip-address-for-free\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/how-do-i-hide-my-ip-address-for-free\/","description":"<p>There are several ways to hide your IP address for free:<\/p>\n<ol>\n<li><strong>Use a free proxy server:<\/strong> You can use a free proxy server to hide your IP address and browse the web anonymously. However, it is important to keep in mind that free proxies can be risky to use as they may be operated by malicious individuals who could use them to snoop and steal your personal data or compromise your security. 
You can get a free proxy from the <a href=\"http:\/\/free-proxy.cz\/en\/proxylist\/country\/all\/socks5\/ping\/all\" target=\"_blank\" >Free Proxy<\/a> or similar websites.<\/li>\n<li><strong>Use a free VPN (Virtual Private Network):<\/strong> Some VPN services offer a free version that allows you to hide your IP address, encrypt your internet traffic, and browse the web securely. However, free VPN services may have data usage or speed limitations and may not be as secure as paid services. You can use <a href=\"https:\/\/protonvpn.com\/\" target=\"_blank\" >ProtonVPN<\/a>. It is provided by a reliable company with a good track record and has a free plan.<\/li>\n<li><strong>Use the Tor browser:<\/strong> The Tor browser is a free, open-source browser that routes your internet traffic through a series of servers to hide your IP address and provide anonymity. The Tor browser is highly secure but can be slower than a proxy or VPN as it routes the traffic through multiple successive servers (like the layers of an onion :)). You can download Tor <a href=\"https:\/\/www.torproject.org\/download\/\" target=\"_blank\" >from here<\/a>.<\/li>\n<\/ol>\n<p>There is also an option to use the freemium plans of paid proxy services like <a href=\"https:\/\/scrapingbee.com\" target=\"_blank\" >ScrapingBee<\/a>. You can only make a limited amount of proxied requests using the freemium plans of such services but if your needs are small then this might suffice.<\/p>"},{"title":"Is Google a web crawler?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/is-google-a-web-crawler\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/is-google-a-web-crawler\/","description":"<p>Google is most definitely a web crawler. They operate a web crawler with the name of Googlebot which searches for new websites, crawls them, and saves them in the massive search engine database. 
This is how Google powers its search engine and keeps it fresh with results from new websites. You can learn more about Googlebot over at Google's <a href=\"https:\/\/developers.google.com\/search\/docs\/crawling-indexing\/googlebot\" target=\"_blank\" >documentation website<\/a>.<\/p>\n<p>So yes, Google operates a web crawler, but it should not be confused with WebCrawler, a separate company that also crawls the web.<\/p>"},{"title":"Is it better to use IPv6 or IPv4?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-it-better-to-use-ipv6-or-ipv4\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-it-better-to-use-ipv6-or-ipv4\/","description":"<p>It is generally considered better to use IPv6, the newer version of the Internet Protocol (IP) that succeeds IPv4. There are several reasons for this:<\/p>\n<ol>\n<li><strong>Larger Address Space:<\/strong> IPv6 has a much larger address space than IPv4, which allows for a much larger number of unique IP addresses. This is important as the increasing number of devices connecting to the internet is rapidly depleting the available IPv4 addresses.<\/li>\n<li><strong>Improved Security:<\/strong> IPv6 includes built-in security features, such as IPsec (Internet Protocol Security) encryption, which helps to protect against attacks and improve the overall security of the internet.<\/li>\n<li><strong>Better Support for Mobile Devices:<\/strong> IPv6 has better support for mobile devices and enables easier network transitions for mobile users, allowing for smoother and more efficient mobile connectivity. 
This is possible because IPv6 gets rid of the NAT and allows for a <a href=\"https:\/\/www.extremetech.com\/mobile\/145765-ipv6-makes-mobile-networks-faster\" target=\"_blank\" >few different optimizations<\/a>.<\/li>\n<li><strong>More Efficient Routing:<\/strong> IPv6 uses simpler and more efficient routing algorithms, which helps to reduce network congestion and improve network performance.<\/li>\n<\/ol>\n<p>That being said, IPv4 is still widely used and many networks continue to use both IPv4 and IPv6, with IPv4 being used as a fallback for devices that do not support IPv6. The transition to IPv6 is ongoing and is expected to take several more years to complete.<\/p>"},{"title":"Is it legal to use proxies?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-it-legal-to-use-proxies\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-it-legal-to-use-proxies\/","description":"<p>Using a proxy server in and of itself is not illegal. However, the legality of using a proxy depends on how it is being used and in which jurisdiction.<\/p>\n<p>In some countries, using a proxy to bypass internet censorship or access restricted websites may be illegal. In other countries, the use of a proxy to protect privacy is allowed and protected by law. Some of the countries which completely or partially block proxies and VPNs include:<\/p>"},{"title":"Is SOCKS5 the same as VPN?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-socks5-same-as-vpn\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/is-socks5-same-as-vpn\/","description":"<p>No, SOCKS5 and VPN are not the same things.<\/p>\n<p>SOCKS5 is a proxy protocol that provides routing for network traffic, allowing clients to bypass network restrictions and access the internet securely and anonymously. 
SOCKS5 does not provide encryption for the data being sent through the proxy, meaning that your internet traffic can be intercepted and monitored by third parties. However, because it skips encryption, it can be slightly faster than a VPN.<\/p>"},{"title":"Should I use IPv6 at home?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/should-i-use-ipv6-at-home\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/should-i-use-ipv6-at-home\/","description":"<p>Yes, you can use IPv6 at home. In fact, it is recommended to use IPv6 as it is the future of the internet and provides many benefits over IPv4, such as a larger address space, improved security, and better network auto-configuration capabilities. However, whether or not you can or should use IPv6 at home depends on a few factors:<\/p>\n<ol>\n<li><strong>Availability:<\/strong> IPv6 is not yet widely available and many home internet service providers do not support it. Before you can use IPv6 at home, you need to ensure that your internet service provider supports it and that your home network is configured to use it.<\/li>\n<li><strong>Devices:<\/strong> Many devices, such as smartphones, laptops, and smart home devices, already support IPv6, but others, such as older devices, may not. You should check to see if all the devices on your home network support IPv6 and if not, whether they can be upgraded to support it.<\/li>\n<li><strong>Performance:<\/strong> IPv6 can provide faster and more reliable connections, but this will depend on the quality of your network connection and your ISP's support for IPv6. 
Simply upgrading to IPv6 will not magically solve all of the performance issues if there is a separate underlying cause.<\/li>\n<\/ol>\n<p>So as you see, IPv6 is the preferred protocol to use but you may have some dependencies that will prevent a complete adoption of this newer IP protocol.<\/p>"},{"title":"What are examples of proxies?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/what-are-examples-of-proxies\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/what-are-examples-of-proxies\/","description":"<p>A proxy is a server that acts as an intermediary between a client and a server, forwarding requests from clients to servers and vice versa. Here are some examples of different types of proxies used in web scraping:<\/p>\n<ol>\n<li><strong>Data Center Proxies:<\/strong> These are proxy servers that are owned and operated by data centers. They are used to hide the user's real IP address and provide a different IP address from the data center's pool of IP addresses. They can be sourced from regional data centers or from AWS, Google, and other similar cloud providers. Data center proxies are typically faster than residential proxies but are easily detectable by websites and services that block proxy usage.<\/li>\n<li><strong>Residential Proxies:<\/strong> These are proxy servers that use residential IP addresses provided by internet service providers (ISPs). They are considered better than data center proxies because they provide a real IP address from a physical location and are less likely to be detected as a proxy. However, they tend to be slower and more expensive than data center proxies.<\/li>\n<li><strong>4G Proxies:<\/strong> These are proxy servers that use mobile 4G network IP addresses. They are similar to residential proxies, providing a real IP address from a physical location, but they also offer the added benefit of mobility. 
However, the speed and reliability of 4G proxies can vary depending on the proxy location and network conditions.<\/li>\n<\/ol>\n<p>If you ever have to use proxies, make sure you get them from a reliable provider like <a href=\"https:\/\/scrapingbee.com\" target=\"_blank\" >ScrapingBee<\/a> as some providers in the market source these proxies using illegal and shady tactics.<\/p>"},{"title":"What are the 3 types of HTTP cookies?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/what-are-the-three-types-of-http-cookies\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/what-are-the-three-types-of-http-cookies\/","description":"<p>The three types of HTTP cookies are:<\/p>\n<ol>\n<li><strong>Session Cookies:<\/strong> These are temporary cookies that are stored in the browser's memory only while a user is on a website. Once the user closes the browser, the session cookie is deleted.<\/li>\n<li><strong>Persistent Cookies:<\/strong> These are also known as first-party cookies. They have an expiration date and are stored on the user's device for a specified period of time, even after the user has closed the browser.<\/li>\n<li><strong>Third-party Cookies:<\/strong> These are also referred to as tracking cookies. They are set by a domain other than the one the user is visiting. 
For example, a user visiting a website might see ads served by an ad network that uses third-party cookies to track the user's behavior and show relevant ads.<\/li>\n<\/ol>"},{"title":"What is a proxy vs VPN?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/what-is-a-proxy-vs-vpn\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/what-is-a-proxy-vs-vpn\/","description":"<p>A proxy and a VPN (Virtual Private Network) both provide a means to hide your IP address and protect your privacy online, but they differ in several key ways:<\/p>\n<ol>\n<li><strong>Purpose:<\/strong> A proxy and a VPN are both designed to act as intermediaries between a client and a server and forward network requests between them. However, a VPN works on the operating system level and usually routes all of the network traffic, whereas, a proxy works at the application level and routes only a specific application's traffic.<\/li>\n<li><strong>Security:<\/strong> A proxy typically provides minimal security and encryption. The traffic going through a proxy is usually not encrypted. Whereas a VPN provides a high level of security and encryption\u00a0and protects your internet traffic from prying eyes. This means that even though your scummy ISP might be able to surveil your proxy traffic, it won't be able to pry on your VPN traffic due to its encrypted nature.<\/li>\n<li><strong>Performance:<\/strong> A proxy may be faster than a VPN because it does not need to encrypt and decrypt data. However, with the improvements in the speed and performance of systems and networks, this difference is slowly vanishing.<\/li>\n<li><strong>Cost:<\/strong> Proxies can be free or low-cost, while VPNs can be a bit more expensive. 
This makes proxies a better option for tasks like web scraping where you might want to source thousands or millions of different IPs for making automated requests.<\/li>\n<\/ol>\n<p>As you can see, both a proxy and a VPN can be used to hide the IP address. And while a VPN provides a more secure and private connection, it may be slower and more expensive than a proxy. The best option for you will depend on your specific needs and the level of security required.<\/p>"},{"title":"What is a web crawler used for?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/what-is-a-web-crawler-used-for\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/what-is-a-web-crawler-used-for\/","description":"<p>A web crawler is a &quot;bot&quot; generally used by search engines to look for new websites, download their data, and index it. They power most of the popular search engines like Google, Yahoo!, and Bing. These bots are called crawlers as this is the technical term to define what they do which is automatically opening a website and obtaining its data.<\/p>\n<p>You can learn more about web crawlers from <a href=\"https:\/\/en.wikipedia.org\/wiki\/Web_crawler\" target=\"_blank\" >Wikipedia<\/a>.<\/p>"},{"title":"Which is better Scrapy or BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/which-is-better-scrapy-or-beautifulsoup\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/which-is-better-scrapy-or-beautifulsoup\/","description":"<p>It is hard to say whether Scrapy is better or BeautifulSoup as both of them are complementary to each other and do different things.<\/p>\n<p>Scrapy is a robust, feature-complete, extensible, and maintained web scraping framework. 
It contains advanced features like rate-limiting, proxy rotation, automated URL discovery, pause\/resume crawling functionality, remote control, and multiple output formats.<\/p>\n<p>BeautifulSoup on the other hand is simply an HTML parsing library. You can couple BeautifulSoup with Scrapy to parse HTML responses using BeautifulSoup in Scrapy callbacks. You can follow <a href=\"https:\/\/docs.scrapy.org\/en\/latest\/faq.html#faq-scrapy-bs-cmp\" target=\"_blank\" >this guide<\/a> to learn more about this.<\/p>"},{"title":"Which is faster IPv4 or IPv6?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/which-is-faster-ipv4-or-ipv6\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/proxy\/which-is-faster-ipv4-or-ipv6\/","description":"<p>In theory, IPv6 is faster than IPv4 as it uses an efficient routing algorithm and gets rid of the necessity of NAT (Network Address Translation). At the same time, it also eliminates the need for IP-level fragmentation, which is required in IPv4 networks, and has a simpler header format that reduces the processing overhead required to handle network packets. However, in practice, these speed improvements may not always be realized due to certain reasons.<\/p>"},{"title":"Why is HTTPS not used for all web traffic?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/why-is-https-not-used-for-all-web-traffic\/","pubDate":"Mon, 06 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/http\/why-is-https-not-used-for-all-web-traffic\/","description":"<p>There are several reasons why HTTPS is not used for all web traffic:<\/p>\n<ol>\n<li><strong>Cost:<\/strong> Implementing HTTPS requires an SSL or TLS certificate, which can be expensive for some organizations. Smaller websites may not have the budget to purchase and maintain a certificate. 
However, this is less of a concern now as Let's Encrypt and similar services offer free SSL certificates.<\/li>\n<li><strong>Lack of Awareness:<\/strong> Some website owners and developers may not fully understand the importance of using HTTPS, or may not realize that their website is not currently using HTTPS. However, with Google and other search engines penalizing HTTP-only websites in their search results, awareness should improve over time.<\/li>\n<li><strong>Legacy Systems:<\/strong> Some older websites and systems may not be able to support HTTPS due to technical limitations. On top of that, implementing HTTPS can be technically complex, especially for older websites that were not originally designed with security in mind. This can make the transition to HTTPS difficult and time-consuming.<\/li>\n<\/ol>\n<p>In recent years, there has been a push to increase the use of HTTPS across the web, and many browsers now display security warnings for websites that are not using HTTPS. For example, this is how an HTTP-only website shows on Google Chrome:<\/p>"},{"title":"Are Python requests deprecated?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/are-python-requests-deprecated\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/are-python-requests-deprecated\/","description":"<p><a href=\"https:\/\/github.com\/psf\/requests\" target=\"_blank\" >Requests<\/a> is an HTTP library for Python-based programs. It is under active development and not deprecated at all. At the time of writing, the latest release of Requests was from January 2023. Around 1.8 million+ repositories <a href=\"https:\/\/github.com\/psf\/requests\/network\/dependents?package_id=UGFja2FnZS01NzA4OTExNg%3D%3D\" target=\"_blank\" >depend on this project<\/a> so the chances of Requests being deprecated are very slim. 
Its maintenance and further development falls under the umbrella of the Python Software Foundation. There are alternatives like the <a href=\"https:\/\/github.com\/encode\/httpx\/\" target=\"_blank\" >httpx<\/a> project but their existence does not mean that the original Requests project is dead or deprecated.<\/p>"},{"title":"Is requests a built-in Python library?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/is-requests-a-built-in-python-library\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/is-requests-a-built-in-python-library\/","description":"<p><a href=\"https:\/\/github.com\/psf\/requests\" target=\"_blank\" >Requests<\/a> is not a built-in Python library. It is available on PyPI and can be installed using the typical PIP command like this:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>$ python -m pip install requests\n<\/span><\/span><\/code><\/pre><\/div><p>It officially supports Python 3.7+ so you need to make sure your project is either using Python 3.7 or above in order to use Requests.<\/p>\n<p>You can learn more about this library on the <a href=\"https:\/\/requests.readthedocs.io\/\" target=\"_blank\" >official website<\/a> or <a href=\"https:\/\/github.com\/psf\/requests\" target=\"_blank\" >GitHub page<\/a>.<\/p>"},{"title":"What is Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/what-is-puppeteer\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/what-is-puppeteer\/","description":"<p>Puppeteer is a browser automation library developed by the Chrome Dev Tools team.<\/p>\n<p>Simply put, it is a tool that allows you to control your web browser with NodeJS 
scripts.<\/p>\n<p>In more technical terms it supports automating Chrome\/Chromium over the non-standard DevTools Protocol.<\/p>\n<p>There is experimental Firefox support as well.<\/p>\n<p>You can do almost anything with Puppeteer that you normally do manually. According to the <a href=\"https:\/\/pptr.dev\/\" target=\"_blank\" >official website<\/a>, this list of possible actions includes:<\/p>"},{"title":"What is requests used for in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/what-is-requests-used-for-in-python\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/what-is-requests-used-for-in-python\/","description":"<p><a href=\"https:\/\/github.com\/psf\/requests\" target=\"_blank\" >Requests<\/a> is an HTTP library for Python-based programs. It is one of the most downloaded Python packages. It provides a nice API for making HTTP requests.<\/p>\n<p>Requests is popular because it is very simple to use compared to HTTP libraries like <a href=\"https:\/\/docs.python.org\/3\/library\/urllib.html\" target=\"_blank\" >urllib<\/a> and <a href=\"https:\/\/docs.python.org\/3\/library\/urllib.request.html\" target=\"_blank\" >urllib2<\/a>.<\/p>\n<p>You can use it to make GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH requests. 
It also supports HTTP Basic\/Digest Authentication, Cookies, Redirects, and more.<\/p>\n<p>While being the most popular by far, Requests lacks some modern features that other HTTP libraries like httpx have, such as async and HTTP\/2 support.<\/p>"},{"title":"Which is better Playwright or Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/which-is-better-playwright-or-puppeteer\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/which-is-better-playwright-or-puppeteer\/","description":"<p>Playwright and Puppeteer are both browser automation tools and libraries. They are both mature and contain all the necessary features for browser automation. There is no clear answer as to which library you should use. However, there are a few significant differences between both that might help you decide which one might suit you better.<\/p>\n<h2 id=\"puppeteer\">Puppeteer<\/h2>\n<ul>\n<li>Developed by Chrome Dev Team in 2017<\/li>\n<li>Puppeteer officially only supports Javascript. There is an unofficial port <a href=\"https:\/\/github.com\/pyppeteer\/pyppeteer\" target=\"_blank\" >in Python<\/a> but that's it.<\/li>\n<li>Fully supports Chromium along with experimental Firefox support<\/li>\n<\/ul>\n<h2 id=\"playwright\">Playwright<\/h2>\n<ul>\n<li>Developed by Microsoft and released in 2020<\/li>\n<li>Supports Golang, Python, Java, JavaScript, and C#<\/li>\n<li>Supports Chromium, Firefox, and WebKit<\/li>\n<\/ul>\n<p>Playwright is more recent so there is a smaller community as compared to Puppeteer. 
Look at this NPM popularity graph to decide which one is more popular:<\/p>"},{"title":"Who owns Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/who-owns-puppeteer\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/who-owns-puppeteer\/","description":"<p>Puppeteer is owned by Google and is developed as an open-source project with contributions from developers from all over the world. Puppeteer was initially developed and released by the Chrome DevTools team in 2017 and the current development takes place <a href=\"https:\/\/github.com\/puppeteer\/puppeteer\" target=\"_blank\" >on GitHub<\/a>. Most of the individual contributors are not affiliated with Google. However, the project still falls under Google's umbrella and the contributors have to sign a one-time Contributor License Agreement before they can contribute.<\/p>"},{"title":"Why do we need Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/why-do-we-need-puppeteer\/","pubDate":"Fri, 03 Feb 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/why-do-we-need-puppeteer\/","description":"<p>Puppeteer is a Node.js library used for automating web page interactions. It provides a high-level API to control Chrome or Chromium-based browsers, enabling developers to automate browser tasks, generate screenshots and PDFs, crawl web pages, and perform end-to-end testing. 
This library becomes extremely useful when doing web scraping as it allows you to execute website JavaScript and even hide the fact that you are using a browser automation library via <a href=\"https:\/\/www.npmjs.com\/package\/puppeteer-extra-plugin-stealth\" target=\"_blank\" >puppeteer-extra-plugin-stealth<\/a> and similar plugins.<\/p>"},{"title":"403 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/403-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/403-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 403 status code refers to the Forbidden response status. It is thrown by the server when it recognizes the request as being valid but is not willing to fulfil it. It might be caused by a lack of proper headers in your request so make sure you are passing all the required CORS\/JWT\/Authentication headers that the server is expecting.<\/p>\n<p>However, if the website is normally accessible and sending proper headers is still not making it work, your requests might be getting recognized by the server as being automated. In such a scenario, make sure you are using <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> or a similar tool and pair it up with proxies from a reliable proxy provider like ScrapingBee. Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of not getting blocked. 
This should help solve the issue.<\/p>"},{"title":"429 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/429-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/429-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 429 status code refers to the <code>Too Many Requests<\/code> error. It might be thrown by the server if the user has made excessive requests in a short amount of time and the server is using rate-limiting. The best way to avoid this error is to do either of these two things:<\/p>\n<ol>\n<li>Throttle your requests. Make sure you are making only a few requests in a given timeframe so as not to hit the rate-limit<\/li>\n<li>Distribute your requests across proxies so that they all go from different IPs and don't trigger the rate-limit<\/li>\n<\/ol>\n<p>For the second option, you can use ScrapingBee's reliable proxies to make sure they aren't part of any blocklist. Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of not getting blocked.<\/p>"},{"title":"444 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/444-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/444-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 444 status code is thrown when a website unexpectedly closes the connection without sending any response to the client. It is an unofficial code and specific to NGINX. There are multiple reasons why NGINX might throw this error. It might occur when the server has identified your requests to be automated. 
The best way to avoid it is to make every effort to conceal your automated requests and make them resemble a regular user's browsing pattern. You can use <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> and pair it up with proxies from a reliable proxy provider like ScrapingBee. Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of not getting blocked.<\/p>"},{"title":"499 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/499-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/499-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 499 status code refers to &quot;client closed request&quot; error. This is a client-side code where the client did not wait long enough for the server to respond. It generally occurs in reverse proxy setups where NGINX is acting as a reverse proxy for a UWSGI or similar upstream server and did not wait long enough for the server to return the response.<\/p>\n<p>If the website is working fine under normal settings then the chances are that your requests might be getting identified as being automated. In such a scenario, make sure you are using <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> or a similar tool and pairing it up with proxies from a reliable proxy provider like ScrapingBee. Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of not getting blocked. 
This should help solve the issue.<\/p>"},{"title":"503 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/503-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/503-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 503 status code refers to the Service Unavailable error. This might be thrown by a web server when it is not ready to serve any requests at the moment. This status code also means that there aren't any issues with the server but it is just not ready to serve your request. It might be caused by resource exhaustion or the server being down for maintenance.<\/p>\n<p>You can solve this error by figuring out if the server is actually down for maintenance or whether it is just not responding specifically to your requests. If it is the former, then waiting for a while before trying again might solve the issue. However, if it is the latter, make sure you are using <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> or a similar tool and pairing it up with proxies from a reliable proxy provider like ScrapingBee. Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of getting around the 503 error. This should help solve the issue.<\/p>"},{"title":"520 status code - what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/520-status-code-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/520-status-code-what-it-is-and-how-to-avoid-it\/","description":"<p>A 520 status code is related to Cloudflare. 
It is used by Cloudflare as a catch-all response for when the origin server sends something unexpected. It might be caused by some technical issues on the website. However, it can also be caused if your requests do not contain the required data that the website is expecting. So make sure that you are including all the required headers (CORS, Referrer, Auth) in your requests.<\/p>"},{"title":"Cloudflare Error 1009: what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1009-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1009-what-it-is-and-how-to-avoid-it\/","description":"<p>Cloudflare Error 1009 refers to the Access Denied: Country or region banned error. It is thrown by Cloudflare when the website owner has banned the country or region where your IP address is originating from.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/cloudflare-error-1009.png\" alt=\"Cloudflare Error 1009\"><\/p>\n<p>The only way to get around these errors is to use a reliable premium proxy provider like ScrapingBee that lets you manually select the proxy region as well. This way you can continue web scraping from a country or region that is not banned by the website. This should help you bypass the 1009 error.<\/p>"},{"title":"Cloudflare Error 1010: what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1010-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1010-what-it-is-and-how-to-avoid-it\/","description":"<p>Cloudflare Error 1010 means that the owner of the website has banned your access based on your browser's signature. 
This can happen when you are trying to scrape a website using automated tools like Selenium, Puppeteer, or Playwright. These tools are very easy to fingerprint using Javascript.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/cloudflare-error-1010.png\" alt=\"Cloudflare Error 1010\"><\/p>\n<p>You can get around this error in two ways. One is to use tools like <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> which can not easily be fingerprinted. And another is to use web scraping APIs by companies like ScrapingBee. We use anti-fingerprinting browsers for web scraping. This makes sure our scrapers are not easily fingerprinted and banned by websites.<\/p>"},{"title":"Cloudflare Error 1015: what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1015-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1015-what-it-is-and-how-to-avoid-it\/","description":"<p>Cloudflare Error 1015 refers to the rate limiting error. It is thrown by Cloudflare when the website owner has implemented a rate limit for requests and you are violating that rate limit. This can happen when you are sending a ton of requests in a very short amount of time.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/cloudflare-error-1015.png\" alt=\"Cloudflare Error 1015\"><\/p>\n<p>You can get around this error in two ways. One is to throttle your requests. Make sure you are only sending a limited number of requests in a given time. Another way to get around this error is to use a reliable premium proxy provider like ScrapingBee. ScrapingBee makes sure to rotate the proxies so no one proxy triggers the rate limiting. 
This should help you bypass the Cloudflare 1015 error.<\/p>"},{"title":"Cloudflare Error 1020: what is it and how to avoid it?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1020-what-it-is-and-how-to-avoid-it\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-error-1020-what-it-is-and-how-to-avoid-it\/","description":"<p>Cloudflare Error 1020 refers to the Access Denied error. It is thrown by Cloudflare when you violate a firewall rule set up by the Cloudflare-protected website. This violation can occur due to various reasons including sending too many requests to the website.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/cloudflare-error-1020.png\" alt=\"Cloudflare Error 1020\"><\/p>\n<p>If the website is working fine without using automated tools then you need to improve your web scraping techniques. You can hide your automated requests by making use of <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> or a similar tool and pairing it up with premium proxies from a reliable proxy provider like ScrapingBee. Or better yet, use ScrapingBee's APIs and let us handle the task of not getting blocked. This should help you avoid the 1020 error.<\/p>"},{"title":"Cloudflare Errors 1006, 1007, 1008: how to avoid them?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-errors-1006-1007-1008-how-to-avoid-them\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/cloudflare-errors-1006-1007-1008-how-to-avoid-them\/","description":"<p>Cloudflare Errors 1006, 1007, and 1008 refer to Access Denied errors. They vary only slightly from each other. They are thrown by Cloudflare when your IP address has been banned. 
This generally occurs when a Cloudflare customer (the website you are trying to scrape) bans traffic originating from your IP address. They might do this when they have identified that you are trying to scrape their website.<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/cloudflare-error-1006.png\" alt=\"Cloudflare Error 1006\"><\/p>"},{"title":"How to scrape Perimeter X: Please verify you are human?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/how-to-scrape-perimeterx-verify-you-are-a-human\/","pubDate":"Tue, 24 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-scraping-blocked\/how-to-scrape-perimeterx-verify-you-are-a-human\/","description":"<p>While web scraping, you might come across PerimeterX. It is a service that helps protect websites from automated scraping. You can recognize PerimeterX by the &quot;Press &amp; Hold&quot; and &quot;Please verify you are a human&quot; messages similar to the image below:<\/p>\n<p><img src=\"https:\/\/www.scrapingbee.com\/images\/questions\/perimeterX-error.png\" alt=\"PerimeterX\"><\/p>\n<p>PerimeterX and similar anti-scraping tools rely on JavaScript fingerprinting and similar techniques which are hard to get around by using regular scraping frameworks.<\/p>\n<p>The best way to work around PerimeterX is to make sure the server does not recognize automated requests. You can hide your automated requests by making use of <a href=\"https:\/\/github.com\/ultrafunkamsterdam\/undetected-chromedriver\" target=\"_blank\" >undetected-chromedriver<\/a> or a similar tool and pairing it up with premium proxies from a reliable proxy provider like ScrapingBee. 
Or better yet, use <a href=\"https:\/\/www.scrapingbee.com\" target=\"_blank\" >ScrapingBee's web scraping API<\/a> and let us handle the task of not getting blocked.<\/p>"},{"title":"How to download a file using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/download-file-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/download-file-curl\/","description":"<p>To download a file using cURL you simply need to make a GET request (default behavior) and to specify the -o (output) command line option so that the response is written to a file. Here is a sample command that downloads a file from our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl https:\/\/httpbin.scrapingbee.com\/images\/png <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -o image.png\n<\/span><\/span><\/code><\/pre><\/div><p>Here we ask for cURL to fetch a png image and write the result inside a file named <code>image.png<\/code>.<\/p>"},{"title":"How to follow redirect using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/follow-redirect-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/follow-redirect-curl\/","description":"<p>To follow redirect using cURL you need to use the -L option. 
Here is a sample command that sends a <code>GET<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a> and follows the redirect:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl -L https:\/\/httpbin.scrapingbee.com\/redirect-to?url<span style=\"color:#f92672\">=<\/span>https:\/\/httpbin.scrapingbee.com\/headers?json\n<\/span><\/span><\/code><\/pre><\/div><p>Here we ask cURL to follow the redirection, and the URL we hit redirects us to the <code>headers<\/code> endpoint. The response will be:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-json\" data-lang=\"json\"><span style=\"display:flex;\"><span>{\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;headers&#34;<\/span>: {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;Host&#34;<\/span>: <span style=\"color:#e6db74\">&#34;httpbin.scrapingbee.com&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;User-Agent&#34;<\/span>: <span style=\"color:#e6db74\">&#34;curl\/7.86.0&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;Accept&#34;<\/span>: <span style=\"color:#e6db74\">&#34;*\/*&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> }\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><\/code><\/pre><\/div><p>Now, if we remove the -L option, cURL no longer follows the redirection and the response will be:<\/p>"},{"title":"How to get file type of a URL in 
Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/how-to-get-file-type-of-url-in-python\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/how-to-get-file-type-of-url-in-python\/","description":"<p>You can get the file type of a URL in Python via two different methods.<\/p>\n<ol>\n<li>Use the <code>mimetypes<\/code> module<\/li>\n<\/ol>\n<p>The <code>mimetypes<\/code> module comes with Python by default and can infer the file type from the URL. This relies on the file extension being present in the URL. Here is some sample code:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> mimetypes\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>mimetypes<span style=\"color:#f92672\">.<\/span>guess_type(<span style=\"color:#e6db74\">&#34;http:\/\/example.com\/file.pdf&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Output: (&#39;application\/pdf&#39;, None)<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>mimetypes<span style=\"color:#f92672\">.<\/span>guess_type(<span style=\"color:#e6db74\">&#34;http:\/\/example.com\/file&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Output: (None, None)<\/span>\n<\/span><\/span><\/code><\/pre><\/div><ol start=\"2\">\n<li>Perform a HEAD request to the URL and investigate the response headers<\/li>\n<\/ol>\n<p>A HEAD request does not download the whole response but rather makes a short request to a URL to get some metadata. 
An important piece of information that it provides is the <code>Content-Type<\/code> of the response. This can give you a very good idea of the file type of a URL. Here is some sample code for making a HEAD request and figuring out the file type:<\/p>"},{"title":"How to get JSON with cURL ?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/get-json-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/get-json-curl\/","description":"<p>You can get JSON with cURL by sending a GET request with the -H &quot;Accept: application\/json&quot; option. Here is a sample command that sends a GET request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/anything?json\" target=\"_blank\" >HTTPBin<\/a> and returns the response in JSON format:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl https:\/\/httpbin.scrapingbee.com\/anything?json <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -H <span style=\"color:#e6db74\">&#34;Accept: application\/json&#34;<\/span> \n<\/span><\/span><\/code><\/pre><\/div><p>It is quite simple because <code>GET<\/code> is the default request method used by cURL.<\/p>\n<p>Also, in many cases, you won't have to specify the <code>Accept<\/code> header because the server will return JSON by default.<\/p>"},{"title":"How to get XML with cURL ?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/get-xml-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/get-xml-curl\/","description":"<p>You can get XML with cURL by sending a GET request with the -H &quot;Accept: application\/xml&quot; option. 
Here is a sample command that sends a GET request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/xml\" target=\"_blank\" >HTTPBin<\/a> and returns the response in XML format:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl https:\/\/httpbin.scrapingbee.com\/xml <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -H <span style=\"color:#e6db74\">&#34;Accept: application\/xml&#34;<\/span> \n<\/span><\/span><\/code><\/pre><\/div><p>It is quite simple because <code>GET<\/code> is the default request method used by cURL.<\/p>\n<h2 id=\"what-is-curl\">What is cURL?<\/h2>\n<p>cURL is an open-source command-line tool used to transfer data to and from a server. It is extremely versatile and supports various protocols including HTTP, FTP, SMTP, and many others. It is generally used to test and interact with APIs, download files, and perform various other tasks involving network communication.<\/p>"},{"title":"How to ignore invalid and self-signed certificates using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/ignore-invalid-certificate-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/ignore-invalid-certificate-curl\/","description":"<p>To ignore invalid and self-signed certificates using cURL you need to use the -k option. 
Here is a sample command that sends a <code>GET<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a> with the -k option:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl -k https:\/\/httpbin.scrapingbee.com\n<\/span><\/span><\/code><\/pre><\/div><p>Be careful: ignoring invalid and self-signed certificates is a security risk and should only be done for testing purposes. In production, you should always use valid certificates, as accepting invalid ones means that you will be vulnerable to man-in-the-middle attacks.<\/p>"},{"title":"How to ignore non-HTML URLs when web crawling?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/how-to-ignore-non-html-urls-when-web-crawling\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/web-crawling\/how-to-ignore-non-html-urls-when-web-crawling\/","description":"<p>You can ignore non-HTML URLs when web crawling via two methods.<\/p>\n<ol>\n<li>Check the URL suffix for unwanted file extensions<\/li>\n<\/ol>\n<p>Here is some sample code that filters out image file URLs based on extension:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> os\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>IMAGE_EXTENSIONS <span style=\"color:#f92672\">=<\/span> [\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;mng&#39;<\/span>, <span style=\"color:#e6db74\">&#39;pct&#39;<\/span>, <span 
style=\"color:#e6db74\">&#39;bmp&#39;<\/span>, <span style=\"color:#e6db74\">&#39;gif&#39;<\/span>, <span style=\"color:#e6db74\">&#39;jpg&#39;<\/span>, \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;jpeg&#39;<\/span>, <span style=\"color:#e6db74\">&#39;png&#39;<\/span>, <span style=\"color:#e6db74\">&#39;pst&#39;<\/span>, <span style=\"color:#e6db74\">&#39;psp&#39;<\/span>, <span style=\"color:#e6db74\">&#39;tif&#39;<\/span>, \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;tiff&#39;<\/span>, <span style=\"color:#e6db74\">&#39;ai&#39;<\/span>, <span style=\"color:#e6db74\">&#39;drw&#39;<\/span>, <span style=\"color:#e6db74\">&#39;dxf&#39;<\/span>, <span style=\"color:#e6db74\">&#39;eps&#39;<\/span>, \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#e6db74\">&#39;ps&#39;<\/span>, <span style=\"color:#e6db74\">&#39;svg&#39;<\/span>, <span style=\"color:#e6db74\">&#39;cdr&#39;<\/span>, <span style=\"color:#e6db74\">&#39;ico&#39;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span>]\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>url <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com\/logo.png&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">if<\/span> os<span style=\"color:#f92672\">.<\/span>path<span style=\"color:#f92672\">.<\/span>splitext(url)[<span style=\"color:#f92672\">-<\/span><span style=\"color:#ae81ff\">1<\/span>][<span style=\"color:#ae81ff\">1<\/span>:] <span style=\"color:#f92672\">in<\/span> IMAGE_EXTENSIONS:\n<\/span><\/span><span style=\"display:flex;\"><span> print(<span style=\"color:#e6db74\">&#34;Abort the request&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">else<\/span>:\n<\/span><\/span><span 
style=\"display:flex;\"><span> print(<span style=\"color:#e6db74\">&#34;Continue the request&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div><ol start=\"2\">\n<li>Perform a HEAD request to the URL and investigate the response headers<\/li>\n<\/ol>\n<p>A HEAD request does not download the whole response but rather makes a short request to a URL to get some metadata. An important piece of information that it provides is the <code>Content-Type<\/code> of the response. This can give you a very good idea of the file type of a URL. If the HEAD request returns a non-HTML <code>Content-Type<\/code> then you can skip the complete request. Here is some sample code for making a HEAD request and figuring out the response type:<\/p>"},{"title":"How to parse dynamic CSS classes when web scraping?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/how-to-parse-dynamic-css-class-when-scraping\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/how-to-parse-dynamic-css-class-when-scraping\/","description":"<p>You can parse dynamic CSS classes using text-based XPath matching. 
Here is a short example of what HTML with dynamic CSS classes might look like:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-html\" data-lang=\"html\"><span style=\"display:flex;\"><span>&lt;<span style=\"color:#f92672\">div<\/span> <span style=\"color:#a6e22e\">class<\/span><span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;dd&#34;<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span> &lt;<span style=\"color:#f92672\">h1<\/span> <span style=\"color:#a6e22e\">class<\/span><span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;aa&#34;<\/span>&gt;Product Details&lt;\/<span style=\"color:#f92672\">h1<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span> &lt;<span style=\"color:#f92672\">div<\/span> <span style=\"color:#a6e22e\">class<\/span><span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;ffa&#34;<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span> &lt;<span style=\"color:#f92672\">div<\/span> <span style=\"color:#a6e22e\">class<\/span><span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;la&#34;<\/span>&gt;Remaining Stock&lt;\/<span style=\"color:#f92672\">div<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span> &lt;<span style=\"color:#f92672\">div<\/span> <span style=\"color:#a6e22e\">class<\/span><span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;ad&#34;<\/span>&gt;5&lt;\/<span style=\"color:#f92672\">div<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span> &lt;\/<span style=\"color:#f92672\">div<\/span>&gt;\n<\/span><\/span><span style=\"display:flex;\"><span>&lt;\/<span style=\"color:#f92672\">div<\/span>&gt;\n<\/span><\/span><\/code><\/pre><\/div><p>If you want to extract the value of the remaining stock you can target the HTML <code>div<\/code> tag that contains 
&quot;Remaining Stock&quot; and then select the sibling <code>div<\/code> that contains the stock count. You can do so using text-based XPath matching like this:<\/p>"},{"title":"How to POST JSON using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/post-json-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/post-json-curl\/","description":"<p>You can send JSON with a <code>POST<\/code> request in cURL by combining the -X POST option with the -d (data) option.<\/p>\n<p>Here is a sample command that sends a <code>POST<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a> with JSON data:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl -X POST https:\/\/httpbin.scrapingbee.com\/post <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -d <span style=\"color:#e6db74\">&#39;{&#34;name&#34;:&#34;John Doe&#34;,&#34;age&#34;:30,&#34;city&#34;:&#34;New York&#34;}&#39;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>Note that your JSON data must be enclosed in single quotes.<\/p>\n<h2 id=\"what-is-curl\">What is cURL?<\/h2>\n<p>cURL is an open-source command-line tool used to transfer data to and from a server. It is extremely versatile and supports various protocols including HTTP, FTP, SMTP, and many others. 
It is generally used to test and interact with APIs, download files, and perform various other tasks involving network communication.<\/p>"},{"title":"How to select elements by class in XPath?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-select-elements-by-class-in-xpath\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-select-elements-by-class-in-xpath\/","description":"<p>You can select elements by class in XPath by using the <code>contains(@class, &quot;class-name&quot;)<\/code> or <code>@class=&quot;class-name&quot;<\/code> expressions.<\/p>\n<p>The first expression will match any element\u00a0that contains <code>class-name<\/code>. Even if the element has additional classes defined it will still match. However, the second expression will match the elements that only have one class named <code>class-name<\/code> and no additional classes.<\/p>\n<p>Here is some Selenium XPath sample code that extracts the <code>h1<\/code> tag from the ScrapingBee website using the class name:<\/p>"},{"title":"How to select elements by text in XPath?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-select-elements-by-text-in-xpath\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-select-elements-by-text-in-xpath\/","description":"<p>Do you need to <strong>grab elements by text using XPath<\/strong>? Well, today we're going to discuss just that. Our tutorial keeps things simple: exact matches with <code>text() = '...'<\/code>, partial matches with <code>contains()<\/code>, plus <code>starts-with()<\/code> and <code>normalize-space()<\/code> to avoid whitespace-related issues. You'll learn about case sensitivity, special characters, and how text matching differs for attributes vs. inner text. 
Of course, this article also includes copy-pasteable examples for Python\/lxml and Selenium.<\/p>"},{"title":"How to send a DELETE request using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/send-delete-request-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/send-delete-request-curl\/","description":"<p>You can send a <code>DELETE<\/code> request using cURL via the following command:<\/p>\n<pre tabindex=\"0\"><code>curl -X DELETE &lt;url&gt;\n<\/code><\/pre><p>Where:<\/p>\n<ul>\n<li><code>-X<\/code> flag is used to define the request method that cURL should use. By default cURL sends a GET request.<\/li>\n<\/ul>\n<p>Replace <code>&lt;url&gt;<\/code> with the URL of the resource you want to delete. Here is a sample command that sends a <code>DELETE<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a>:<\/p>\n<pre tabindex=\"0\"><code>$ curl -X DELETE &#34;https:\/\/httpbin.scrapingbee.com\/delete&#34;\n<\/code><\/pre><h2 id=\"what-is-curl\">What is cURL?<\/h2>\n<p>cURL is an open-source command-line tool used to transfer data to and from a server. It is extremely versatile and supports various protocols including HTTP, FTP, SMTP, and many others. 
It is generally used to test and interact with APIs, download files, and perform various other tasks involving network communication.<\/p>"},{"title":"How to send a GET request using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/send-get-request-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/send-get-request-curl\/","description":"<p>You can send a <code>GET<\/code> request using cURL via the following command:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl &lt;url&gt;\n<\/span><\/span><\/code><\/pre><\/div><p>It is quite simple because <code>GET<\/code> is the default request method used by cURL.<\/p>\n<p>Replace <code>&lt;url&gt;<\/code> with the URL of the resource you want to fetch. Here is a sample command that sends a <code>GET<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a>:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>$ curl <span style=\"color:#e6db74\">&#34;https:\/\/httpbin.scrapingbee.com\/anything?json&#34;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><h2 id=\"what-is-curl\">What is cURL?<\/h2>\n<p>cURL is an open-source command-line tool used to transfer data to and from a server. It is extremely versatile and supports various protocols including HTTP, FTP, SMTP, and many others. 
It is generally used to test and interact with APIs, download files, and perform various other tasks involving network communication.<\/p>"},{"title":"How to send Basic Auth credentials using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/basic-auth-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/basic-auth-curl\/","description":"<p>To send Basic Auth credentials using cURL you need to use the -u option with &quot;login:password&quot; where &quot;login&quot; and &quot;password&quot; are your credentials.<\/p>\n<p>Here is a sample command that sends a <code>GET<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a> with Basic Auth credentials:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl https:\/\/httpbin.scrapingbee.com\/basic-auth\/login\/password <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -u <span style=\"color:#e6db74\">&#34;login:password&#34;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>With this method, the credentials are sent in plain text when used over HTTP, so it is not recommended in production.<\/p>"},{"title":"How to send HTTP header using cURL?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/http-header-curl\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/curl\/http-header-curl\/","description":"<p>To send an HTTP header using cURL you just have to use the -H command line option with the header name and value. 
Here is a sample command that sends a <code>GET<\/code> request to our hosted version of <a href=\"https:\/\/httpbin.scrapingbee.com\/\" target=\"_blank\" >HTTPBin<\/a> with a custom HTTP header:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl https:\/\/httpbin.scrapingbee.com\/headers?json <span style=\"color:#ae81ff\">\\\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#ae81ff\"><\/span> -H <span style=\"color:#e6db74\">&#34;custom-header: custom-value&#34;<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>And since this particular URL returns the headers sent to the server in JSON format, the response will be:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-json\" data-lang=\"json\"><span style=\"display:flex;\"><span>{\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;headers&#34;<\/span>: {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;Custom-Header&#34;<\/span>: <span style=\"color:#e6db74\">&#34;custom-value&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;Host&#34;<\/span>: <span style=\"color:#e6db74\">&#34;httpbin.scrapingbee.com&#34;<\/span>,\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#f92672\">&#34;User-Agent&#34;<\/span>: <span style=\"color:#e6db74\">&#34;curl\/7.86.0&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> }\n<\/span><\/span><span style=\"display:flex;\"><span>}\n<\/span><\/span><\/code><\/pre><\/div><p>You can also pass several headers by using the -H option multiple times:<\/p>"},{"title":"How to turn HTML to text in 
Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/how-to-turn-html-to-text-in-python\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/how-to-turn-html-to-text-in-python\/","description":"<p>You can easily extract text from an HTML page using any of the famous HTML parsing libraries in Python. Here is an example of extracting text using BeautifulSoup's <code>get_text()<\/code> method:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> bs4 <span style=\"color:#f92672\">import<\/span> BeautifulSoup\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>soup <span style=\"color:#f92672\">=<\/span> BeautifulSoup(<span style=\"color:#e6db74\">&#34;&#34;&#34;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;body&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;h1 class=&#34;product&#34;&gt;Product Details&lt;\/h1&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div class=&#34;details&#34;&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;Remaining Stock&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;div&gt;5&lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\"> &lt;\/div&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#e6db74\">&lt;\/body&gt;\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span 
style=\"color:#e6db74\">&#34;&#34;&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>body <span style=\"color:#f92672\">=<\/span> soup<span style=\"color:#f92672\">.<\/span>find(<span style=\"color:#e6db74\">&#39;body&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>body_text <span style=\"color:#f92672\">=<\/span> body<span style=\"color:#f92672\">.<\/span>get_text()\n<\/span><\/span><span style=\"display:flex;\"><span>print(body_text)\n<\/span><\/span><\/code><\/pre><\/div><p>It will produce the following output:<\/p>\n<pre tabindex=\"0\"><code>\nProduct Details\n\nRemaining Stock\n5\n<\/code><\/pre><p>Selenium also offers something similar. You can use the <code>.text<\/code> property of an <code>HTMLElement<\/code> to extract text from it.<\/p>"},{"title":"How to use XPath selectors in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-use-xpath-selectors-in-python\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/xpath\/how-to-use-xpath-selectors-in-python\/","description":"<p>There are multiple ways for using XPath selectors in Python. One popular option is to use <code>lxml<\/code> and <code>BeautifulSoup<\/code> and pair it with <code>requests<\/code>. 
And the second option is to use Selenium.<\/p>\n<p>Here is some sample code for using lxml, BeautifulSoup, and Requests for opening up the ScrapingBee homepage and extracting the text from <code>h1<\/code> tag using XPath:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> lxml <span style=\"color:#f92672\">import<\/span> etree\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> bs4 <span style=\"color:#f92672\">import<\/span> BeautifulSoup\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>html <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>soup <span style=\"color:#f92672\">=<\/span> BeautifulSoup(html<span style=\"color:#f92672\">.<\/span>text, <span style=\"color:#e6db74\">&#34;html.parser&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>dom <span style=\"color:#f92672\">=<\/span> etree<span style=\"color:#f92672\">.<\/span>HTML(str(soup))\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>first_h1_text <span style=\"color:#f92672\">=<\/span> dom<span style=\"color:#f92672\">.<\/span>xpath(<span style=\"color:#e6db74\">&#39;\/\/h1&#39;<\/span>)[<span style=\"color:#ae81ff\">0<\/span>]<span style=\"color:#f92672\">.<\/span>text\n<\/span><\/span><span style=\"display:flex;\"><span>print(first_h1_text)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Output: Tired 
of getting blocked while scraping the web?<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>Here is some sample code for doing the same with Selenium:<\/p>"},{"title":"Scraper doesn't see the data I see in the browser - why?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/scraper-doesnt-see-the-data-i-see\/","pubDate":"Thu, 19 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/data-parsing\/scraper-doesnt-see-the-data-i-see\/","description":"<p>This issue often shows up when you are using an HTML parser like BeautifulSoup or lxml instead of a browser engine via Selenium or Puppeteer. The data you are seeing in the browser might be getting generated via client-side JavaScript after the page load. BeautifulSoup, lxml, and similar HTML parsing libraries do not execute JavaScript.<\/p>\n<p>There are two options to solve this issue:<\/p>\n<ol>\n<li>Use a browser automation framework like Selenium or Puppeteer and execute the JavaScript before attempting data extraction<\/li>\n<li>Search for the required data in the <code>&lt;script&gt;<\/code> tags. Most of the time, the required data is hidden inside <code>&lt;script&gt;<\/code> tags as JavaScript variables and then rendered on the page after the page load<\/li>\n<\/ol>"},{"title":"How to find HTML elements by class?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-find-html-elements-by-class\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-find-html-elements-by-class\/","description":"<p>You can find HTML elements by class\u00a0in multiple ways in Python. The method you choose will depend on the library you are using. 
Some of the most famous libraries that allow selecting HTML elements by class are <code>BeautifulSoup<\/code> and <code>Selenium<\/code>.<\/p>\n<p>You can use the <code>find<\/code> or <code>find_all<\/code> methods of BeautifulSoup and pass in a <code>class_<\/code> argument to match elements with a particular class. This is what it will look like:<\/p>"},{"title":"How to fix ConnectTimeout error in Python requests?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-connecttimeout-error-in-python-requests\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-connecttimeout-error-in-python-requests\/","description":"<p><code>ConnectTimeout<\/code> occurs when the website you are trying to connect to doesn't respond to your connect request in time. You can simulate this error for a website by using a custom connect timeout in your <code>requests.get()<\/code> call:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Timeout is in seconds<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>connect_timeout <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">0.1<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>read_timeout <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">10<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>response <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com\/&#34;<\/span>, timeout<span 
style=\"color:#f92672\">=<\/span>(connect_timeout, read_timeout))\n<\/span><\/span><\/code><\/pre><\/div><p>If you are sure your IP is not being blocked by the website and the website is working fine, then you can fix this error by increasing the connect timeout value:<\/p>"},{"title":"How to fix MissingSchema error in Python requests?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-missingschema-error-in-python-requests\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-missingschema-error-in-python-requests\/","description":"<p><code>MissingSchema<\/code> occurs when you don't provide the complete URL to <code>requests<\/code>. This often means you skipped <code>http:\/\/<\/code> or <code>https:\/\/<\/code> and\/or provided a relative URL.<\/p>\n<p>You can fix this error by making use of the <code>urljoin<\/code> function from the <code>urllib.parse<\/code> library to join URLs before making a remote request. 
The solution will look something like this:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> urllib.parse <span style=\"color:#f92672\">import<\/span> urljoin\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>url <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>relative_url <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#34;\/path\/to\/resource&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>final_url <span style=\"color:#f92672\">=<\/span> urljoin(url, relative_url)\n<\/span><\/span><span style=\"display:flex;\"><span>html <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(final_url)\n<\/span><\/span><\/code><\/pre><\/div><p><code>urljoin<\/code> will merge two URLs only if the second argument is a relative path. For example, the following sample code will print <code>https:\/\/scrapingbee.com<\/code>:<\/p>"},{"title":"How to fix ReadTimeout error in Python requests?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-readtimeout-error-in-python-requests\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-readtimeout-error-in-python-requests\/","description":"<p><code>ReadTimeout<\/code> occurs when the website you are trying to connect to doesn't send back data in time. 
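Once you have confirmed the server is just slow rather than blocking you, a common pattern is to retry with a progressively longer read timeout. A minimal sketch (the `get_with_backoff` helper name is ours, purely illustrative):

```python
import requests

def get_with_backoff(url, connect_timeout=5, read_timeout=10, retries=3):
    """Retry a GET request, doubling the read timeout after each ReadTimeout."""
    for _ in range(retries):
        try:
            return requests.get(url, timeout=(connect_timeout, read_timeout))
        except requests.exceptions.ReadTimeout:
            read_timeout *= 2  # give the server more time on the next attempt
    raise requests.exceptions.ReadTimeout(
        "no response from {} after {} attempts".format(url, retries)
    )
```

Doubling rather than retrying with the same value avoids hammering a server that simply needs longer to respond.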
You can simulate this error for a website by using a custom read timeout in your <code>requests.get()<\/code> call:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Timeout is in seconds<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>connect_timeout <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">5<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>read_timeout <span style=\"color:#f92672\">=<\/span> <span style=\"color:#ae81ff\">0.1<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>response <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com\/&#34;<\/span>, timeout<span style=\"color:#f92672\">=<\/span>(connect_timeout, read_timeout))\n<\/span><\/span><\/code><\/pre><\/div><p>If you are sure your IP is not being blocked by the website and the website just needs more time before returning data, then you can fix this error by increasing the read timeout:<\/p>"},{"title":"How to fix SSLError in Python requests?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-ssl-error-in-python-requests\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-ssl-error-in-python-requests\/","description":"<p><code>SSLError<\/code> occurs when you request a remote URL that does not provide a trusted SSL certificate. 
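If you control how the request is made, you can catch the error explicitly and only then decide how to proceed. A minimal sketch (the `get_ignoring_bad_certs` helper is an illustrative name, not part of requests):

```python
import requests

def get_ignoring_bad_certs(url):
    """Fetch a URL; fall back to verify=False only if certificate validation fails."""
    try:
        return requests.get(url)
    except requests.exceptions.SSLError:
        # Only acceptable when the request carries no sensitive data
        return requests.get(url, verify=False)
```

Catching the exception first means verification stays on for every site with a valid certificate.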
The easiest way to fix this issue is to disable SSL verification for that particular web address by passing in <code>verify=False<\/code> as an argument to the method calls. Just make sure you are not sending any sensitive data in your request.<\/p>\n<p>Here is some sample code that disables SSL verification:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>response <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;https:\/\/example.com\/&#34;<\/span>, verify<span style=\"color:#f92672\">=<\/span><span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><\/code><\/pre><\/div><p>You can optionally provide a custom certificate for the website to fix this error as well. Here is some sample code for providing a custom <code>.pem<\/code> certificate file to <code>requests<\/code>:<\/p>"},{"title":"How to fix TooManyRedirects error in Python requests?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-toomanyredirects-error-in-python-requests\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/requests\/how-to-fix-toomanyredirects-error-in-python-requests\/","description":"<p><code>TooManyRedirects<\/code> error occurs when the request redirects continuously. By default, <code>requests<\/code> has a limit of 30 redirects. If it encounters more than 30 redirects in a row then it throws this error.<\/p>\n<p>Firstly, you should make sure that the website is not buggy. There aren't a lot of scenarios where more than 30 redirects make sense. 
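If a site legitimately needs more hops than the default allows, you can raise the limit on a `requests.Session` and catch the exception when it really is a loop. A minimal sketch (`get_with_redirect_limit` is an illustrative helper name):

```python
import requests

def get_with_redirect_limit(url, max_redirects=60):
    """Fetch a URL with a custom redirect cap (requests defaults to 30)."""
    session = requests.Session()
    session.max_redirects = max_redirects
    try:
        return session.get(url)
    except requests.exceptions.TooManyRedirects:
        # Still looping even at the higher cap -- likely a genuine redirect loop
        return None
```

Returning `None` here is just one choice; logging the final URL chain before giving up is often more useful for debugging.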
Maybe the website is detecting your requests as automated and intentionally sending you into a redirection loop.<\/p>"},{"title":"How to select HTML elements by text using CSS Selectors?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-select-html-elements-by-text-using-css-selectors\/","pubDate":"Wed, 18 Jan 2023 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/css_selectors\/how-to-select-html-elements-by-text-using-css-selectors\/","description":"<p>There used to be a way to select HTML elements by text using CSS Selectors by making use of <code>:contains(text)<\/code>. However, this has been deprecated for a long time and is no longer supported by the W3C standard. If you want to select an element by text, you should look into other options. Most Python libraries provide a way for you to do so.<\/p>\n<p>For instance, you can select an element by text using XPath Selectors in Selenium like this:<\/p>"},{"title":"What is the best framework for web scraping with Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/what-is-the-best-framework-for-web-scraping-with-python\/","pubDate":"Thu, 07 Jul 2022 09:10:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/what-is-the-best-framework-for-web-scraping-with-python\/","description":"<h2 id=\"scrapy\">Scrapy<\/h2>\n<p>The Scrapy framework is a robust and complete web scraping tool that allows you to:<\/p>\n<ul>\n<li>explore a whole website from a single URL (crawling)<\/li>\n<li>rate-limit the exploration to avoid getting banned<\/li>\n<li>export data in CSV, JSON, and XML<\/li>\n<li>store the data in S3, databases, etc.<\/li>\n<li>handle cookies and sessions<\/li>\n<li>use HTTP features like compression, authentication, and caching<\/li>\n<li>spoof user agents<\/li>\n<li>respect robots.txt<\/li>\n<li>restrict crawl depth<\/li>\n<li>and more<\/li>\n<\/ul>\n<p>However, this framework can be a bit hard 
to use, especially for beginners. If you want to learn this framework, check out our <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/\" >Scrapy tutorial<\/a>.<\/p>"},{"title":"Which is better for web scraping Python or JavaScript?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-is-better-for-web-scraping-python-or-javascript\/","pubDate":"Thu, 07 Jul 2022 09:10:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-is-better-for-web-scraping-python-or-javascript\/","description":"<h2 id=\"short-answer-python\">Short answer: Python!<\/h2>\n<p>Long answer: it depends.<\/p>\n<p>If you're scraping simple websites with a simple HTTP request, Python is your best bet.<\/p>\n<p>Libraries such as <code>requests<\/code> or <code>HTTPX<\/code> make it very easy to scrape websites that don't require JavaScript to work correctly. Python offers a lot of simple-to-use <a href=\"https:\/\/www.scrapingbee.com\/blog\/best-python-http-clients\/\" >HTTP clients<\/a>.<\/p>\n<p>And once you get the response, it's also very easy to <a href=\"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-beautiful-soup\/\" >parse the HTML with BeautifulSoup<\/a>, for example.<\/p>\n<p>Here is a very quick example of how simple it is to scrape a website and extract its title:<\/p>"},{"title":"Which is better Scrapy or BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-is-better-scrapy-or-beautifulsoup\/","pubDate":"Mon, 04 Jul 2022 09:10:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-is-better-scrapy-or-beautifulsoup\/","description":"<h2 id=\"scrapy\">Scrapy<\/h2>\n<p>Scrapy is a more robust, feature-complete, extensible, and actively maintained web scraping tool.<\/p>\n<p>Scrapy allows you to crawl, extract, and store a full website. 
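For comparison, the requests-plus-BeautifulSoup approach mentioned in the previous answer fits in a few lines. A minimal sketch (the `extract_title` helper is illustrative, not a library function):

```python
import requests
from bs4 import BeautifulSoup

def extract_title(html):
    """Return the text of the page's <title> tag, or None if there isn't one."""
    soup = BeautifulSoup(html, "html.parser")
    return soup.title.string if soup.title else None

# A full fetch-and-parse is just two more lines, e.g.:
# html = requests.get("https://www.scrapingbee.com").text
# print(extract_title(html))
```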
BeautifulSoup, on the other hand, only allows you to parse HTML and extract the information you're looking for.<\/p>\n<p>However, Scrapy is much harder to use, which is why we suggest you check out this tutorial showing you <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/\" >how to start with Scrapy<\/a>\u00a0if you want to use it.<\/p>"},{"title":"How do I get a title in Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-do-i-get-a-title-in-cheerio\/","pubDate":"Sat, 16 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-do-i-get-a-title-in-cheerio\/","description":"<p>You can get a title in Cheerio by using <code>title<\/code> as the selector expression and then executing the <code>text()<\/code> method. Here is some sample code that extracts and prints the title from the ScrapingBee homepage:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#a6e22e\">fetch<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/scrapingbee.com&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#a6e22e\">then<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span style=\"color:#a6e22e\">response<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">return<\/span> <span style=\"color:#a6e22e\">response<\/span>.<span 
style=\"color:#a6e22e\">text<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> })\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#a6e22e\">then<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span style=\"color:#a6e22e\">html<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Load HTML in Cheerio\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Use `title` as a selector and extract\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#75715e\">\/\/ the text using the `text()` method\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#39;title&#39;<\/span>).<span style=\"color:#a6e22e\">text<\/span>())\n<\/span><\/span><span style=\"display:flex;\"><span> })\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#66d9ef\">catch<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span style=\"color:#a6e22e\">err<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#e6db74\">&#39;Failed to fetch page: &#39;<\/span>, <span style=\"color:#a6e22e\">err<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span> 
});\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How do I get links in Cheerio?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-do-i-get-links-in-cheerio\/","pubDate":"Sat, 16 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/how-do-i-get-links-in-cheerio\/","description":"<p>You can get links in Cheerio by using the relevant selector expression and then using the <code>.attr()<\/code> method to extract the <code>href<\/code> from the nodes.<\/p>\n<p>Here is some sample code that extracts all the anchor tags from the ScrapingBee homepage and then prints the text and <code>href<\/code> from the tags in the console:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-javascript\" data-lang=\"javascript\"><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">cheerio<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">require<\/span>(<span style=\"color:#e6db74\">&#39;cheerio&#39;<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#a6e22e\">fetch<\/span>(<span style=\"color:#e6db74\">&#39;https:\/\/scrapingbee.com&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#a6e22e\">then<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span style=\"color:#a6e22e\">response<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">return<\/span> <span style=\"color:#a6e22e\">response<\/span>.<span style=\"color:#a6e22e\">text<\/span>();\n<\/span><\/span><span style=\"display:flex;\"><span> })\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#a6e22e\">then<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span 
style=\"color:#a6e22e\">html<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Load the HTML in Cheerio\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">$<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">cheerio<\/span>.<span style=\"color:#a6e22e\">load<\/span>(<span style=\"color:#a6e22e\">html<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Select all anchor tags from the page\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#66d9ef\">const<\/span> <span style=\"color:#a6e22e\">links<\/span> <span style=\"color:#f92672\">=<\/span> <span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#e6db74\">&#34;a&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Loop over all the anchor tags\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">links<\/span>.<span style=\"color:#a6e22e\">each<\/span>((<span style=\"color:#a6e22e\">index<\/span>, <span style=\"color:#a6e22e\">value<\/span>) =&gt; {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\">\/\/ Print the text from the tags and the associated href\n<\/span><\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"><\/span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">value<\/span>).<span style=\"color:#a6e22e\">text<\/span>(), <span style=\"color:#e6db74\">&#34; =&gt; &#34;<\/span>, <span 
style=\"color:#a6e22e\">$<\/span>(<span style=\"color:#a6e22e\">value<\/span>).<span style=\"color:#a6e22e\">attr<\/span>(<span style=\"color:#e6db74\">&#34;href&#34;<\/span>));\n<\/span><\/span><span style=\"display:flex;\"><span> })\n<\/span><\/span><span style=\"display:flex;\"><span> })\n<\/span><\/span><span style=\"display:flex;\"><span> .<span style=\"color:#66d9ef\">catch<\/span>(<span style=\"color:#66d9ef\">function<\/span> (<span style=\"color:#a6e22e\">err<\/span>) {\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#a6e22e\">console<\/span>.<span style=\"color:#a6e22e\">log<\/span>(<span style=\"color:#e6db74\">&#39;Failed to fetch page: &#39;<\/span>, <span style=\"color:#a6e22e\">err<\/span>);\n<\/span><\/span><span style=\"display:flex;\"><span> });\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"Is Cheerio faster than Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/is-cheerio-faster-than-puppeteer\/","pubDate":"Sat, 16 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/is-cheerio-faster-than-puppeteer\/","description":"<p>Cheerio is much faster than Puppeteer. This is because Cheerio is just a DOM parser and helps us traverse raw HTML and XML data. It does not execute any Javascript on the page. On the other hand, Puppeteer runs a full browser and executes all the Javascript, and processes all XHR requests.<\/p>\n<p>You won't be able to observe the speed difference in small projects but it compounds on large projects and becomes very apparent.<\/p>"},{"title":"What is Cheerio in JavaScript?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/what-is-cheerio-in-javascript\/","pubDate":"Sat, 16 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/cheerio\/what-is-cheerio-in-javascript\/","description":"<p>Cheerio is a fast, lean implementation of core jQuery. 
It helps in traversing the DOM using a friendly and familiar API and works both in the browser and the server. It simply parses the HTML and XML and does not execute any Javascript in the document or load any external resources. This makes Cheerio extremely fast when compared to full browser automation tools like Puppeteer and Selenium. However, if a project requires executing Javascript on the page or executing background XHR requests then Cheerio is not the right tool for the job.<\/p>"},{"title":"Can I use XPath selectors in BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/can-i-use-xpath-selectors-in-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/can-i-use-xpath-selectors-in-beautifulsoup\/","description":"<h2 id=\"what-is-xpath\">What is XPath?<\/h2>\n<p>XPath is an expression language designed to support the query or transformation of XML documents. It was defined by the W3C and can be used to navigate through elements and attributes in an XML document.<\/p>\n<h2 id=\"can-we-use-xpath-with-beautifulsoup\">Can we use XPath with BeautifulSoup?<\/h2>\n<p>Technically, no. 
But we can pair BeautifulSoup4 with the lxml Python library to achieve that.<\/p>\n<p>To install lxml, all you have to do is run this command: <code>pip install lxml<\/code>, and that's it!<\/p>"},{"title":"How long does it take to learn web scraping in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-long-does-it-take-to-learn-web-scraping-in-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-long-does-it-take-to-learn-web-scraping-in-python\/","description":"<p>Depending on your Python knowledge, and how much time you're allocating to learn this skill, it could take anywhere from two days to two years.<\/p>\n<p>Generally, it takes about one to six months to learn the fundamentals of Python, which means being able to work with variables, objects &amp; data structures, flow control (conditions &amp; loops), file I\/O, functions, classes, and basic web scraping tools such as the <code>requests<\/code> library.<\/p>"},{"title":"How to capture background requests and responses in Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-capture-background-requests\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-capture-background-requests\/","description":"<p>You can use the <code>page.on()<\/code> function to capture the requests and responses that happen in the background when a page is loaded.<\/p>\n<p>For example, to capture the background requests of ScrapingBee's home page, you can use this code:<\/p>\n<pre tabindex=\"0\"><code>const puppeteer = require(&#39;puppeteer&#39;)\ntry {\n (async () =&gt; {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n var requests = [];\n var responses = [];\n\n page.on(&#39;request&#39;, request =&gt; {\n requests.push(request);\n });\n\n 
page.on(&#39;response&#39;, response =&gt; {\n responses.push(response);\n });\n await page.goto(&#39;https:\/\/scrapingbee.com&#39;);\n await browser.close();\n console.log(requests);\n console.log(responses)\n })()\n} catch (err) {\n console.error(err);\n}\n<\/code><\/pre>"},{"title":"How to extract data from website using selenium python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-to-extract-data-from-website-using-selenium-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-to-extract-data-from-website-using-selenium-python\/","description":"<p>You can use Selenium to scrape data from specific elements of a web page. Let's take the same example from our previous post:\u00a0<a href=\"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-to-web-scrape-with-python-selenium\/\" target=\"_blank\" >How to web scrape with python selenium?<\/a><\/p>\n<p>We used this Python code (with Selenium) to wait for the content to load by adding some waiting time:<\/p>\n<pre tabindex=\"0\"><code>from selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\nimport time\n\noptions = Options()\noptions.headless = True\n\ndriver = webdriver.Chrome(options=options, executable_path=&#34;PATH_TO_CHROMEDRIVER&#34;) # Setting up the Chrome driver\ndriver.get(&#34;https:\/\/demo.scrapingbee.com\/content_loads_after_5s.html&#34;)\ntime.sleep(6) # Sleep for 6 seconds\nprint(driver.page_source)\ndriver.quit()\n<\/code><\/pre><p>And we got this result:<\/p>"},{"title":"How to find all links using BeautifulSoup and Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-all-links-using-beautifulsoup-and-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 
+0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-all-links-using-beautifulsoup-and-python\/","description":"<p>You can find all of the links (anchor <code>&lt;a&gt;<\/code> elements) on a web page by using the <code>find_all<\/code> function of BeautifulSoup4, with the tag <code>&quot;a&quot;<\/code> as a parameter for the function.<\/p>\n<p>Here's some sample code to extract all links from ScrapingBee's blog:<\/p>\n<pre tabindex=\"0\"><code>import requests\nfrom bs4 import BeautifulSoup\n\nresponse = requests.get(&#34;https:\/\/www.scrapingbee.com\/blog\/&#34;)\nsoup = BeautifulSoup(response.content, &#39;html.parser&#39;)\n\nlinks = soup.find_all(&#34;a&#34;) # Find all elements with the tag &lt;a&gt;\nfor link in links:\n print(&#34;Link:&#34;, link.get(&#34;href&#34;), &#34;Text:&#34;, link.string)\n<\/code><\/pre>"},{"title":"How to find elements by CSS selector in Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-find-elements-by-css-selector-in-puppeteer\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-find-elements-by-css-selector-in-puppeteer\/","description":"<p>You can use Puppeteer to find elements using CSS selectors with the <code>page.$()<\/code> or <code>page.$$()<\/code> functions.<\/p>\n<p><code>page.$()<\/code> returns the first occurrence of the CSS selector being used, while <code>page.$$()<\/code> returns all elements of the page that match the selector.<\/p>\n<pre tabindex=\"0\"><code>const puppeteer = require(&#39;puppeteer&#39;);\n\n(async () =&gt; {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n\n \/\/ Open Scrapingbee&#39;s website\n await page.goto(&#39;https:\/\/scrapingbee.com&#39;);\n\n \/\/ Get the first h1 element using page.$\n let first_h1 = await page.$(&#34;h1&#34;);\n\n \/\/ Get all p elements using page.$$\n let all_p_elements = 
await page.$$(&#34;p&#34;);\n\n \/\/ Get the textContent of the h1 element\n let h1_value = await page.evaluate(el =&gt; el.textContent, first_h1)\n\n \/\/ The total number of p elements on the page\n let p_total = await page.evaluate(el =&gt; el.length, all_p_elements)\n\n console.log(h1_value);\n\n console.log(p_total);\n\n \/\/ Close browser.\n await browser.close();\n})();\n<\/code><\/pre>"},{"title":"How to find elements by XPath in Puppeteer","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-find-elements-by-xpath-in-puppeteer\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-find-elements-by-xpath-in-puppeteer\/","description":"<p>You can also use Puppeteer to find elements with XPath instead of CSS selectors, by using the <code>page.$x()<\/code> function:<\/p>\n<pre tabindex=\"0\"><code>const puppeteer = require(&#39;puppeteer&#39;);\n\n(async () =&gt; {\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n\n \/\/ Open Scrapingbee&#39;s website\n await page.goto(&#39;https:\/\/scrapingbee.com&#39;);\n\n \/\/ Get the first h1 element using page.$x\n let first_h1_element = await page.$x(&#39;\/\/*[@id=&#34;content&#34;]\/div\/section[1]\/div\/div\/div[1]\/div\/h1&#39;);\n\n \/\/ Get all p elements using page.$x\n let all_p_elements = await page.$x(&#34;\/\/p&#34;);\n\n \/\/ Get the textContent of the h1 element\n let h1_value = await page.evaluate(el =&gt; el.textContent, first_h1_element[0])\n\n \/\/ The total number of p elements on the page\n let p_total = await page.evaluate(el =&gt; el.length, all_p_elements)\n\n console.log(h1_value);\n\n console.log(p_total);\n\n \/\/ Close browser.\n await browser.close();\n})();\n<\/code><\/pre>"},{"title":"How to find elements without specific attributes in 
BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-elements-without-specific-attributes-in-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-elements-without-specific-attributes-in-beautifulsoup\/","description":"<p>To find elements without a specific attribute using BeautifulSoup, we use the <code>attrs<\/code> parameter of the function <code>find<\/code>, and we specify the attribute as <code>None<\/code>.<\/p>\n<p>For example, to find the paragraph element without a class name, we set\u00a0<code>attrs={&quot;class&quot;: None}<\/code>:<\/p>\n<pre tabindex=\"0\"><code>from bs4 import BeautifulSoup\n\nhtml_content = &#39;&#39;&#39;\n&lt;p class=&#34;clean-text&#34;&gt;A very long clean paragraph&lt;\/p&gt;\n&lt;p class=&#34;dark-text&#34;&gt;A very long dark paragraph&lt;\/p&gt;\n&lt;p&gt;A very long paragraph without attribute&lt;\/p&gt;\n&lt;p class=&#34;light-text&#34;&gt;A very long light paragraph&lt;\/p&gt;\n&#39;&#39;&#39;\nsoup = BeautifulSoup(html_content, &#39;html.parser&#39;)\n\nno_class_attribute = soup.find(&#34;p&#34;, attrs={&#34;class&#34;: None})\n\nprint(no_class_attribute)\n# Output: &lt;p&gt;A very long paragraph without attribute&lt;\/p&gt;\n<\/code><\/pre>"},{"title":"How to find HTML element by class with BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-element-by-class-with-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-element-by-class-with-beautifulsoup\/","description":"<p>To extract HTML elements with a specific class name using BeautifulSoup, we use the <code>attrs<\/code> parameter of the functions <code>find<\/code> or <code>find_all<\/code>.<\/p>\n<p>For example, to extract the element that has 
<code>mb-[21px]<\/code> as a class name, we use the function <code>find<\/code> with\u00a0<code>attrs={&quot;class&quot;: &quot;mb-[21px]&quot;}<\/code> like this:<\/p>\n<pre tabindex=\"0\"><code>import requests\nfrom bs4 import BeautifulSoup\n\nresponse = requests.get(&#34;https:\/\/www.scrapingbee.com\/blog\/&#34;)\nsoup = BeautifulSoup(response.content, &#39;html.parser&#39;)\n\nh1 = soup.find(attrs={&#34;class&#34;: &#34;mb-[21px]&#34;})\nprint(h1.string)\n# Output: The ScrapingBee Blog\n<\/code><\/pre>"},{"title":"How to find HTML elements by attribute using BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-elements-by-attribute-using-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-elements-by-attribute-using-beautifulsoup\/","description":"<p>BeautifulSoup can also be used to scrape elements with custom attributes using the <code>attrs<\/code> parameter for the functions <code>find<\/code> and <code>find_all<\/code>.<\/p>\n<p>To extract elements with the attribute\u00a0<code>data-microtip-size=medium<\/code>, the tooltips in the pricing table from ScrapingBee's home page, we can set\u00a0<code>attrs={&quot;data-microtip-size&quot;: &quot;medium&quot;}<\/code>:<\/p>\n<pre tabindex=\"0\"><code>import requests\nfrom bs4 import BeautifulSoup\n\nresponse = requests.get(&#34;https:\/\/www.scrapingbee.com&#34;)\nsoup = BeautifulSoup(response.content, &#39;html.parser&#39;)\n\ntooltips = soup.find_all(&#34;button&#34;, attrs={&#34;data-microtip-size&#34;: &#34;medium&#34;})\nfor tooltip in tooltips:\n print(tooltip.get(&#34;aria-label&#34;))\n# Output: API credits are valid for one month, leftovers are not rolled-over to the next month... 
credits and concurrency.\n<\/code><\/pre>"},{"title":"How to find HTML elements by multiple tags with BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-elements-by-multiple-tags-with-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-html-elements-by-multiple-tags-with-beautifulsoup\/","description":"<p>BeautifulSoup also supports selecting elements by multiple tags. To achieve that, we use the function <code>find_all<\/code>, and we pass a list of the tags we want to extract.<\/p>\n<p>For example, to extract <code>&lt;h1&gt;<\/code> and <code>&lt;b&gt;<\/code> elements, we pass the tags as a list like this:<\/p>\n<pre tabindex=\"0\"><code>from bs4 import BeautifulSoup\n\nhtml_content = &#39;&#39;&#39;\n&lt;h1&gt;Header&lt;\/h1&gt;\n&lt;p&gt;Paragraph&lt;\/p&gt;\n&lt;span&gt;Span&lt;\/span&gt;\n&lt;b&gt;Bold&lt;\/b&gt;\n&#39;&#39;&#39;\nsoup = BeautifulSoup(html_content, &#39;html.parser&#39;)\n\nheaders_and_bold_text = soup.find_all([&#34;h1&#34;, &#34;b&#34;])\nfor element in headers_and_bold_text:\n print(element)\n# Output:\n# &lt;h1&gt;Header&lt;\/h1&gt;\n# &lt;b&gt;Bold&lt;\/b&gt;\n<\/code><\/pre>"},{"title":"How to find sibling HTML nodes using BeautifulSoup and Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-sibling-html-nodes-using-beautifulsoup-and-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-sibling-html-nodes-using-beautifulsoup-and-python\/","description":"<p>BeautifulSoup allows us to find sibling elements using 4 main functions:<\/p>\n<p>-\u00a0<code>find_previous_sibling<\/code> to find the single previous sibling<br>-\u00a0<code>find_next_sibling<\/code> to find the single next sibling<br>-\u00a0<code>find_next_siblings<\/code> to find all
the next siblings<br>-\u00a0<code>find_previous_siblings<\/code> to find all previous siblings<br><br>You can use the code below to find the previous sibling, next sibling, all next siblings and all previous siblings of the Main Paragraph element:<\/p>\n<pre tabindex=\"0\"><code>from bs4 import BeautifulSoup\n\nhtml_content = &#39;&#39;&#39;\n&lt;p&gt;First paragraph&lt;\/p&gt;\n&lt;p&gt;Second Paragraph&lt;\/p&gt;\n&lt;p id=&#34;main&#34;&gt;Main Paragraph&lt;\/p&gt;\n&lt;p&gt;Fourth Paragraph&lt;\/p&gt;\n&lt;p&gt;Fifth Paragraph&lt;\/p&gt;\n&#39;&#39;&#39;\nsoup = BeautifulSoup(html_content, &#39;html.parser&#39;)\n\nmain_element = soup.find(&#34;p&#34;, attrs={&#34;id&#34;: &#34;main&#34;})\n\n# Find the previous sibling:\nprint(main_element.find_previous_sibling())\n\n# Find the next sibling:\nprint(main_element.find_next_sibling())\n\n# Find all next siblings:\nprint(main_element.find_next_siblings())\n\n# Find all previous siblings:\nprint(main_element.find_previous_siblings())\n<\/code><\/pre>"},{"title":"How to save and load cookies in Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-save-and-load-cookies-in-puppeteer\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-save-and-load-cookies-in-puppeteer\/","description":"<p>Saving and loading cookies with Puppeteer is very straightforward: we can use the <code>page.cookies()<\/code> method to get all the cookies of a webpage, and the <code>page.setCookie()<\/code> method to load cookies into a web page:<\/p>\n<pre tabindex=\"0\"><code>const puppeteer = require(&#39;puppeteer&#39;);\n\n(async () =&gt; {\n\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n\n \/\/ Open ScrapingBee&#39;s URL\n await page.goto(&#39;http:\/\/scrapingbee.com&#39;);\n\n \/\/ Get all the page&#39;s cookies and save them to the cookies variable\n const 
cookies = await page.cookies();\n\n \/\/ Open a second website\n await page.goto(&#39;http:\/\/httpbin.org\/cookies&#39;);\n\n \/\/ Load the previously saved cookies\n await page.setCookie(...cookies);\n\n \/\/ Get the second page&#39;s cookies\n const cookiesSet = await page.cookies();\n\n console.log(JSON.stringify(cookiesSet));\n\n await browser.close();\n\n})();\n<\/code><\/pre>"},{"title":"How to scrape tables with BeautifulSoup?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-elements-without-a-specific-attribute-in-beautifulsoup\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-find-elements-without-a-specific-attribute-in-beautifulsoup\/","description":"<p>We can parse a table's content with BeautifulSoup by finding all <code>&lt;tr&gt;<\/code> elements, and finding their <code>&lt;td&gt;<\/code> or <code>&lt;th&gt;<\/code> children.<\/p>\n<p>Here is an example of how to parse <a target=\"_blank\" rel=\"noopener\" href=\"https:\/\/demo.scrapingbee.com\/table_content.html\">this demo table<\/a> using BeautifulSoup:<\/p>\n<pre tabindex=\"0\"><code>import requests\nfrom bs4 import BeautifulSoup\n\nresponse = requests.get(&#34;https:\/\/demo.scrapingbee.com\/table_content.html&#34;)\nsoup = BeautifulSoup(response.content, &#39;html.parser&#39;)\n\ndata = []\ntable = soup.find(&#39;table&#39;)\n\nrows = table.find_all(&#39;tr&#39;)\nfor row in rows:\n cols = row.find_all([&#39;td&#39;, &#39;th&#39;])\n cols = [ele.text.strip() for ele in cols]\n data.append([ele for ele in cols if ele])\nprint(data)\n<\/code><\/pre>"},{"title":"How to select values between two nodes in BeautifulSoup and Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-select-values-between-two-nodes-in-beautifulsoup-and-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 
+0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/beautifulsoup\/how-to-select-values-between-two-nodes-in-beautifulsoup-and-python\/","description":"<p>You can select elements between two nodes in BeautifulSoup by looping through the first node's next siblings and stopping when the closing node is reached:<\/p>\n<pre tabindex=\"0\"><code>from bs4 import BeautifulSoup\n\nhtml_content = &#39;&#39;&#39;\n&lt;h1&gt;Starting Header&lt;\/h1&gt;&lt;p&gt;Element 1&lt;\/p&gt;&lt;p&gt;Element 2&lt;\/p&gt;&lt;p&gt;Element 3&lt;\/p&gt;&lt;h1&gt;Ending Header&lt;\/h1&gt;\n&#39;&#39;&#39;\nsoup = BeautifulSoup(html_content, &#39;html.parser&#39;)\n\nelements = []\nfor tag in soup.find(&#34;h1&#34;).next_siblings:\n if tag.name == &#34;h1&#34;:\n break\n else:\n elements.append(tag)\n\nprint(elements)\n# Output: [&lt;p&gt;Element 1&lt;\/p&gt;, &lt;p&gt;Element 2&lt;\/p&gt;, &lt;p&gt;Element 3&lt;\/p&gt;]\n<\/code><\/pre>"},{"title":"How to take a screenshot with Puppeteer?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-take-a-screenshot-with-puppeteer\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/puppeteer\/how-to-take-a-screenshot-with-puppeteer\/","description":"<p>Taking screenshots with Puppeteer is very simple: all you have to do is set the browser's viewport, then use the <code>page.screenshot()<\/code>\u00a0method to capture it.<\/p>\n<p>Here's an example of how to take a screenshot of ScrapingBee's home page:<\/p>\n<pre tabindex=\"0\"><code>const puppeteer = require(&#39;puppeteer&#39;);\n\n(async () =&gt; {\n\n const browser = await puppeteer.launch();\n const page = await browser.newPage();\n\n \/\/ Set the viewport&#39;s width and height\n await page.setViewport({ width: 1920, height: 1080 });\n\n \/\/ Open ScrapingBee&#39;s home page\n await page.goto(&#39;https:\/\/scrapingbee.com&#39;);\n\n try {\n \/\/ Capture screenshot and save it in 
the current folder:\n await page.screenshot({ path: `.\/scrapingbee_homepage.jpg` });\n console.log(`Screenshot has been captured successfully`);\n\n } catch (err) {\n console.log(`Error: ${err.message}`);\n } finally {\n await browser.close();\n }\n})();\n<\/code><\/pre>"},{"title":"How to web scrape with python selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-to-web-scrape-with-python-selenium\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/how-to-web-scrape-with-python-selenium\/","description":"<p>Using Python with the Requests library can help you scrape data from static websites, that is, websites that have their content within the server's original HTML response. However, you will not be able to get data from websites that load information dynamically, using JavaScript that gets executed after the server's initial response. For that, we will have to use tools that allow us to mimic a typical user's behavior, like Selenium.<\/p>"},{"title":"Is Python good for web scraping?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/python-good-web-scraping\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/python-good-web-scraping\/","description":"<h2 id=\"short-answer-yes\">Short answer: Yes!<\/h2>\n<p>Python is one of the most popular programming languages in the world thanks to how easy it is to use and learn, its large community and its portability. 
This language also dominates all modern data-related fields, including data analysis, machine learning and web scraping.<\/p>\n<p>Writing a Hello World program in Python is much easier than in most other programming languages, especially C-like languages; here is how you can do that:<\/p>"},{"title":"Is web scraping good to learn?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/is-web-scraping-good-to-learn\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/is-web-scraping-good-to-learn\/","description":"<h2 id=\"yes\">Yes!<\/h2>\n<p>Web scraping is a very useful skill to have in a world that runs on data and generates more of it every second. Data is everywhere, and it is important to acquire the ability to easily extract it from online sources.<\/p>\n<p>Without web scraping knowledge, it would be very difficult to amass large amounts of data that can be used for analysis, visualization and prediction.<br>For example, without tools like <a href=\"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/python-good-web-scraping\/\" target=\"_blank\" >Requests<\/a> and BeautifulSoup, it would be very difficult to scrape Wikipedia's\u00a0<a href=\"https:\/\/en.wikipedia.org\/wiki\/S%26P_500\" target=\"_blank\" >S&amp;P500 historical data<\/a>. 
We would have to manually copy and paste each data point from each page, which is very tedious.<br><br>However, thanks to these tools, we can easily scrape the historical data in milliseconds using this code:<\/p>"},{"title":"What does Beautifulsoup do in Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/what-does-beautifulsoup-do-in-python\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/what-does-beautifulsoup-do-in-python\/","description":"<p>BeautifulSoup parses the HTML allowing you to extract information from it.<\/p>\n<p>When doing web scraping, you will usually not be interested in the HTML on the page, but in the underlying data. This is where BeautifulSoup comes into play.<\/p>\n<p>BeautifulSoup will take that HTML and turn it into the data you're interested in. Here is a quick example on how to extract the title of a webpage:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> requests\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> bs4 <span style=\"color:#f92672\">import<\/span> BeautifulSoup\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>response <span style=\"color:#f92672\">=<\/span> requests<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;https:\/\/news.ycombinator.com\/&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>soup <span style=\"color:#f92672\">=<\/span> BeautifulSoup(response<span style=\"color:#f92672\">.<\/span>content, <span style=\"color:#e6db74\">&#39;html.parser&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span 
style=\"display:flex;\"><span><span style=\"color:#75715e\"># The title tag of the page<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>print(soup<span style=\"color:#f92672\">.<\/span>title)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">&gt;<\/span> <span style=\"color:#f92672\">&lt;<\/span>title<span style=\"color:#f92672\">&gt;<\/span>Hacker News<span style=\"color:#f92672\">&lt;\/<\/span>title<span style=\"color:#f92672\">&gt;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># The title of the page as string<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>print(soup<span style=\"color:#f92672\">.<\/span>title<span style=\"color:#f92672\">.<\/span>string)\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">&gt;<\/span> Hacker News\n<\/span><\/span><\/code><\/pre><\/div><p>If you want to learn more about BeautifulSoup and how to extract links, custom attributes, siblings and more, feel free to check our <a href=\"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-beautiful-soup\/\" >BeautifulSoup tutorial<\/a>.<\/p>"},{"title":"Which Python library is used for web scraping?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-python-library-is-used-for-web-scraping\/","pubDate":"Fri, 15 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/python\/which-python-library-is-used-for-web-scraping\/","description":"<p>There are various Python libraries that can be used for web scraping, but the most popular ones are:<\/p>\n<h2 id=\"1-requests\">1. 
Requests:<\/h2>\n<p>Requests is an easy-to-use HTTP library that abstracts the complexity of making HTTP\/1.1 requests behind a simple API, so you can focus on scraping the web page and not on the request itself. This tool will allow you to fetch the HTML\/JSON contents of any page.<\/p>"},{"title":"How to block resources in Playwright and Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-block-resources-in-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-block-resources-in-playwright\/","description":"<p>You can block resources in Playwright by making use of the <code>route<\/code> method of the <code>Page<\/code> or <code>Browser<\/code> object and registering an interceptor that rejects requests based on certain parameters. For instance, you can block all remote resources of image type. You can also filter the URL and block specific URLs.<\/p>\n<p>Here is some sample code that navigates to the ScrapingBee homepage while blocking all images and all URLs containing &quot;google&quot;:<\/p>"},{"title":"How to capture background requests and responses in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-capture-background-requests-and-responses-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-capture-background-requests-and-responses-playwright\/","description":"<p>You can capture background requests and responses in Playwright by registering appropriate callback functions for the <code>request<\/code> and <code>response<\/code> events of the <code>Page<\/code> object.<\/p>\n<p>Here is some sample code that logs all requests and responses in Playwright:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" 
style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">def<\/span> <span style=\"color:#a6e22e\">intercept_request<\/span>(request):\n<\/span><\/span><span style=\"display:flex;\"><span> print(<span style=\"color:#e6db74\">&#34;requested URL:&#34;<\/span>, request<span style=\"color:#f92672\">.<\/span>url)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">def<\/span> <span style=\"color:#a6e22e\">intercept_response<\/span>(response):\n<\/span><\/span><span style=\"display:flex;\"><span> print(<span style=\"color:#e6db74\">f<\/span><span style=\"color:#e6db74\">&#34;response URL: <\/span><span style=\"color:#e6db74\">{<\/span>response<span style=\"color:#f92672\">.<\/span>url<span style=\"color:#e6db74\">}<\/span><span style=\"color:#e6db74\">, Status: <\/span><span style=\"color:#e6db74\">{<\/span>response<span style=\"color:#f92672\">.<\/span>status<span style=\"color:#e6db74\">}<\/span><span style=\"color:#e6db74\">&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span
style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Register the callbacks<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>on(<span style=\"color:#e6db74\">&#34;request&#34;<\/span>, intercept_request)\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>on(<span style=\"color:#e6db74\">&#34;response&#34;<\/span>, intercept_response)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Note:<\/strong> These callbacks only observe traffic; to modify requests and responses, use the <code>route<\/code> method instead.<\/p>"},{"title":"How to download a file with Playwright and Python?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-download-file-with-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-download-file-with-playwright\/","description":"<p>You can download a file with Playwright by targeting the file download button on the page using any <code>Locator<\/code> and clicking it. Alternatively, you can also extract the link from an anchor tag using the <code>get_attribute<\/code> method and then download the file using <code>requests<\/code>. 
This is often better, as PDFs and other downloadable files sometimes open natively in the browser instead of triggering a download on button click.<\/p>\n<p>Here is some sample code that downloads a random paper from arXiv using Playwright and <code>requests<\/code>:<\/p>"},{"title":"How to find elements by CSS selectors in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-find-elements-by-css-selectors-in-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-find-elements-by-css-selectors-in-playwright\/","description":"<p>You can find elements by CSS selectors in Playwright by using the <code>locator<\/code> method of the <code>Page<\/code> object. Playwright can automatically detect that a CSS selector is being passed in as an argument. Alternatively, you can prepend your CSS selector with <code>css=<\/code> to make sure Playwright doesn't make a wrong guess.<\/p>\n<p>Here is some sample code that prints the title of the ScrapingBee website by making use of CSS selectors:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless<span style=\"color:#f92672\">=<\/span><span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span 
style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Extract the title using CSS selector and print it<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> title <span style=\"color:#f92672\">=<\/span> page<span style=\"color:#f92672\">.<\/span>locator(<span style=\"color:#e6db74\">&#39;css=title&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> print(title<span style=\"color:#f92672\">.<\/span>text_content())\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to find elements by XPath selectors in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-find-elements-by-xpath-in-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-find-elements-by-xpath-in-playwright\/","description":"<p>You can find elements by XPath selectors in Playwright by using the <code>locator<\/code> method of the <code>Page<\/code> object. Playwright can automatically detect that an XPath is being passed as an argument. 
Alternatively, you can prepend your XPath with <code>xpath=<\/code> to make sure Playwright doesn't make a wrong guess.<\/p>\n<p>Here is some sample code that prints the title of the ScrapingBee website by making use of XPath selectors:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless<span style=\"color:#f92672\">=<\/span><span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Extract the title using XPath selector and print it<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> title <span style=\"color:#f92672\">=<\/span> page<span style=\"color:#f92672\">.<\/span>locator(<span style=\"color:#e6db74\">&#39;xpath=\/\/title&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> print(title<span 
style=\"color:#f92672\">.<\/span>text_content())\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to load local files in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-load-local-files-in-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-load-local-files-in-playwright\/","description":"<p>You can load local files in Playwright by passing in the absolute path of the file to the <code>goto<\/code> method of the <code>Page<\/code> object. Just make sure that you prepend <code>file:\/\/<\/code> to the path as well.<\/p>\n<p>Here is some sample code for opening a local file:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless<span style=\"color:#f92672\">=<\/span><span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> context <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_context()\n<\/span><\/span><span style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> context<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span 
style=\"color:#75715e\"># Open a local file<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;file:\/\/path\/to\/file.html&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Note:<\/strong> The path would look like this for Windows: <code>file:\/\/C:\/path\/to\/file.html<\/code><\/p>"},{"title":"How to run Playwright in Jupyter notebooks?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-run-playwright-in-jupyter-notebooks\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-run-playwright-in-jupyter-notebooks\/","description":"<p>You can run Playwright in Jupyter notebooks by making use of Playwright's async API. This is required because Jupyter notebooks already run an asyncio event loop, so Playwright's sync API cannot be used there.<\/p>\n<p>Here is some sample code that navigates to ScrapingBee's homepage while making use of the async API:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.async_api <span style=\"color:#f92672\">import<\/span> async_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>pw <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> async_playwright()<span style=\"color:#f92672\">.<\/span>start()\n<\/span><\/span><span style=\"display:flex;\"><span>browser <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> pw<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless <span style=\"color:#f92672\">=<\/span> <span 
style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>page <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">await<\/span> browser<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">await<\/span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com\/&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to save and load cookies in Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-save-and-load-cookies-in-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-save-and-load-cookies-in-playwright\/","description":"<p>You can save and load cookies in Playwright by making use of the <code>cookies()<\/code> and <code>add_cookies()<\/code> methods of the browser context. 
The former returns the current cookies whereas the latter helps you add new cookies and\/or overwrite the old ones.<\/p>\n<p>Here is some sample code for saving and loading the cookies in Playwright while browsing the ScrapingBee website:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> json\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> context <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_context()\n<\/span><\/span><span style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> context<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span> \n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Save the cookies<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">with<\/span> open(<span 
style=\"color:#e6db74\">&#34;cookies.json&#34;<\/span>, <span style=\"color:#e6db74\">&#34;w&#34;<\/span>) <span style=\"color:#66d9ef\">as<\/span> f:\n<\/span><\/span><span style=\"display:flex;\"><span> f<span style=\"color:#f92672\">.<\/span>write(json<span style=\"color:#f92672\">.<\/span>dumps(context<span style=\"color:#f92672\">.<\/span>cookies()))\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Load the cookies<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#66d9ef\">with<\/span> open(<span style=\"color:#e6db74\">&#34;cookies.json&#34;<\/span>, <span style=\"color:#e6db74\">&#34;r&#34;<\/span>) <span style=\"color:#66d9ef\">as<\/span> f:\n<\/span><\/span><span style=\"display:flex;\"><span> cookies <span style=\"color:#f92672\">=<\/span> json<span style=\"color:#f92672\">.<\/span>loads(f<span style=\"color:#f92672\">.<\/span>read())\n<\/span><\/span><span style=\"display:flex;\"><span> context<span style=\"color:#f92672\">.<\/span>add_cookies(cookies)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to take a screenshot with Playwright?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-take-screenshot-with-playwright\/","pubDate":"Thu, 14 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/playwright\/how-to-take-screenshot-with-playwright\/","description":"<p>You can take a screenshot with Playwright via the <code>screenshot<\/code> method of the <code>Page<\/code> object. 
You can optionally pass in the <code>full_page<\/code> boolean argument to the <code>screenshot<\/code> method to save the screenshot of the whole page.<\/p>\n<p>Here is some sample code that navigates to ScrapingBee's homepage and saves the screenshot in a <code>screenshot.png<\/code> file:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> playwright.sync_api <span style=\"color:#f92672\">import<\/span> sync_playwright\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">with<\/span> sync_playwright() <span style=\"color:#66d9ef\">as<\/span> p:\n<\/span><\/span><span style=\"display:flex;\"><span> browser <span style=\"color:#f92672\">=<\/span> p<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>launch(headless <span style=\"color:#f92672\">=<\/span> <span style=\"color:#66d9ef\">False<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> page <span style=\"color:#f92672\">=<\/span> browser<span style=\"color:#f92672\">.<\/span>new_page()\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>goto(<span style=\"color:#e6db74\">&#34;https:\/\/scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span> <span style=\"color:#75715e\"># Save the screenshot<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span> page<span style=\"color:#f92672\">.<\/span>screenshot(path<span style=\"color:#f92672\">=<\/span><span style=\"color:#e6db74\">&#34;screenshot.png&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to block image loading 
in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-block-image-loading-selenium\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-block-image-loading-selenium\/","description":"<p>You can block image loading in Selenium by passing in the custom <code>ChromeOptions<\/code> object and setting the appropriate content settings preferences.<\/p>\n<p>Here is some sample code that navigates to the ScrapingBee homepage while blocking images:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium <span style=\"color:#f92672\">import<\/span> webdriver\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium.webdriver.common.by <span style=\"color:#f92672\">import<\/span> By\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>DRIVER_PATH <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#39;\/path\/to\/chromedriver&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Block images via ChromeOptions object<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>chrome_options <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>ChromeOptions()\n<\/span><\/span><span style=\"display:flex;\"><span>prefs <span style=\"color:#f92672\">=<\/span> {<span style=\"color:#e6db74\">&#34;profile.managed_default_content_settings.images&#34;<\/span>: <span style=\"color:#ae81ff\">2<\/span>}\n<\/span><\/span><span style=\"display:flex;\"><span>chrome_options<span 
style=\"color:#f92672\">.<\/span>add_experimental_option(<span style=\"color:#e6db74\">&#34;prefs&#34;<\/span>, prefs)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Pass in custom options while creating a Chrome object<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>Chrome(options<span style=\"color:#f92672\">=<\/span>chrome_options, executable_path<span style=\"color:#f92672\">=<\/span>DRIVER_PATH)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Navigate to ScrapingBee while blocking all images<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/www.scrapingbee.com&#34;<\/span>)\n<\/span><\/span><\/code><\/pre><\/div><p>The code for Firefox looks fairly similar as well:<\/p>"},{"title":"How to get page source in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-get-page-source-selenium\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-get-page-source-selenium\/","description":"<p>You can easily get the page source in Selenium via the <code>page_source<\/code> attribute of the Selenium web driver.<\/p>\n<p>Here is some sample code for getting the page source of the ScrapingBee website:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium <span style=\"color:#f92672\">import<\/span> webdriver\n<\/span><\/span><span 
style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>DRIVER_PATH <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#39;\/path\/to\/chromedriver&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>Chrome(executable_path<span style=\"color:#f92672\">=<\/span>DRIVER_PATH)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/www.scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Print page source on screen<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>print(driver<span style=\"color:#f92672\">.<\/span>page_source)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to scroll to an element in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-scroll-to-element-selenium\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-scroll-to-element-selenium\/","description":"<p>You can scroll to an element in Selenium by making use of the <code>execute_script<\/code> method and passing in a Javascript expression to do the actual scrolling. 
You can use any supported Selenium selectors to target any <code>WebElement<\/code> and then pass that to the <code>execute_script<\/code> as an argument.<\/p>\n<p>Here is some example code that navigates to the ScrapingBee homepage and scrolls to the <code>footer<\/code> tag:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium <span style=\"color:#f92672\">import<\/span> webdriver\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium.webdriver.common.by <span style=\"color:#f92672\">import<\/span> By\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>DRIVER_PATH <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#39;\/path\/to\/chromedriver&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>Chrome(executable_path<span style=\"color:#f92672\">=<\/span>DRIVER_PATH)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/www.scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Javascript expression to scroll to a particular element<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># arguments[0] refers to the first argument that is later passed<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># in to execute_script method<\/span>\n<\/span><\/span><span 
style=\"display:flex;\"><span>js_code <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#34;arguments[0].scrollIntoView();&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># The WebElement you want to scroll to<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>element <span style=\"color:#f92672\">=<\/span> driver<span style=\"color:#f92672\">.<\/span>find_element(By<span style=\"color:#f92672\">.<\/span>TAG_NAME, <span style=\"color:#e6db74\">&#39;footer&#39;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Execute the JS script<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>execute_script(js_code, element)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to take a screenshot with Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-take-screenshot-selenium\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-take-screenshot-selenium\/","description":"<p>You can take a screenshot using the selenium web driver via the <code>save_screenshot<\/code> method.<\/p>\n<p>Here is some sample code for navigating to the ScrapingBee website and taking a screenshot of the page:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium <span style=\"color:#f92672\">import<\/span> webdriver\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>DRIVER_PATH <span style=\"color:#f92672\">=<\/span> <span 
&#39;\/path\/">
style=\"color:#e6db74\">&#39;\/path\/to\/chromedriver&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>Chrome(executable_path<span style=\"color:#f92672\">=<\/span>DRIVER_PATH)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/www.scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>screenshot_path <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#34;\/path\/to\/screenshot.png&#34;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Save the screenshot<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>save_screenshot(screenshot_path)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":"How to wait for the page to load in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-wait-for-page-load-selenium\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-wait-for-page-load-selenium\/","description":"<p>You can wait for the page to load in Selenium via multiple strategies:<\/p>\n<ul>\n<li>Explicit wait: Wait until a particular condition is met, e.g. 
a particular element becomes visible on the screen<\/li>\n<li>Implicit wait: Poll for an element for up to a set amount of time before raising an error<\/li>\n<li>Fluent wait: Similar to explicit wait but provides additional control via timeouts and polling frequency<\/li>\n<\/ul>\n<p>By default, the web driver waits for the page to load (but not for the AJAX requests initiated with the page load) and you can instruct it to explicitly wait for an element by making use of the <code>WebDriverWait<\/code> and the <code>expected_conditions<\/code> module.<\/p>"},{"title":"Selenium: chromedriver executable needs to be in PATH?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/chromedriver-executable-needs-to-be-in-path\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/chromedriver-executable-needs-to-be-in-path\/","description":"<p>You need to make sure that the <code>chromedriver<\/code> executable is available in your <code>PATH<\/code>. Otherwise, Selenium will throw this error:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span>selenium<span style=\"color:#f92672\">.<\/span>common<span style=\"color:#f92672\">.<\/span>exceptions<span style=\"color:#f92672\">.<\/span>WebDriverException: Message: <span style=\"color:#e6db74\">&#39;chromedriver&#39;<\/span> executable needs to be <span style=\"color:#f92672\">in<\/span> PATH<span style=\"color:#f92672\">.<\/span> Please see https:<span style=\"color:#f92672\">\/\/<\/span>chromedriver<span style=\"color:#f92672\">.<\/span>chromium<span style=\"color:#f92672\">.<\/span>org<span style=\"color:#f92672\">\/<\/span>home\n<\/span><\/span><\/code><\/pre><\/div><p>The best way to fix this error is to use the <code>webdriver-manager<\/code> package. 
It will make sure that you have a valid <code>chromedriver<\/code> executable in <code>PATH<\/code> and if it is not available, it will download it automatically. You can install it using PIP:<\/p>"},{"title":"Selenium: geckodriver executable needs to be in PATH?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/geckodriver-executable-needs-to-be-in-path\/","pubDate":"Tue, 12 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/geckodriver-executable-needs-to-be-in-path\/","description":"<p>You need to make sure that the <code>geckodriver<\/code> executable is available in your <code>PATH<\/code>. Otherwise, Selenium will throw this error:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span>selenium<span style=\"color:#f92672\">.<\/span>common<span style=\"color:#f92672\">.<\/span>exceptions<span style=\"color:#f92672\">.<\/span>WebDriverException: Message: <span style=\"color:#e6db74\">&#39;geckodriver&#39;<\/span> executable needs to be <span style=\"color:#f92672\">in<\/span> PATH<span style=\"color:#f92672\">.<\/span>\n<\/span><\/span><\/code><\/pre><\/div><p>The best way to fix this error is to use the <code>webdriver-manager<\/code> package. It will make sure that you have a valid <code>geckodriver<\/code> executable in <code>PATH<\/code> and if it is not available, it will download it automatically. 
You can install it using PIP:<\/p>"},{"title":"How to find elements by XPath in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-find-elements-by-xpath-selenium\/","pubDate":"Mon, 11 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-find-elements-by-xpath-selenium\/","description":"<p>You can find elements by XPath selectors in Selenium by utilizing the <code>find_element<\/code> and <code>find_elements<\/code> methods and the <code>By.XPATH<\/code> argument.<\/p>\n<p><code>find_element<\/code> returns the first occurrence of the XPath selector being used, while <code>find_elements<\/code> returns all elements of the page that match the selector. And <code>By.XPATH<\/code> simply tells Selenium to use the XPath selector matching method.<\/p>\n<p><strong>Tip:<\/strong> <code>\/\/<\/code> in XPath matches an element wherever it is on the page whereas <code>\/<\/code> matches a direct child element.<\/p>"},{"title":"How to save and load cookies in Selenium?","link":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-save-cookies-selenium\/","pubDate":"Mon, 11 Jan 2021 09:10:27 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraping-questions\/selenium\/how-to-save-cookies-selenium\/","description":"<p>You can save and load cookies in Selenium using the <code>get_cookies<\/code> method of the web driver object and the <code>pickle<\/code> library.<\/p>\n<p>Here is some sample code to save and load cookies while navigating to the ScrapingBee website.<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-python\" data-lang=\"python\"><span style=\"display:flex;\"><span><span style=\"color:#f92672\">import<\/span> pickle\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#f92672\">from<\/span> selenium <span 
style=\"color:#f92672\">import<\/span> webdriver\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>DRIVER_PATH <span style=\"color:#f92672\">=<\/span> <span style=\"color:#e6db74\">&#39;\/path\/to\/chromedriver&#39;<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver <span style=\"color:#f92672\">=<\/span> webdriver<span style=\"color:#f92672\">.<\/span>Chrome(executable_path<span style=\"color:#f92672\">=<\/span>DRIVER_PATH)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span>driver<span style=\"color:#f92672\">.<\/span>get(<span style=\"color:#e6db74\">&#34;http:\/\/www.scrapingbee.com&#34;<\/span>)\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Save cookies<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>pickle<span style=\"color:#f92672\">.<\/span>dump( driver<span style=\"color:#f92672\">.<\/span>get_cookies() , open(<span style=\"color:#e6db74\">&#34;cookies.pkl&#34;<\/span>,<span style=\"color:#e6db74\">&#34;wb&#34;<\/span>))\n<\/span><\/span><span style=\"display:flex;\"><span>\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#75715e\"># Load cookies<\/span>\n<\/span><\/span><span style=\"display:flex;\"><span>cookies <span style=\"color:#f92672\">=<\/span> pickle<span style=\"color:#f92672\">.<\/span>load(open(<span style=\"color:#e6db74\">&#34;cookies.pkl&#34;<\/span>, <span style=\"color:#e6db74\">&#34;rb&#34;<\/span>))\n<\/span><\/span><span style=\"display:flex;\"><span><span style=\"color:#66d9ef\">for<\/span> cookie <span style=\"color:#f92672\">in<\/span> cookies:\n<\/span><\/span><span style=\"display:flex;\"><span> driver<span 
style=\"color:#f92672\">.<\/span>add_cookie(cookie)\n<\/span><\/span><\/code><\/pre><\/div>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/cfml\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/cfml\/","description":{}},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/dart\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/dart\/","description":{}},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/elixir\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/elixir\/","description":"<p>If you want to learn <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-elixir\/\" >web scraping with Elixir<\/a>, check out our tutorial.<\/p>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/go\/","description":"<p>If you want to learn <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-go\/\" >web scraping with Go<\/a>, check out our tutorial.<\/p>\n<p>You will learn how to build your first web scraper with Go, and how to use the Colly library.<\/p>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/java\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/java\/","description":"<p>If you want to learn web scraping with Java, check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/java-webscraping-book\/\" >Web scraping with Java<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/introduction-to-chrome-headless\/\" >Introduction to Chrome Headless with Java<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/javascript-fetch\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/javascript-fetch\/","description":"<p>Due to the enormous advancements it has seen and the advent of the NodeJS runtime, JavaScript has emerged as one of the most well-liked and often used languages. The necessary tools are now available for JavaScript, whether it's for a web or mobile application.<\/p>\n<p>And of course, web scraping.<\/p>\n<p>To learn more about JavaScript and web scraping, check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" >Web Scraping with Node JS<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/\" >Using jQuery to Parse HTML and Extract Data<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/\" >Using the Cheerio NPM Package for Web Scraping<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/json\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/json\/","description":{}},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/matlab\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/matlab\/","description":{}},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/node-axios\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/node-axios\/","description":"<p>JavaScript has become one of the most well-liked and often used languages as a result of the significant developments it has experienced and the introduction of the NodeJS runtime. 
Whether it's for a web application or a mobile application, JavaScript now has the required capabilities available.<\/p>\n<p>And of course, web scraping.<\/p>\n<p>To learn more about JavaScript and web scraping, check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/\" >Using jQuery to Parse HTML and Extract Data<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/\" >Using the Cheerio NPM Package for Web Scraping<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" >Web Scraping with Node JS<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/node-fetch\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/node-fetch\/","description":"<p>JavaScript has evolved as one of the most popular and widely used languages as a result of tremendous developments and the introduction of the NodeJS runtime. JavaScript development tools are now available, whether for a web or mobile application.<\/p>\n<p>And of course, web scraping.<\/p>\n<p>To learn more about JavaScript and web scraping, check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/\" >Using jQuery to Parse HTML and Extract Data<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/\" >Using the Cheerio NPM Package for Web Scraping<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" >Web Scraping with Node JS<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/node-request\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/node-request\/","description":"<p>JavaScript is known for both its ease of use and its power. 
With JavaScript it is very easy to create web applications and web services.<\/p>\n<p>And of course, web scraping.<\/p>\n<p>To learn more about JavaScript and web scraping, check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/html-parsing-jquery\/\" >Using jQuery to Parse HTML and Extract Data<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/cheerio-npm\/\" >Using the Cheerio NPM Package for Web Scraping<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-javascript\/\" >Web Scraping with Node JS<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/php\/","description":"<p>We have written a full <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-php\/\" >PHP web scraping tutorial<\/a>, check it out.<\/p>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/python\/","description":"<p>Python is a versatile and trending programming language. 
It is used for web scraping, data analysis, and much more.<\/p>\n<p>If you want to learn more about web scraping in Python check out our tutorials:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-101-with-python\/\" >Web Scraping with Python<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/python-web-scraping-beautiful-soup\/\" >Web Scraping with Python and BeautifulSoup<\/a><\/li>\n<li><a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-with-scrapy\/\" >Web Scraping with Python and Scrapy<\/a><\/li>\n<\/ul>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/r\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/r\/","description":"<p>If you want to start doing web scraping with R, you can read our tutorial: <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-r\/\" >R and Web Scraping<\/a>.<\/p>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/curl-converter\/rust\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/curl-converter\/rust\/","description":"<p>If you're starting with Rust and web scraping, you can read our tutorial: <a href=\"https:\/\/www.scrapingbee.com\/blog\/web-scraping-rust\/\" >Rust and Web Scraping<\/a>.<\/p>"},{"title":{},"link":"https:\/\/www.scrapingbee.com\/landing\/dev\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/landing\/dev\/","description":"<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n\t<meta name=\"generator\" content=\"Hugo 0.88.1\" \/>\n <meta name=\"robots\" content=\"noindex, nofollow\" \/>\n <title>ScrapingBee, the best web scraping API.<\/title>\n <meta charset=\"utf-8\" \/>\n <meta name=\"description\" content=\"ScrapingBee is a Web Scraping API that handles proxies and Headless browser for you, so you can focus on extracting the data you want, and nothing else.\" \/>\n <meta name=\"viewport\" content=\"width=device-width\" 
initial-scale=\"1\" maximum-scale=\"1\" \/>\n <meta property=\"og:type\" content=\"article\" \/>\n <meta property=\"og:title\" content=\"ScrapingBee, the best web scraping API.\" \/>\n <meta property=\"og:description\" content=\"ScrapingBee is a Web Scraping API that handles proxies and Headless browser for you, so you can focus on extracting the data you want, and nothing else.\" \/>\n <meta property=\"og:type\" content=\"website\" \/>\n <meta property=\"og:image\" content=\"https:\/\/www.scrapingbee.com\/images\/cover.png\" \/>\n <meta property=\"og:url\" content=\"https:\/\/www.scrapingbee.com\/\" \/>\n <meta property=\"twitter:card\" content=\"summary_large_image\" \/>\n <meta property=\"twitter:creator\" content=\"@scrapingbee\" \/>\n <meta property=\"twitter:domain\" content=\"https:\/\/www.scrapingbee.com\" \/>\n <meta property=\"twitter:site\" content=\"@scrapingbee\" \/>\n <meta property=\"twitter:title\" content=\"ScrapingBee, the best web scraping API.\" \/>\n <meta property=\"twitter:description\" content=\"ScrapingBee is a Web Scraping API that handles proxies and Headless browser for you, so you can focus on extracting the data you want, and nothing else.\" \/>\n <meta property=\"twitter:image\" content=\"https:\/\/www.scrapingbee.com\/images\/cover.png\" \/>\n <link rel=\"alternate\" type=\"application\/rss+xml\" title=\"The ScrapingBee Blog\" href=\"https:\/\/www.scrapingbee.com\/index.xml\" \/>\n <link rel=\"icon\" type=\"image\/png\" href=\"https:\/\/www.scrapingbee.com\/\/images\/favico.png\" \/>\n <link rel=\"alternate icon\" href=\"https:\/\/www.scrapingbee.com\/\/images\/favico.svg\">\n <link rel=\"canonical\" href=\"https:\/\/www.scrapingbee.com\/\" \/>\n<style>\n *,::before,::after{box-sizing:border-box;border-width:0;border-style:solid;border-color:initial}::before,::after{--tw-content:''}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:Circular 
Std,ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,segoe ui,Roboto,helvetica neue,Arial,noto sans,sans-serif,apple color emoji,segoe ui emoji,segoe ui symbol,noto color emoji}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr[title]{-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#24292e}input:-ms-input-placeholder,textarea:-ms-input-placeholder{opacity:1;color:#24292e}input::placeholder,textarea::placeholder{opacity:1;color:#24292e}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}.container{margin-left:a
uto;margin-right:auto;max-width:1204px;padding-left:20px;padding-right:20px}@media(min-width:1024px){.container{padding-left:30px;padding-right:30px}}.btn{display:inline-flex;height:48px;align-items:center;justify-content:center;border-radius:4px;--tw-bg-opacity:1;background-color:rgb(15 15 14\/var(--tw-bg-opacity));padding-left:25px;padding-right:25px;font-size:16px;font-weight:700;--tw-text-opacity:1;color:rgb(255 255 255\/var(--tw-text-opacity));transition-property:opacity;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:150ms}.btn:hover{--tw-bg-opacity:0.9}@media(min-width:1024px){.btn{height:56px}.btn{font-size:18px}}.btn-sm{height:45px;font-size:16px}.btn-black-o{border-width:2px;--tw-border-opacity:1;border-color:rgb(15 15 14\/var(--tw-border-opacity));background-color:transparent;--tw-text-opacity:1;color:rgb(15 15 14\/var(--tw-text-opacity))}.btn-black-o:hover{--tw-bg-opacity:1;background-color:rgb(15 15 14\/var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(255 255 255\/var(--tw-text-opacity))}.btn-yellow{--tw-bg-opacity:1;background-color:rgb(255 201 31\/var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(15 15 14\/var(--tw-text-opacity))}.link{font-weight:700;-webkit-text-decoration-line:underline;text-decoration-line:underline}.link:hover{-webkit-text-decoration-line:none;text-decoration-line:none}body{font-size:18px;line-height:1.77;--tw-text-opacity:1;color:rgb(15 15 14\/var(--tw-text-opacity))}blockquote,q{margin:0;padding:0;border:0;outline:0;font-size:100%;vertical-align:baseline;background:0 0}blockquote q:not(.quote):before,blockquote 
q:not(.quote):after{content:'\"'}blockquote>ol{list-style:decimal;margin-left:30px}h1,h2,h3{font-weight:700}h1{font-size:40px;line-height:1.22}@media(min-width:1024px){h1{font-size:48px}h1{font-size:56px}}h2{font-size:36px;line-height:1.26}@media(min-width:1024px){h2{font-size:48px}}h3{font-size:30px;line-height:1.33}@media(min-width:1024px){h3{font-size:36px}}h4{font-size:22px;line-height:1.33}@media(min-width:1024px){h4{font-size:24px}}h5{font-size:20px;line-height:1.2}h6{font-size:16px;line-height:1.25}*,::before,::after{--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-transform:translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));--tw-border-opacity:1;border-color:rgb(90 113 132\/var(--tw-border-opacity));--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-brightness:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-contrast:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-grayscale:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-hue-rotate:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-invert:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-saturate:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-sepia:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-drop-shadow:var(--tw-empty,\/*!*\/ \/*!*\/);--tw-filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) 
var(--tw-drop-shadow)}.visible{visibility:visible}.static{position:static}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.sticky{position:-webkit-sticky;position:sticky}.left-[0]{left:0}.top-[80px]{top:80px}.top-[0]{top:0}.right-[0]{right:0}.-top-[3px]{top:-3px}.-right-[3px]{right:-3px}.left-[2px]{left:2px}.top-[2px]{top:2px}.top-[3px]{top:3px}.z-[101]{z-index:101}.z-[100]{z-index:100}.z-[9]{z-index:9}.z-1{z-index:1}.order-2{order:2}.order-1{order:1}.m-[0]{margin:0}.-m-[10px]{margin:-10px}.m-auto{margin:auto}.my-[4px]{margin-top:4px;margin-bottom:4px}.-mx-[15px]{margin-left:-15px;margin-right:-15px}.mx-auto{margin-left:auto;margin-right:auto}.my-[2px]{margin-top:2px;margin-bottom:2px}.mx-[2px]{margin-left:2px;margin-right:2px}.my-[10px]{margin-top:10px;margin-bottom:10px}.-mx-[20px]{margin-left:-20px;margin-right:-20px}.-mx-[11px]{margin-left:-11px;margin-right:-11px}.-mx-[21px]{margin-left:-21px;margin-right:-21px}.-mx-[12px]{margin-left:-12px;margin-right:-12px}.-mx-[50px]{margin-left:-50px;margin-right:-50px}.-mx-[9px]{margin-left:-9px;margin-right:-9px}.-mx-[30px]{margin-left:-30px;margin-right:-30px}.-mx-[23px]{margin-left:-23px;margin-right:-23px}.-mx-[10px]{margin-left:-10px;margin-right:-10px}.-my-[2px]{margin-top:-2px;margin-bottom:-2px}.-mx-[16px]{margin-left:-16px;margin-right:-16px}.-my-[20px]{margin-top:-20px;margin-bottom:-20px}.-my-[19px]{margin-top:-19px;margin-bottom:-19px}.mt-[30px]{margin-top:30px}.mb-[12px]{margin-bottom:12px}.mt-[60px]{margin-top:60px}.mb-[20px]{margin-bottom:20px}.mb-[11px]{margin-bottom:11px}.mb-[33px]{margin-bottom:33px}.mb-[17px]{margin-bottom:17px}.mb-[100px]{margin-bottom:100px}.mt-[66px]{margin-top:66px}.mb-[21px]{margin-bottom:21px}.mt-[70px]{margin-top:70px}.mr-[12px]{margin-right:12px}.mt-[20px]{margin-top:20px}.ml-[12px]{margin-left:12px}.mb-[40px]{margin-bottom:40px}.mb-[24px]{margin-bottom:24px}.mb-[8%]{margin-bottom:8%}.mb-[10px]{margin-bottom:10px}.mb-[2px]{margin-bottom:2px}.mb-[18p
x]{margin-bottom:18px}.mb-[6px]{margin-bottom:6px}.mr-[6px]{margin-right:6px}.mb-[19px]{margin-bottom:19px}.mb-[30px]{margin-bottom:30px}.mr-[9px]{margin-right:9px}.mr-[5px]{margin-right:5px}.ml-[3px]{margin-left:3px}.mr-[1px]{margin-right:1px}.ml-[10px]{margin-left:10px}.mb-[48px]{margin-bottom:48px}.mb-[5px]{margin-bottom:5px}.mb-[25px]{margin-bottom:25px}.mt-[19px]{margin-top:19px}.mr-[20px]{margin-right:20px}.mt-[12px]{margin-top:12px}.mt-[32px]{margin-top:32px}.mb-[32px]{margin-bottom:32px}.ml-[8px]{margin-left:8px}.mb-[15px]{margin-bottom:15px}.mb-[60px]{margin-bottom:60px}.mb-[13px]{margin-bottom:13px}.mb-[3px]{margin-bottom:3px}.mt-[100px]{margin-top:100px}.mb-[50px]{margin-bottom:50px}.mb-[70px]{margin-bottom:70px}.mr-[10px]{margin-right:10px}.mb-[45px]{margin-bottom:45px}.-ml-[9px]{margin-left:-9px}.-ml-[20px]{margin-left:-20px}.mb-[80px]{margin-bottom:80px}.mb-[54px]{margin-bottom:54px}.mb-[27px]{margin-bottom:27px}.mb-[35px]{margin-bottom:35px}.mb-[4px]{margin-bottom:4px}.mt-[10px]{margin-top:10px}.mb-[66px]{margin-bottom:66px}.ml-[5px]{margin-left:5px}.mt-[13px]{margin-top:13px}.mb-[8px]{margin-bottom:8px}.mb-[38px]{margin-bottom:38px}.-mr-[4px]{margin-right:-4px}.ml-[4px]{margin-left:4px}.mb-[31px]{margin-bottom:31px}.mb-[14px]{margin-bottom:14px}.-mb-px{margin-bottom:-1px}.mb-[34px]{margin-bottom:34px}.mb-[16px]{margin-bottom:16px}.ml-[20px]{margin-left:20px}.mr-[24px]{margin-right:24px}.ml-[6px]{margin-left:6px}.mb-[9px]{margin-bottom:9px}.mt-[9px]{margin-top:9px}.mb-[120px]{margin-bottom:120px}.-mr-[20px]{margin-right:-20px}.mt-[80px]{margin-top:80px}.mb-[36px]{margin-bottom:36px}.-mb-[4px]{margin-bottom:-4px}.block{display:block}.inline-block{display:inline-block}.inline{display:inline}.flex{display:flex}.inline-flex{display:inline-flex}.table{display:table}.grid{display:grid}.contents{display:contents}.hidden{display:none}.h-[12px]{height:12px}.h-screen{height:100vh}.h-[35px]{height:35px}.h-[204px]{height:204px}.h-full{height:100%}.h-auto{height:a
uto}.h-[100px]{height:100px}.h-[40px]{height:40px}.h-[32px]{height:32px}.h-[4px]{height:4px}.h-[56px]{height:56px}.h-[150px]{height:150px}.h-[58px]{height:58px}.h-[24px]{height:24px}.h-[80px]{height:80px}.h-[86px]{height:86px}.h-[30px]{height:30px}.h-[600px]{height:600px}.max-h-[832px]{max-height:832px}.min-h-[52px]{min-height:52px}.w-full{width:100%}.w-[35px]{width:35px}.w-[40px]{width:40px}.w-[100px]{width:100px}.w-[5px]\\\/12{width:41.666667%}.w-[2px]\\\/12{width:16.666667%}.w-[160px]{width:160px}.w-[182px]{width:182px}.w-[123px]{width:123px}.w-[61px]{width:61px}.w-[56px]{width:56px}.w-[195px]{width:195px}.w-[24px]{width:24px}.w-[25%]{width:25%}.w-[86px]{width:86px}.w-[36px]{width:36px}.w-[40%]{width:40%}.w-[30%]{width:30%}.w-[1px]\\\/2{width:50%}.w-[30px]{width:30px}.w-auto{width:auto}.w-[600px]{width:600px}.w-[20px]{width:20px}.min-w-[120px]{min-width:120px}.min-w-[500px]{min-width:500px}.min-w-[900px]{min-width:900px}.min-w-[222px]{min-width:222px}.min-w-full{min-width:100%}.max-w-screen-lg{max-width:1280px}.max-w-[894px]{max-width:894px}.max-w-[620px]{max-width:620px}.max-w-full{max-width:100%}.max-w-[1276px]{max-width:1276px}.max-w-[970px]{max-width:970px}.max-w-screen-xl{max-width:1440px}.max-w-[508px]{max-width:508px}.max-w-[321px]{max-width:321px}.max-w-none{max-width:none}.max-w-[1024px]{max-width:1024px}.max-w-[728px]{max-width:728px}.max-w-[1292px]{max-width:1292px}.max-w-[404px]{max-width:404px}.max-w-[1277px]{max-width:1277px}.max-w-[542px]{max-width:542px}.max-w-[1308px]{max-width:1308px}.flex-auto{flex:auto}.flex-1{flex:1}.flex-shrink-0{flex-shrink:0}.flex-grow{flex-grow:1}.grow{flex-grow:1}.basis-0{flex-basis:0}.transform{transform:var(--tw-transform)}.cursor-text{cursor:text}.cursor-pointer{cursor:pointer}.resize{resize:both}.flex-row{flex-direction:row}.flex-col{flex-direction:column}.flex-col-reverse{flex-direction:column-reverse}.flex-wrap{flex-wrap:wrap}.items-start{align-items:flex-start}.items-center{align-items:center}.justify-end{justify-
content:flex-end}.justify-center{justify-content:center}.justify-between{justify-content:space-between}.justify-around{justify-content:space-around}.gap-[10px]{gap:10px}.gap-[20px]{gap:20px}.divide-y>:not([hidden])~:not([hidden]){--tw-divide-y-reverse:0;border-top-width:calc(1px * calc(1 - var(--tw-divide-y-reverse)));border-bottom-width:calc(1px * var(--tw-divide-y-reverse))}.divide-gray-200>:not([hidden])~:not([hidden]){--tw-divide-opacity:1;border-color:rgb(90 113 132\/var(--tw-divide-opacity))}.overflow-auto{overflow:auto}.overflow-hidden{overflow:hidden}.overflow-x-auto{overflow-x:auto}.overflow-y-scroll{overflow-y:scroll}.overscroll-x-auto{overscroll-behavior-x:auto}.text-ellipsis{text-overflow:ellipsis}.whitespace-nowrap{white-space:nowrap}.rounded-sm{border-radius:.125rem}.rounded-md{border-radius:.375rem}.rounded-[8px]{border-radius:8px}.rounded-[100%]{border-radius:100%}.rounded-[4px]{border-radius:4px}.rounded{border-radius:.25rem}.rounded-xl{border-radius:.75rem}.rounded-2xl{border-radius:1rem}.rounded-full{border-radius:9999px}.rounded-[5px]{border-radius:5px}.rounded-t-md{border-top-left-radius:.375rem;border-top-right-radius:.375rem}.rounded-b-md{border-bottom-right-radius:.375rem;border-bottom-left-radius:.375rem}.rounded-t-4{border-top-left-radius:4px;border-top-right-radius:4px}.rounded-l-xl{border-top-left-radius:.75rem;border-bottom-left-radius:.75rem}.rounded-r-xl{border-top-right-radius:.75rem;border-bottom-right-radius:.75rem}.border{border-width:1px}.border-2{border-width:2px}.border-4{border-width:4px}.border-t-2{border-top-width:2px}.border-t{border-top-width:1px}.border-r{border-right-width:1px}.border-b{border-bottom-width:1px}.border-b-0{border-bottom-width:0}.border-l-2{border-left-width:2px}.border-b-2{border-bottom-width:2px}.border-l{border-left-width:1px}.border-r-2{border-right-width:2px}.border-solid{border-style:solid}.border-gray-400{--tw-border-opacity:1;border-color:rgb(36 41 
46\/var(--tw-border-opacity))}.border-black-100{--tw-border-opacity:1;border-color:rgb(15 15 14\/var(--tw-border-opacity))}.border-gray-700{--tw-border-opacity:1;border-color:rgb(228 231 236\/var(--tw-border-opacity))}.border-gray-300{--tw-border-opacity:1;border-color:rgb(179 186 197\/var(--tw-border-opacity))}.border-gray-1400{--tw-border-opacity:1;border-color:rgb(217 218 219\/var(--tw-border-opacity))}.border-white{--tw-border-opacity:1;border-color:rgb(255 255 255\/var(--tw-border-opacity))}.border-blue-200{--tw-border-opacity:1;border-color:rgb(66 84 102\/var(--tw-border-opacity))}.border-green-700{--tw-border-opacity:1;border-color:rgb(21 128 61\/var(--tw-border-opacity))}.border-yellow-100{--tw-border-opacity:1;border-color:rgb(255 201 31\/var(--tw-border-opacity))}.border-yellow-400{--tw-border-opacity:1;border-color:rgb(250 173 19\/var(--tw-border-opacity))}.border-transparent{border-color:transparent}.border-gray-600{--tw-border-opacity:1;border-color:rgb(204 204 204\/var(--tw-border-opacity))}.border-gray-100{--tw-border-opacity:1;border-color:rgb(230 236 242\/var(--tw-border-opacity))}.border-gray-200{--tw-border-opacity:1;border-color:rgb(90 113 132\/var(--tw-border-opacity))}.border-green-500{--tw-border-opacity:1;border-color:rgb(34 197 94\/var(--tw-border-opacity))}.border-opacity-20{--tw-border-opacity:0.2}.bg-yellow-100{--tw-bg-opacity:1;background-color:rgb(255 201 31\/var(--tw-bg-opacity))}.bg-white{--tw-bg-opacity:1;background-color:rgb(255 255 255\/var(--tw-bg-opacity))}.bg-gray-100{--tw-bg-opacity:1;background-color:rgb(230 236 242\/var(--tw-bg-opacity))}.bg-blue-100{--tw-bg-opacity:1;background-color:rgb(44 58 87\/var(--tw-bg-opacity))}.bg-black-100{--tw-bg-opacity:1;background-color:rgb(15 15 14\/var(--tw-bg-opacity))}.bg-green-100{--tw-bg-opacity:1;background-color:rgb(220 252 231\/var(--tw-bg-opacity))}.bg-yellow-200{--tw-bg-opacity:1;background-color:rgb(255 244 
210\/var(--tw-bg-opacity))}.bg-gray-900{--tw-bg-opacity:1;background-color:rgb(242 242 242\/var(--tw-bg-opacity))}.bg-gray-1000{--tw-bg-opacity:1;background-color:rgb(197 197 196\/var(--tw-bg-opacity))}.bg-blue-400{--tw-bg-opacity:1;background-color:rgb(27 37 56\/var(--tw-bg-opacity))}.bg-none{background-image:none}.object-cover{-o-object-fit:cover;object-fit:cover}.p-[10px]{padding:10px}.p-[0]{padding:0}.p-[1px]{padding:1px}.p-[20px]{padding:20px}.p-[60px]{padding:60px}.p-[3px]{padding:3px}.p-[5px]{padding:5px}.p-[38px]{padding:38px}.p-[15px]{padding:15px}.p-[40px]{padding:40px}.px-[37px]{padding-left:37px;padding-right:37px}.px-[4px]{padding-left:4px;padding-right:4px}.px-[15px]{padding-left:15px;padding-right:15px}.py-[6px]{padding-top:6px;padding-bottom:6px}.px-[10px]{padding-left:10px;padding-right:10px}.py-[50px]{padding-top:50px;padding-bottom:50px}.py-[5px]{padding-top:5px;padding-bottom:5px}.py-[8px]{padding-top:8px;padding-bottom:8px}.px-[6px]{padding-left:6px;padding-right:6px}.px-[20px]{padding-left:20px;padding-right:20px}.px-[11px]{padding-left:11px;padding-right:11px}.py-[20px]{padding-top:20px;padding-bottom:20px}.px-[21px]{padding-left:21px;padding-right:21px}.px-[31px]{padding-left:31px;padding-right:31px}.py-[10px]{padding-top:10px;padding-bottom:10px}.px-[12px]{padding-left:12px;padding-right:12px}.px-[55px]{padding-left:55px;padding-right:55px}.px-[50px]{padding-left:50px;padding-right:50px}.px-[9px]{padding-left:9px;padding-right:9px}.px-[39px]{padding-left:39px;padding-right:39px}.px-[30px]{padding-left:30px;padding-right:30px}.py-[70px]{padding-top:70px;padding-bottom:70px}.py-[40px]{padding-top:40px;padding-bottom:40px}.px-[23px]{padding-left:23px;padding-right:23px}.py-[15px]{padding-top:15px;padding-bottom:15px}.py-[100px]{padding-top:100px;padding-bottom:100px}.py-[4px]{padding-top:4px;padding-bottom:4px}.px-[1px]{padding-left:1px;padding-right:1px}.py-[2px]{padding-top:2px;padding-bottom:2px}.px-[24px]{padding-left:24px;padding-right:24p
x}.py-[3px]{padding-top:3px;padding-bottom:3px}.py-[16px]{padding-top:16px;padding-bottom:16px}.px-[17px]{padding-left:17px;padding-right:17px}.px-[14px]{padding-left:14px;padding-right:14px}.px-[16px]{padding-left:16px;padding-right:16px}.py-[19px]{padding-top:19px;padding-bottom:19px}.pt-[6px]{padding-top:6px}.pb-[0]{padding-bottom:0}.pt-[60px]{padding-top:60px}.pb-[2px]{padding-bottom:2px}.pb-[16px]{padding-bottom:16px}.pt-[50px]{padding-top:50px}.pb-[80px]{padding-bottom:80px}.pt-[20px]{padding-top:20px}.pt-[19px]{padding-top:19px}.pr-[16px]{padding-right:16px}.pl-[18px]{padding-left:18px}.pb-[30px]{padding-bottom:30px}.pt-[18px]{padding-top:18px}.pt-[15px]{padding-top:15px}.pt-[11px]{padding-top:11px}.pb-[48px]{padding-bottom:48px}.pt-[2px]{padding-top:2px}.pt-[47px]{padding-top:47px}.pt-[3px]{padding-top:3px}.pl-[12px]{padding-left:12px}.pl-[9px]{padding-left:9px}.pr-[2px]{padding-right:2px}.pt-[80px]{padding-top:80px}.pb-[50px]{padding-bottom:50px}.pb-[70px]{padding-bottom:70px}.pr-[20px]{padding-right:20px}.pb-[20px]{padding-bottom:20px}.pt-[31px]{padding-top:31px}.pb-[38px]{padding-bottom:38px}.pl-[8px]{padding-left:8px}.pr-[15px]{padding-right:15px}.pr-[13px]{padding-right:13px}.pt-[100px]{padding-top:100px}.pt-[70px]{padding-top:70px}.pt-[17px]{padding-top:17px}.pt-[66px]{padding-top:66px}.pb-[100px]{padding-bottom:100px}.pt-[35px]{padding-top:35px}.pl-[24px]{padding-left:24px}.pt-[52px]{padding-top:52px}.pt-[7px]{padding-top:7px}.pb-[6px]{padding-bottom:6px}.pr-[10px]{padding-right:10px}.pl-[45px]{padding-left:45px}.pt-[16px]{padding-top:16px}.pb-[19px]{padding-bottom:19px}.pt-[28px]{padding-top:28px}.pt-[30px]{padding-top:30px}.pb-[60px]{padding-bottom:60px}.pl-[35px]{padding-left:35px}.pr-[12px]{padding-right:12px}.pb-[45px]{padding-bottom:45px}.pl-[43px]{padding-left:43px}.text-left{text-align:left}.text-center{text-align:center}.text-right{text-align:right}.align-middle{vertical-align:middle}.font-menlo{font-family:menlo}.font-sans{font-family:Circul
ar Std,ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,segoe ui,Roboto,helvetica neue,Arial,noto sans,sans-serif,apple color emoji,segoe ui emoji,segoe ui symbol,noto color emoji}.text-[20px]{font-size:20px}.text-[36px]{font-size:36px}.text-[24px]{font-size:24px}.text-[15px]{font-size:15px}.text-[30px]{font-size:30px}.text-[14px]{font-size:14px}.text-[16px]{font-size:16px}.text-[18px]{font-size:18px}.text-[12px]{font-size:12px}.text-[32px]{font-size:32px}.text-[130px]{font-size:130px}.text-[40px]{font-size:40px}.text-[48px]{font-size:48px}.text-[13px]{font-size:13px}.text-[10px]{font-size:10px}.font-normal{font-weight:400}.font-bold{font-weight:700}.font-medium{font-weight:500}.uppercase{text-transform:uppercase}.lowercase{text-transform:lowercase}.capitalize{text-transform:capitalize}.not-italic{font-style:normal}.leading-[1.50]{line-height:1.5}.leading-[1.41]{line-height:1.41}.leading-[1.1429]{line-height:1.1429}.leading-[1.21]{line-height:1.21}.leading-[1.77]{line-height:1.77}.leading-[1.33]{line-height:1.33}.leading-none{line-height:1}.leading-[1.20]{line-height:1.2}.leading-[1.4]{line-height:1.4}.leading-[1.26]{line-height:1.26}.leading-2{line-height:2}.leading-[1.55]{line-height:1.55}.leading-[1.25]{line-height:1.25}.leading-[1.86]{line-height:1.86}.leading-[1.54]{line-height:1.54}.leading-[1.16]{line-height:1.16}.tracking-[0.2px]{letter-spacing:.2px}.tracking-tight{letter-spacing:-.01em}.text-green-1000{--tw-text-opacity:1;color:rgb(54 179 126\/var(--tw-text-opacity))}.text-yellow-400{--tw-text-opacity:1;color:rgb(250 173 19\/var(--tw-text-opacity))}.text-red-100{--tw-text-opacity:1;color:rgb(249 38 114\/var(--tw-text-opacity))}.text-gray-200{--tw-text-opacity:1;color:rgb(90 113 132\/var(--tw-text-opacity))}.text-blue-200{--tw-text-opacity:1;color:rgb(66 84 102\/var(--tw-text-opacity))}.text-black-100{--tw-text-opacity:1;color:rgb(15 15 14\/var(--tw-text-opacity))}.text-black-200{--tw-text-opacity:1;color:rgb(0 0 
0\/var(--tw-text-opacity))}.text-red-200{--tw-text-opacity:1;color:rgb(255 121 198\/var(--tw-text-opacity))}.text-white{--tw-text-opacity:1;color:rgb(255 255 255\/var(--tw-text-opacity))}.text-yellow-100{--tw-text-opacity:1;color:rgb(255 201 31\/var(--tw-text-opacity))}.text-gray-400{--tw-text-opacity:1;color:rgb(36 41 46\/var(--tw-text-opacity))}.text-gray-300{--tw-text-opacity:1;color:rgb(179 186 197\/var(--tw-text-opacity))}.text-gray-500{--tw-text-opacity:1;color:rgb(232 232 232\/var(--tw-text-opacity))}.text-red-300{--tw-text-opacity:1;color:rgb(233 84 50\/var(--tw-text-opacity))}.text-green-600{--tw-text-opacity:1;color:rgb(22 163 74\/var(--tw-text-opacity))}.text-gray-100{--tw-text-opacity:1;color:rgb(230 236 242\/var(--tw-text-opacity))}.text-opacity-70{--tw-text-opacity:0.7}.underline{-webkit-text-decoration-line:underline;text-decoration-line:underline}.no-underline{-webkit-text-decoration-line:none;text-decoration-line:none}.shadow-[0px 2px 20px rgba(169, 169, 169, 0.16), 0 32px 46px -27px rgba(117, 117, 117, 0.2)]{--tw-shadow:0px 2px 20px rgba(169, 169, 169, 0.16), 0 32px 46px -27px rgba(117, 117, 117, 0.2);--tw-shadow-colored:0px 2px 20px var(--tw-shadow-color), 0 32px 46px -27px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-[0 2px 8px rgba(0, 0, 0, 0.16)]{--tw-shadow:0 2px 8px rgba(0, 0, 0, 0.16);--tw-shadow-colored:0 2px 8px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.shadow-[0 23.4255px 46.8511px rgba(0, 0, 0, 0.2)]{--tw-shadow:0 23.4255px 46.8511px rgba(0, 0, 0, 0.2);--tw-shadow-colored:0 23.4255px 46.8511px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.outline{outline-style:solid}.drop-shadow-[0 23.4255px 46.8511px rgba(0, 0, 0, 0.2)]{--tw-drop-shadow:drop-shadow(0 20px 13px rgb(0 0 0 \/ 0.03)) drop-shadow(0 8px 
5px rgb(0 0 0 \/ 0.08));filter:var(--tw-filter)}.filter{filter:var(--tw-filter)}.transition-all{transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:150ms}.transition-opacity{transition-property:opacity;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:150ms}@font-face{font-family:menlo;font-weight:400;font-style:normal;src:local(\"Menlo Regular\"),local(\"menlo\"),url(\/fonts\/menlo\/Menlo-Regular.woff?e2vwe8)format('woff');font-display:swap}@font-face{font-family:circular std;font-weight:400;font-style:normal;src:local(\"Circular Std Book\"),local(\"Circular Std\"),url(\/fonts\/circularStd\/CircularStd-Book.woff?e2vwe8)format('woff'),url(\/fonts\/circularStd\/CircularStd-Book.woff2?e2vwe8)format('woff');font-display:swap}@font-face{font-family:circular std;font-weight:500;font-style:normal;src:local(\"Circular Std Medium\"),local(\"Circular Std\"),url(\/fonts\/circularStd\/CircularStd-Medium.woff?e2vwe8)format('woff'),url(\/fonts\/circularStd\/CircularStd-Medium.woff2?e2vwe8)format('woff');font-display:swap}@font-face{font-family:circular std;font-weight:700;font-style:normal;src:local(\"Circular Std Bold\"),local(\"Circular Std\"),url(\/fonts\/circularStd\/CircularStd-Bold.woff?e2vwe8)format('woff'),url(\/fonts\/circularStd\/CircularStd-Bold.woff2?e2vwe8)format('woff');font-display:swap}@font-face{font-family:circular std;font-weight:900;font-style:normal;src:local(\"Circular Std Black\"),local(\"Circular Std\"),url(\/fonts\/circularStd\/CircularStd-Black.woff?e2vwe8)format('woff'),url(\/fonts\/circularStd\/CircularStd-Black.woff2?e2vwe8)format('woff');font-display:swap}input:focus,textarea:focus,select:focus{outline:none!important;outline-offset:0!important;box-shadow:none!important}select{-webkit-appearance:none}.qa 
a{font-weight:700;-webkit-text-decoration-line:underline;text-decoration-line:underline}.bg-skew-yellow-b{background:#ffc91f}.bg-skew-white-b{background:#fff}.bg-skew-black{background:#0f0f0e}.bg-skew-black-t{background:#0f0f0e}.bg-skew-black-alt{background:#0f0f0e}.underline-yellow{background:#ffc91f}@media(min-width:768px){.bg-skew-yellow-b::after{content:'';position:absolute;bottom:0;left:-50%;right:-50%;top:-50%;transform:rotate(-8deg)skew(-8deg);background:#ffc91f;z-index:-1}.bg-skew-white-b:after{content:'';position:absolute;bottom:61px;left:-50%;right:-50%;top:-50%;transform:rotate(-8deg)skew(-8deg);background:#fff;z-index:-1}.bg-skew-black::after{content:'';position:absolute;bottom:0;left:-50%;right:-50%;top:0;transform:rotate(-8deg)skew(-8deg);background:#0f0f0e;z-index:-1}.bg-skew-black-t:after{content:'';position:absolute;bottom:-50%;left:-50%;right:-50%;top:0;transform:rotate(-8deg)skew(-8deg);background:#0f0f0e;z-index:-1}.bg-skew-black-alt::after{content:'';position:absolute;bottom:0;left:-50%;right:-50%;top:0;transform:rotate(8deg)skew(8deg);background:#0f0f0e;z-index:-1}.underline-yellow::after{content:'';background:#ffc91f;height:6px;left:0;right:0;bottom:0;position:absolute}.bg-skew-yellow-b,.bg-skew-white-b,.bg-skew-black,.bg-skew-black-t,.bg-skew-black-alt,.underline-yellow{background:0 0}}.price-plan.recommended{--tw-bg-opacity:1;background-color:rgb(15 15 14\/var(--tw-bg-opacity));--tw-text-opacity:1;color:rgb(255 255 255\/var(--tw-text-opacity))}.price-plan.recommended .txt-hidden{display:block}.price-plan.recommended .border-gray-600{border-color:transparent}.price-plan.recommended .btn-black-o{--tw-border-opacity:1;border-color:rgb(255 201 31\/var(--tw-border-opacity));--tw-bg-opacity:1;background-color:rgb(255 201 31\/var(--tw-bg-opacity));transition-property:all;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:150ms}.price-plan.recommended .btn-black-o:hover{--tw-text-opacity:1;color:rgb(15 15 
14\/var(--tw-text-opacity))}\n <\/style>\n<link rel=\"preload\" href=\"https:\/\/www.scrapingbee.com\/main.min.a09f1f7d5c32eba3a323bc3c39fca98dc62a83bad52faf6e0c62e7c5285cab6a.css\" as=\"style\" onload=\"this.onload=null;this.rel='stylesheet'\">\n<noscript><link rel=\"stylesheet\" href=\"https:\/\/www.scrapingbee.com\/main.min.a09f1f7d5c32eba3a323bc3c39fca98dc62a83bad52faf6e0c62e7c5285cab6a.css\"><\/noscript>\n<script type=\"application\/ld+json\">\n {\n \"@context\": \"http:\/\/schema.org\",\n \"@type\": \"WebSite\",\n \"name\": \"ScrapingBee, the best web scraping API.\",\n \"url\": \"https:\/\/www.scrapingbee.com\/\",\n \"description\": \"ScrapingBee is a Web Scraping API that handles proxies and Headless browser for you, so you can focus on extracting the data you want, and nothing else.\",\n \"thumbnailUrl\": \"https:\/\/www.scrapingbee.com\/favico.png\"\n }\n<\/script>\n<script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Organization\",\n \"description\": \"The easiest web scraping API on the web. 
We handle headless browsers and rotate proxies for you.\",\n \"address\": {\n \"@type\": \"PostalAddress\",\n \"addressLocality\": \"Paris, France\",\n \"postalCode\": \"F-75008\",\n \"streetAddress\": \"66 Avenue des Champs Elys\u00e9es, OCB Business Center 4\"\n },\n \"email\": \"hello(at)scrapingbee.com\",\n \"member\": [\n {\n \"@type\": \"Organization\"\n },\n {\n \"@type\": \"Organization\"\n }\n ],\n \"alumni\": [\n {\n \"@type\": \"Person\",\n \"name\": \"Pierre de Wulf\"\n },\n {\n \"@type\": \"Person\",\n \"name\": \"Kevin Sahin\"\n }\n ],\n \"name\": \"ScrapingBee\"\n }\n <\/script>\n<\/head>\n <body>\n <div id=\"wrapper\">\n <div id=\"content\">\n<div class=\"overflow-hidden\">\n<section class=\"relative bg-skew-yellow-b pt-[66px] md:pt-[156px] pb-[100px] md:pb-[220px] z-1\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px] mb-[50px] sm:mb-[0]\">\n <div class=\"max-w-[508px] text-[20px] md:text-[24px] leading-[1.50] pt-[35px]\">\n <h1 class=\"mb-[33px]\">The Web Scraping API for Busy Developers<\/h1>\n <p class=\"mb-[45px]\">Our Web Scraping API handles headless browsers and rotates proxies for you.<\/p>"},{"title":"99Acres Scraper API - Easy Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/99acres-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/99acres-scraper-api\/","description":{}},{"title":"Acceptable Use Policy","link":"https:\/\/www.scrapingbee.com\/acceptable-use-policy\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/acceptable-use-policy\/","description":"<p>The present Acceptable Use Policy (the \u201cAUP\u201d) covers the Services provided under legitimate and legal purposes only and any ongoing Agreement. Capitalized terms in this AUP have the same meaning as in the General Conditions in which they are defined.<\/p>\n<p>The AUP intends to protect Provider, Users, and more generally internet users from illegal, fraudulent, or abusive activities. As such, any access or use of the Services for illegal, fraudulent, or abusive activities is strictly prohibited. Any such suspected access or use will be investigated.<\/p>"},{"title":"Adidas Scraper API - Easy Sign Up + Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/adidas-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/adidas-api\/","description":{}},{"title":"Affiliate Program","link":"https:\/\/www.scrapingbee.com\/affiliates\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/affiliates\/","description":"<p>Earn commissions by promoting ScrapingBee<\/p>\n<p>Welcome to the ScrapingBee affiliate program. ScrapingBee is a web scraping API. We help developers and tech companies scrape the web without having to deal with rotating proxies and headless browsers.<\/p>\n<p>We are a Software as a Service company, meaning our customers pay us a monthly fee to access the service. 
The price depends on the volume, and we have three tiers: $29 \/ $99 \/ $249 per month.<\/p>"},{"title":"AI Web Scraping API","link":"https:\/\/www.scrapingbee.com\/features\/ai-web-scraping-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/ai-web-scraping-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Effortlessly extract data with our AI scraper API. Simplify data extraction, get clean JSON outputs and adapt to page changes. Try it free today!\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">AI Web Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Effortlessly extract data with our AI scraper API. Simplify data extraction, get clean JSON outputs and adapt to page changes. 
Try it free today!<\/p>"},{"title":"Airbnb Scraper API - Quick Signup + Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/airbnb-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/airbnb-api\/","description":{}},{"title":"Alibaba Scraper API Tool - Free Credits & Easy Setup","link":"https:\/\/www.scrapingbee.com\/scrapers\/alibaba-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/alibaba-api\/","description":{}},{"title":"AliExpress Scraper API with Credits - Easy & Simple Tool","link":"https:\/\/www.scrapingbee.com\/scrapers\/aliexpress-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/aliexpress-api\/","description":{}},{"title":"Allrecipes Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/allrecipes-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/allrecipes-api\/","description":{}},{"title":"Amazon API","link":"https:\/\/www.scrapingbee.com\/documentation\/amazon\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/amazon\/","description":"<p>Our Amazon API allows you to scrape Amazon search results and product details in realtime.<\/p>\n<p>We provide two endpoints:<\/p>\n<ul>\n<li><strong>Search endpoint<\/strong> (<code>\/api\/v1\/amazon\/search<\/code>) - Fetch Amazon search results<\/li>\n<li><strong>Product endpoint<\/strong> (<code>\/api\/v1\/amazon\/product<\/code>) - Fetch structured Amazon product details<\/li>\n<\/ul>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"amazon-product-api\">Amazon Product API<\/h2>\n<h3 id=\"quick-start\">Quick start<\/h3>\n<p>To scrape Amazon product details, you only need two things:<\/p>\n<ul>\n<li>your API key, available <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/manage\/api_key\" 
>here<\/a><\/li>\n<li>a product ASIN (<a href=\"#query\" >learn more about ASIN<\/a>)<\/li>\n<\/ul>\n<p>Then, simply do this.<\/p>\n\n\n\n\n\n \n\n\n\n \n\n \n \n \n\n<div class=\"p-1 rounded mb-6 bg-[#F4F0F0] border border-[#1A1414]\/10 text-[16px] leading-[1.50]\" data-tabs-id=\"dbac6bf1ced5ca4cf8c5d8d8c24ec44f\">\n\n <div class=\"md:pl-[30px] xl:pl-[32px] flex items-center justify-end gap-3 py-[10px] px-[17px]\" x-data=\"{ \n open: false, \n selectedLibrary: 'python-dbac6bf1ced5ca4cf8c5d8d8c24ec44f',\n libraries: [\n { name: 'Python', value: 'python-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-python.svg', width: 32, height: 32 },\n { name: 'CLI', value: 'cli-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-cli.svg', width: 32, height: 32, isNew: true },\n { name: 'cURL', value: 'curl-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-curl.svg', width: 48, height: 32 },\n { name: 'Go', value: 'go-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-go.svg', width: 32, height: 32 },\n { name: 'Java', value: 'java-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-java.svg', width: 32, height: 32 },\n { name: 'NodeJS', value: 'node-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-node.svg', width: 26, height: 26 },\n { name: 'PHP', value: 'php-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-php.svg', width: 32, height: 32 },\n { name: 'Ruby', value: 'ruby-dbac6bf1ced5ca4cf8c5d8d8c24ec44f', icon: '\/images\/icons\/icon-ruby.svg', width: 32, height: 32 }\n ],\n selectLibrary(value, isGlobal = false) {\n this.selectedLibrary = value;\n this.open = false;\n \/\/ Trigger tab switching for this specific instance\n \/\/ Use Alpine's $el to find the container\n const container = $el.closest('[data-tabs-id]');\n if (container) {\n container.querySelectorAll('.nice-tab-content').forEach(tab => {\n tab.classList.remove('active');\n });\n const selectedTab = 
container.querySelector('#' + value);\n if (selectedTab) {\n selectedTab.classList.add('active');\n }\n }\n \/\/ Individual snippet selectors should NOT trigger global changes\n \/\/ Only the global selector at the top should change all snippets\n },\n getSelectedLibrary() {\n return this.libraries.find(lib => lib.value === this.selectedLibrary) || this.libraries[0];\n },\n init() {\n \/\/ Listen for global language changes\n window.addEventListener('languageChanged', (e) => {\n const globalLang = e.detail.language;\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib) {\n this.selectLibrary(matchingLib.value, true);\n }\n });\n \/\/ Initialize from global state if available\n const globalLang = window.globalSelectedLanguage || 'python';\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib && matchingLib.value !== this.selectedLibrary) {\n this.selectLibrary(matchingLib.value, true);\n }\n }\n }\" x-on:click.away=\"open = false\" x-init=\"init()\">\n <div class=\"relative\">\n \n <button \n @click=\"open = !open\"\n type=\"button\"\n class=\"flex justify-between items-center px-2 py-1.5 bg-white rounded-md border border-[#1A1414]\/10 transition-colors hover:bg-gray-50 focus:outline-none min-w-[180px] shadow-sm\"\n >\n <div class=\"flex gap-2 items-center\">\n <img \n :src=\"getSelectedLibrary().icon\" \n :alt=\"getSelectedLibrary().name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 font-medium text-[14px]\">\n <span x-text=\"getSelectedLibrary().name\"><\/span>\n <span x-show=\"getSelectedLibrary().isNew\" class=\"new-badge ml-1\">New<\/span>\n <\/span>\n <\/div>\n <svg \n class=\"w-3.5 h-3.5 text-gray-400 transition-transform duration-200\" \n :class=\"{ 'rotate-180': open }\"\n fill=\"none\" \n stroke=\"currentColor\" \n viewBox=\"0 0 24 24\"\n >\n <path stroke-linecap=\"round\" 
stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M19 9l-7 7-7-7\"><\/path>\n <\/svg>\n <\/button>\n \n \n <div \n x-show=\"open\"\n x-transition:enter=\"transition ease-out duration-200\"\n x-transition:enter-start=\"opacity-0 translate-y-1\"\n x-transition:enter-end=\"opacity-100 translate-y-0\"\n x-transition:leave=\"transition ease-in duration-150\"\n x-transition:leave-start=\"opacity-100 translate-y-0\"\n x-transition:leave-end=\"opacity-0 translate-y-1\"\n class=\"overflow-auto absolute left-0 top-full z-50 mt-1 w-full max-h-[300px] bg-white rounded-md border border-[#1A1414]\/10 shadow-lg focus:outline-none\"\n style=\"display: none;\"\n >\n <ul class=\"py-1\">\n <template x-for=\"library in libraries\" :key=\"library.value\">\n <li>\n <button\n @click=\"selectLibrary(library.value)\"\n type=\"button\"\n class=\"flex gap-2 items-center px-2 py-1.5 w-full transition-colors hover:bg-gray-50\"\n :class=\"{ 'bg-yellow-50': selectedLibrary === library.value }\"\n >\n <img \n :src=\"library.icon\" \n :alt=\"library.name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 text-[14px]\" x-text=\"library.name\"><\/span>\n <span x-show=\"library.isNew\" class=\"new-badge ml-1\">New<\/span>\n <span x-show=\"selectedLibrary === library.value\" class=\"ml-auto text-yellow-400\">\n <svg class=\"w-3.5 h-3.5\" fill=\"currentColor\" viewBox=\"0 0 20 20\">\n <path fill-rule=\"evenodd\" d=\"M16.707 5.293a1 1 0 010 1.414l-8 8a1 1 0 01-1.414 0l-4-4a1 1 0 011.414-1.414L8 12.586l7.293-7.293a1 1 0 011.414 0z\" clip-rule=\"evenodd\"><\/path>\n <\/svg>\n <\/span>\n <\/button>\n <\/li>\n <\/template>\n <\/ul>\n <\/div>\n <\/div>\n <div class=\"flex items-center\">\n <span data-seed=\"dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"snippet-copy cursor-pointer flex items-center gap-1.5 px-2.5 py-1.5 text-sm text-black-100 rounded-md border border-[#1A1414]\/10 bg-white hover:bg-gray-50 transition-colors\" title=\"Copy to clipboard!\">\n 
<span class=\"icon-copy02 leading-none text-[14px]\"><\/span>\n <span class=\"text-[14px]\">Copy<\/span>\n <\/span>\n <\/div>\n <\/div>\n\n <div class=\"bg-[#30302F] rounded-md font-light !font-ibmplex\">\n <div id=\"curl-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\">curl \"https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product?api_key=YOUR-API-KEY&query=B0DPDRNSXV\"<\/code><\/pre>\n <\/div>\n <div id=\"python-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content active\">\n <pre><code class=\"language-python\"># Install the Python Requests library:\n# pip install requests\nimport requests\n\ndef send_request():\n response = requests.get(\n url='https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product',\n params={\n 'api_key': 'YOUR-API-KEY',\n 'query': 'B0DPDRNSXV',\n },\n\n )\n print('Response HTTP Status Code: ', response.status_code)\n print('Response HTTP Response Body: ', response.content)\nsend_request()\n<\/code><\/pre>\n <\/div>\n <div id=\"node-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-javascript\">\/\/ Install the Node Axios package\n\/\/ npm install axios\nconst axios = require('axios');\n\naxios.get('https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product', {\n params: {\n 'api_key': 'YOUR-API-KEY',\n 'url': 'YOUR-URL',\n 'query': 'B0DPDRNSXV',\n }\n}).then(function (response) {\n \/\/ handle success\n console.log(response);\n})\n<\/code><\/pre>\n <\/div>\n <div id=\"java-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-java\">import java.io.IOException;\nimport org.apache.http.client.fluent.*;\n\npublic class SendRequest\n{\n 
public static void main(String[] args) {\n sendRequest();\n }\n\n private static void sendRequest() {\n\n \/\/ Classic (GET )\n try {\n\n \/\/ Create request\n \n Content content = Request.Get(\"https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product?api_key=YOUR-API-KEY&url=YOUR-URL&query=B0DPDRNSXV\")\n\n \/\/ Fetch request and return content\n .execute().returnContent();\n\n \/\/ Print content\n System.out.println(content);\n }\n catch (IOException e) { System.out.println(e); }\n }\n}\n<\/code><\/pre>\n <\/div>\n <div id=\"ruby-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-ruby\">require 'net\/http'\nrequire 'net\/https'\n\n# Classic (GET )\ndef send_request \n uri = URI('https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product?api_key=YOUR-API-KEY&url=YOUR-URL&query=B0DPDRNSXV')\n\n # Create client\n http = Net::HTTP.new(uri.host, uri.port)\n http.use_ssl = true\n http.verify_mode = OpenSSL::SSL::VERIFY_PEER\n\n # Create Request\n req = Net::HTTP::Get.new(uri)\n\n # Fetch Request\n res = http.request(req)\n puts \"Response HTTP Status Code: #{ res.code }\"\n puts \"Response HTTP Response Body: #{ res.body }\"\nrescue StandardError => e\n puts \"HTTP Request failed (#{ e.message })\"\nend\n\nsend_request()<\/code><\/pre>\n <\/div>\n <div id=\"php-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-php\">&lt;?php\n\n\/\/ get cURL resource\n$ch = curl_init();\n\n\/\/ set url \ncurl_setopt($ch, CURLOPT_URL, 'https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product?api_key=YOUR-API-KEY&url=YOUR-URL&query=B0DPDRNSXV');\n\n\/\/ set method\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');\n\n\/\/ return the transfer as a string\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);\n\n\n\n\/\/ send the request and save response to $response\n$response = curl_exec($ch);\n\n\/\/ stop if fails\nif (!$response) {\n 
die('Error: \"' . curl_error($ch) . '\" - Code: ' . curl_errno($ch));\n}\n\necho 'HTTP Status Code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . PHP_EOL;\necho 'Response Body: ' . $response . PHP_EOL;\n\n\/\/ close curl resource to free up system resources\ncurl_close($ch);\n?&gt;<\/code><\/pre>\n <\/div>\n <div id=\"go-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"net\/http\"\n)\n\nfunc sendClassic() {\n\t\/\/ Create client\n\tclient := &http.Client{}\n\n\t\/\/ Create request \n\treq, err := http.NewRequest(\"GET\", \"https:\/\/app.scrapingbee.com\/api\/v1\/amazon\/product?api_key=YOUR-API-KEY&url=YOUR-URL&query=B0DPDRNSXV\", nil)\n\n\n\tparseFormErr := req.ParseForm()\n\tif parseFormErr != nil {\n\t\tfmt.Println(parseFormErr)\n\t}\n\n\t\/\/ Fetch Request\n\tresp, err := client.Do(req)\n\n\tif err != nil {\n\t\tfmt.Println(\"Failure : \", err)\n\t}\n\n\t\/\/ Read Response Body\n\trespBody, _ := ioutil.ReadAll(resp.Body)\n\n\t\/\/ Display Results\n\tfmt.Println(\"response Status : \", resp.Status)\n\tfmt.Println(\"response Headers : \", resp.Header)\n\tfmt.Println(\"response Body : \", string(respBody))\n}\n\nfunc main() {\n sendClassic()\n}<\/code><\/pre>\n <\/div>\n <div id=\"cli-dbac6bf1ced5ca4cf8c5d8d8c24ec44f\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\"># Install the ScrapingBee CLI:\n# pip install scrapingbee-cli\n\nscrapingbee amazon-product \"B0DPDRNSXV\"\n<\/code><\/pre>\n <\/div>\n <\/div>\n<\/div>\n\n<p>Here is a breakdown of all the parameters you can use with the Amazon Product API:<\/p>"},{"title":"Amazon ASIN Scraper API Tool - Free Credits, Simple Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-asin-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-asin-api\/","description":{}},{"title":"Amazon Keyword Scraper API - Free Credits & Easy Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-keyword-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-keyword-scraper-api\/","description":{}},{"title":"Amazon Review Scraper with Free Credits - Easy to Use Tool","link":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-review-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-review-api\/","description":{}},{"title":"Amazon Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-scraping-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/amazon-scraping-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape Amazon product data worldwide with our powerful web scraping API. 
Get prices, reviews, and rankings from any Amazon domain - all with a single API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope 
itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Amazon Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Amazon Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape Amazon product data worldwide with our powerful web scraping API. Get prices, reviews, and rankings from any Amazon domain - all with a single API call.<\/p>"},{"title":"Amazon Scraper API | Scraping Amazon Product Data is Simple","link":"https:\/\/www.scrapingbee.com\/features\/amazon\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/amazon\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Get structured JSON for Amazon products, reviews, pricing and more in a single API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold \">Amazon Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Get structured JSON for Amazon products, reviews, pricing and more in a single API call.<\/p>"},{"title":"Apify alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/apify-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/apify-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] 
md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Apify alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Apify. Looking for more flexibility, better pricing, and developer-friendly features?<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee's <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] 
h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No marketplace. No \"actors.\" Just clean, efficient scraping.<\/h3>\n <p>Apify's approach adds layers of abstraction and complexity. 
We believe in giving developers direct access to scraping functionality through a clear, <a href=\"https:\/\/www.scrapingbee.com\/blog\/six-characteristics-of-rest-api\/\">RESTful API<\/a>.<\/p>"},{"title":"Apple App Store Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/apple-app-store\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/apple-app-store\/","description":{}},{"title":"Apple Music Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/apple-music-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/apple-music-api\/","description":{}},{"title":"ASOS Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/asos-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/asos-api\/","description":{}},{"title":"Audible Scraper API - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/audible-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/audible-api\/","description":{}},{"title":"Autoscout24 Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/autoscout24-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/autoscout24-api\/","description":{}},{"title":"Autotrader Scraper API - Free Sign Up + Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/autotrader-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/autotrader-api\/","description":{}},{"title":"AWS Scraper API - Free Signup & Get Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/aws-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/aws-api\/","description":{}},{"title":"Baidu Search Scraper API - Get 
Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/baidu-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/baidu-search-api\/","description":{}},{"title":"Bandcamp Scraper API - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/bandcamp-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bandcamp-api\/","description":{}},{"title":"BBB Scraper API with Free Credits - Reliable Data Extraction","link":"https:\/\/www.scrapingbee.com\/scrapers\/better-business-bureau-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/better-business-bureau-api\/","description":{}},{"title":"Best Buy Web Scraper API - Simple Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/best-buy-web-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/best-buy-web-scraper-api\/","description":{}},{"title":"Binance Scraper API - Get Free Credits with Simple Integration","link":"https:\/\/www.scrapingbee.com\/scrapers\/binance-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/binance-api\/","description":{}},{"title":"Bing Ads Scraper API - Easy Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/bing-ads-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bing-ads-api\/","description":{}},{"title":"Bing Images Scraper API - Free Credits & Hassle-Free Setup","link":"https:\/\/www.scrapingbee.com\/scrapers\/bing-images-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bing-images-api\/","description":{}},{"title":"Bing Maps Scraper API - Signup for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/bing-maps-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bing-maps-api\/","description":{}},{"title":"Bing Search Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/bing-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bing-search-api\/","description":{}},{"title":"Bing Videos Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/bing-videos-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bing-videos-api\/","description":{}},{"title":"Bizbuysell Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/bizbuysell-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bizbuysell-api\/","description":{}},{"title":"Bloomberg Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/bloomberg-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/bloomberg-api\/","description":{}},{"title":"Boligsiden Scraper API - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/boligsiden-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/boligsiden-api\/","description":{}},{"title":"Booking.com Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/booking-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/booking-api\/","description":{}},{"title":"Bright Data alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/bright-data-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/bright-data-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div 
class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Bright Data alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Bright Data. Getting structured data from the web should be fast, reliable, and scalable.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 
mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Enterprise-grade features. Without enterprise-grade headaches.<\/h3>\n <p>Bright Data is powerful\u2014but complex, expensive, and overkill for most. ScrapingBee delivers what you need without the overhead.<\/p>"},{"title":"Browse AI alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/browse-ai-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/browse-ai-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Browse AI alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Browse AI. 
Powerful scraping doesn&#39;t have to come with hidden fees or steep learning curves.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block 
mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No robots. No waiting. Just raw scraping speed.<\/h3>\n <p>Browse AI works well for beginners, but if you're running real-time scraping at scale, you need something more. That's where an API-first approach wins.<\/p>"},{"title":"CapItol Trades Scraper API - Free Credits Upon Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/capitol-trades-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/capitol-trades-api\/","description":{}},{"title":"Car Rental Data Scraper API - Free Signup and Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/car-rental-data-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/car-rental-data-api\/","description":{}},{"title":"Carfax Scraper API - Simplify Data Extraction - Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/carfax-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/carfax-api\/","description":{}},{"title":"Cargurus Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/cargurus-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/cargurus-api\/","description":{}},{"title":"Cars.com Scraper API - Free Credits on 
Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/cars.com-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/cars.com-api\/","description":{}},{"title":"Cex Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/cex-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/cex-api\/","description":{}},{"title":"ChatGPT Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/chatgpt-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/chatgpt-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape ChatGPT responses automatically with our powerful ChatGPT scraping API. Scrape ChatGPT at scale and receive structured JSON output, allowing you to extract text for training your AI models.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span 
itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n ChatGPT Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">ChatGPT Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape ChatGPT responses automatically with our powerful ChatGPT scraping API. 
Scrape ChatGPT at scale and receive structured JSON output, allowing you to extract text for training your AI models.<\/p>"},{"title":"ChatGPT Scraper API | Scrape Any Website Using ChatGPT","link":"https:\/\/www.scrapingbee.com\/features\/chatgpt\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/chatgpt\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Generate AI-powered text responses with GPT-4o in a single API call, with optional web search capabilities.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold \">ChatGPT Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Generate AI-powered text responses with GPT-4o in a single API call, with optional web search capabilities.<\/p>"},{"title":"Chewy Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/chewy-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/chewy-api\/","description":{}},{"title":"Chrono24 Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/chrono24-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/chrono24-api\/","description":{}},{"title":"CLI","link":"https:\/\/www.scrapingbee.com\/documentation\/cli\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/cli\/","description":"<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"installation\">Installation<\/h2>\n<p><strong>Recommended<\/strong> \u2014 install with <a href=\"https:\/\/docs.astral.sh\/uv\/\" >uv<\/a> (no virtual environment needed):<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>curl -LsSf https:\/\/astral.sh\/uv\/install.sh | sh\n<\/span><\/span><span style=\"display:flex;\"><span>uv tool install scrapingbee-cli\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Alternative<\/strong> \u2014 install with pip in a virtual environment:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>pip install scrapingbee-cli\n<\/span><\/span><\/code><\/pre><\/div><p>Verify the installation:<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>scrapingbee --version\n<\/span><\/span><\/code><\/pre><\/div><\/div>\n<\/div>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"authentication\">Authentication<\/h2>\n<p>Save your API key so all commands can use it automatically.<\/p>\n<p><strong>Interactive prompt<\/strong> (recommended for first-time setup):<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" 
style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>scrapingbee auth\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Non-interactive<\/strong> (CI\/CD, scripts):<\/p>\n<div class=\"highlight\"><pre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"><code class=\"language-bash\" data-lang=\"bash\"><span style=\"display:flex;\"><span>scrapingbee auth --api-key YOUR_API_KEY\n<\/span><\/span><\/code><\/pre><\/div><p><strong>Environment variable<\/strong> (alternative \u2014 no file stored):<\/p>"},{"title":"Cloudflare Scraper API - Simple Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/cloudflare-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/cloudflare-scraper-api\/","description":{}},{"title":"Coingecko Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/coingecko-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/coingecko-api\/","description":{}},{"title":"Coles Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/coles-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/coles-api\/","description":{}},{"title":"Contact Scraper API - Free Credits, Simple & Reliable Tool","link":"https:\/\/www.scrapingbee.com\/scrapers\/contact-info-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/contact-info-api\/","description":{}},{"title":"Cookie Policy","link":"https:\/\/www.scrapingbee.com\/cookie-policy\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/cookie-policy\/","description":"<h2 id=\"1-information-and-transparency\">1. 
Information and transparency<\/h2>\n<p>VostokInc respects the privacy of its Users. This Cookies Policy applies to the Cookies used on the Website. It describes the information We collect automatically through the use of automated information gathering tools such as cookies and web beacons.<\/p>\n<p>Terms not otherwise defined herein shall have the meaning as set forth in the Privacy Policy.<\/p>\n<h2 id=\"2-what-is-a-cookie\">2. What is a cookie?<\/h2>\n<p><strong>\u201cCookies\u201d<\/strong> or <strong>\u201cTracers\u201d<\/strong> means tracers that can be deposited or read, for example, when consulting a website, a mobile application, or when setting up or using software. A cookie may include:<\/p>"},{"title":"Copart Scraper API - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/copart-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/copart-api\/","description":{}},{"title":"Costco Scraping API","link":"https:\/\/www.scrapingbee.com\/scrapers\/costco-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/costco-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape Costco product details and wholesale product data with our specialized scraping API. 
Get prices, specifications, and product features with unmatched reliability.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n 
<span class=\"font-medium\" itemprop=\"name\">\n Costco Scraping API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Costco Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape Costco product details and wholesale product data with our specialized scraping API. Get prices, specifications, and product features with unmatched reliability.<\/p>"},{"title":"Craigslist Scraper API - Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/craigslist-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/craigslist-api\/","description":{}},{"title":"Crawlbase alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/crawlbase-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/crawlbase-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Crawlbase alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Crawlbase. 
When it comes to scalable and robust data scraping, there are more efficient alternatives that won\u2019t break the bank.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong 
class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Scraping shouldn't be tied to complicated setups.<\/h3>\n <p>Crawlbase offers advanced features but at a high cost. Get access to all the scraping features you need, without the complexity.<\/p>"},{"title":"Crawlera alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/crawlera-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/crawlera-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Crawlera alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Crawlera. 
Avoid paying exorbitant rates for your web scraping.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n 
<\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Simple API, powerful features!<\/h3>\n <p>Compared to Crawlera's complex usage, ScrapingBee's easy-to-use API allows you to quickly get up and running!<\/p>"},{"title":"Crexi Scraper API - Simple Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/crexi-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/crexi-scraper-api\/","description":{}},{"title":"Crunchbase Scraper API Tool - Free Credits & Easy Setup","link":"https:\/\/www.scrapingbee.com\/scrapers\/crunchbase-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/crunchbase-api\/","description":{}},{"title":"Crypto News Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/crypto-news-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/crypto-news-api\/","description":{}},{"title":"Crypto.com Scraper API - Start Scraping Free with Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/crypto.com-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/crypto.com-api\/","description":{}},{"title":"Daraz Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/daraz-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/daraz-api\/","description":{}},{"title":"Data Analysis Immobiliare API - Simple Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/data-analysis-immobiliare\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/data-analysis-immobiliare\/","description":{}},{"title":"Data Extraction","link":"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/","description":"<blockquote>\n<p>\ud83d\udca1 <strong>Important<\/strong>:<br>This page explains how to use a specific feature of our main <a href=\"https:\/\/www.scrapingbee.com\/\" >web scraping API<\/a>!<br>If you are not yet familiar with the ScrapingBee web scraping API, you can read the documentation <a href=\"https:\/\/www.scrapingbee.com\/documentation\" >here<\/a>.<\/p>\n<\/blockquote>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<p>If you want to extract data from pages and don't want to parse the HTML on your side, you can add extraction rules to your API call.<\/p>\n<p>The simplest way to use extraction rules is to use the following format:<\/p>"},{"title":"Data extraction in Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-go\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without the need to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. 
We specify the label of elements we want to extract, their CSS Selectors and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are\u00a0<code>h1<\/code>\u00a0and\u00a0<code>span.text-[20px]<\/code>\u00a0respectively. To make sure that they\u2019re the correct ones, you can use the JavaScript function:\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tool\u2019s console.<\/p>"},{"title":"Data extraction in NodeJS","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-nodejs\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-nodejs\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without the need to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. We specify the label of elements we want to extract, their CSS Selectors and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are\u00a0<code>h1<\/code>\u00a0and\u00a0<code>span.text-[20px]<\/code>\u00a0respectively. 
To make sure that they\u2019re the correct ones, you can use the JavaScript function\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tools console.<\/p>"},{"title":"Data extraction in PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-php\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without needing to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. We specify the labels of the elements we want to extract and their CSS selectors, and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are\u00a0<code>h1<\/code>\u00a0and\u00a0<code>span.text-[20px]<\/code>\u00a0respectively. To make sure that they\u2019re the correct ones, you can use the JavaScript function\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tools console.<\/p>"},{"title":"Data extraction in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-python\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without needing to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. 
We specify the labels of the elements we want to extract and their CSS selectors, and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are <code>h1<\/code> and <code>span.text-[20px]<\/code> respectively. To make sure that they\u2019re the correct ones, you can use the JavaScript function\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tools console.<\/p>"},{"title":"Data extraction in Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/data-extraction-in-ruby\/","description":"<p>One of the most important features of ScrapingBee is the ability to extract exact data without needing to post-process the request\u2019s content using external libraries.<\/p>\n<p>We can use this feature by specifying an additional parameter with the name\u00a0<code>extract_rules<\/code>. We specify the labels of the elements we want to extract and their CSS selectors, and ScrapingBee will do the rest!<\/p>\n<p>Let\u2019s say that we want to extract the title &amp; the subtitle of the\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/data-extraction\/\" >data extraction documentation page<\/a>. Their CSS selectors are\u00a0<code>h1<\/code>\u00a0and\u00a0<code>span.text-[20px]<\/code>\u00a0respectively. 
To make sure that they\u2019re the correct ones, you can use the JavaScript function\u00a0<code>document.querySelector(&quot;CSS_SELECTOR&quot;)<\/code>\u00a0in that page\u2019s developer tools console.<\/p>"},{"title":"Data Processing Agreement","link":"https:\/\/www.scrapingbee.com\/data-processing-agreement\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/data-processing-agreement\/","description":"<p>ScrapingBee \/ VostokInc<\/p>\n<p>Last Revision date: March 23, 2026<\/p>\n<p><strong>Processor \/ Provider:<\/strong> VostokInc, a French simplified joint-stock company (SAS), registered under SIREN 882 964 115 RCS Paris, with its registered office at 66 Avenue des Champs-\u00c9lys\u00e9es, 75008 Paris, France, operating the ScrapingBee platform.<\/p>\n<p><strong>Controller \/ Customer:<\/strong> The entity or individual that has created a ScrapingBee account and accepted the Terms of Service, as identified in that account.<\/p>\n<p><strong>ACCEPTANCE:<\/strong> By activating a ScrapingBee account, placing an order, or using the Services, the Customer agrees to this Data Processing Agreement. If the Customer does not agree, it must not use the Services. This DPA forms part of, and is subject to, the ScrapingBee General Terms and Conditions of Service available at <a href=\"https:\/\/www.scrapingbee.com\/terms-and-conditions\/\" >https:\/\/www.scrapingbee.com\/terms-and-conditions\/<\/a>. In the event of any conflict between this DPA and the General Terms and Conditions of Service regarding the processing of Personal Data, this DPA shall prevail.<\/p>"},{"title":"Data Protection \/ GDPR Notice","link":"https:\/\/www.scrapingbee.com\/gdpr\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/gdpr\/","description":"<p>The General Data Protection Regulation (GDPR) is European Union legislation to strengthen and unify data protection laws for all individuals within the European Union. 
The regulation came into effect on May 25th, 2018.<\/p>\n<p>As a French business, founded and run by French citizens, but also as people who value privacy, we are fully committed to being compliant with GDPR and all data protection best practices.<\/p>\n<p>This page lays out our commitment to data protection and makes transparent what data we store about our users.<\/p>"},{"title":"Data Scraping API","link":"https:\/\/www.scrapingbee.com\/features\/data-extraction\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/data-extraction\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Extracting data has never been simpler with CSS or XPath selectors and ScrapingBee.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">Data Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Extracting data has never been simpler with CSS or XPath selectors and ScrapingBee.<\/p>"},{"title":"Decodo alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/decodo-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/decodo-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Decodo 
alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Decodo. Looking for a better balance of pricing, speed, and features? It\u2019s time to explore the alternatives to Decodo.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" 
src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No unnecessary steps. Just clean scraping.<\/h3>\n <p>Decodo offers scraping services but complicates things with additional features. We make it easy\u2014<a href=\"https:\/\/www.scrapingbee.com\/blog\/what-is-web-scraping-and-how-to-scrape-any-website-tutorial\/\">scrape the web<\/a> without all the extra fluff.<\/p>"},{"title":"Depop Scraper API - Easy Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/depop-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/depop-scraper-api\/","description":{}},{"title":"Dexscreener Scraper API - Start with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/dexscreener-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/dexscreener-api\/","description":{}},{"title":"Diffbot alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/diffbot-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/diffbot-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n 
<div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Diffbot alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Diffbot. Efficient and accurate data extraction doesn\u2019t have to come with a hefty price tag or a steep learning curve.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden 
bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just structured data. Just straightforward scraping.<\/h3>\n <p>Diffbot is great for extracting structured data but can get expensive quickly. 
Why pay more for specific use cases when you can scrape the entire web with better pricing?<\/p>"},{"title":"Doctolib Scraper API - Sign Up for Free Credits Now","link":"https:\/\/www.scrapingbee.com\/scrapers\/doctolib-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/doctolib-api\/","description":{}},{"title":"Doctoralia Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/doctoralia-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/doctoralia-api\/","description":{}},{"title":"DuckDuckGo News Scraper API - Effortless Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/duckduckgo-news-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/duckduckgo-news-api\/","description":{}},{"title":"DuckDuckGo Search Scraper API - Simple Signup Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/duckduckgo-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/duckduckgo-search-api\/","description":{}},{"title":"eBay Related Searches Scraper API - Get Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/ebay-related-searches-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/ebay-related-searches-api\/","description":{}},{"title":"eBay Scraper Tool - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/ebay-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/ebay-scraper\/","description":{}},{"title":"Ecommerce Scraping Tool - Free Credits & Easy API Setup","link":"https:\/\/www.scrapingbee.com\/scrapers\/ecommerce-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/ecommerce-api\/","description":{}},{"title":"eToro Scraper API - Get 
Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/etoro-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/etoro-api\/","description":{}},{"title":"Etsy Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/etsy-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/etsy-api\/","description":{}},{"title":"Expedia Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/expedia-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/expedia-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape global hotel data, pricing and details with our scraping API. Get rates, reviews, and property information from any destination with perfect accuracy.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta 
itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Expedia Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Expedia Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape global hotel data, pricing and details with our scraping API. 
Get rates, reviews, and property information from any destination with perfect accuracy.<\/p>"},{"title":"Expireddomains Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/expireddomains-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/expireddomains-scraper-api\/","description":{}},{"title":"Fanduel Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/fanduel-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/fanduel-api\/","description":{}},{"title":"Fare Scraper API - Get Free Credits Upon Sign-Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/fare-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/fare-api\/","description":{}},{"title":"Fast Search API","link":"https:\/\/www.scrapingbee.com\/documentation\/fast-search\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/fast-search\/","description":"<div class=\"w-full param_table\">\n <div>\n <div class=\"overscroll-x-auto pb-[30px] md:pb-[0] max-w-full overflow-x-auto\">\n <div class=\"border rounded-md min-w-[500px] md:min-w-[0] overflow-hidden border-[#C8C4C4] shadow-sm bg-white\">\n <div class=\"flex border-b border-[#C8C4C4] bg-[#F4F0F0]\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">name<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">type<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 mx-[2px] text-[12px]\">default<\/code>)<\/span>\n 
<\/div>\n <div class=\"w-5\/12 px-[16px] py-[8px] relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"font-bold text-black-200\">Description<\/span>\n <\/div>\n <\/div>\n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">api_key<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Your api key<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#api_key\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n \n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">search<\/span>\n <span 
class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The text you would put in the search bar<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#search\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">country_code<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;us&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: 
#C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Country code used to localize results (ISO 3166-1 alpha-2)<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#country_code\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">language<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;en&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Language of the search results (e.g. 
en, fr)<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#language\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">page<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">integer<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">1<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The page number you want to extract results from<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#page\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" 
stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n <\/div>\n <\/div>\n <\/div>\n<\/div>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"getting-started\">Getting Started<\/h2>\n<p>Our Fast Search API delivers search results in under a second, with a simple, lightweight request.<\/p>"},{"title":"Finviz Scraper API - Get Started with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/finviz-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/finviz-api\/","description":{}},{"title":"Fiverr Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/fiverr-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/fiverr-scraper-api\/","description":{}},{"title":"Flight Scraper API Tool - Easy Setup with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/flight-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/flight-api\/","description":{}},{"title":"Flipkart Scraper API - Free Signup Credits Offer","link":"https:\/\/www.scrapingbee.com\/scrapers\/flipkart-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/flipkart-api\/","description":{}},{"title":"Food Data Scraper API - Free Signup and Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/food-data-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/food-data-api\/","description":{}},{"title":"Football News API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/football-news-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/football-news-api\/","description":{}},{"title":"Fox News Scraper API - Simple Use & Free Signup 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/fox-news-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/fox-news-scraper-api\/","description":{}},{"title":"Free Indeed Scraper API with Credits - Easy Data Extraction","link":"https:\/\/www.scrapingbee.com\/scrapers\/indeed-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/indeed-api\/","description":{}},{"title":"Freelancer Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/freelancer-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/freelancer-scraper-api\/","description":{}},{"title":"Frequently Asked Questions - ScrapingBee","link":"https:\/\/www.scrapingbee.com\/faq\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/faq\/","description":{}},{"title":"Funda Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/funda-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/funda-scraper-api\/","description":{}},{"title":"Funda Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/funda-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/funda-api\/","description":{}},{"title":"G2 Scraper API Tool - Starting is Simple with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/g2-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/g2-api\/","description":{}},{"title":"Gamestop Scraper API - Simple Signup Credits Free","link":"https:\/\/www.scrapingbee.com\/scrapers\/gamestop-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/gamestop-api\/","description":{}},{"title":"Gasbuddy 
Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/gasbuddy-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/gasbuddy-scraper-api\/","description":{}},{"title":"GENERAL TERMS AND CONDITIONS OF SERVICE","link":"https:\/\/www.scrapingbee.com\/terms-and-conditions\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/terms-and-conditions\/","description":"<h2 id=\"1-preamble\">1. Preamble<\/h2>\n<p><strong>VostokInc<\/strong>, a joint-stock company (\u201csoci\u00e9t\u00e9 par actions simplifi\u00e9e\u201d) with registered address located at 66 Avenue des Champs \u00c9lys\u00e9es \u2013 75008 Paris and registered before the Company House of Paris under number 843 352 683 (<strong>&quot;VostokInc&quot;<\/strong> or the <strong>&quot;Provider&quot;<\/strong>) has developed an online solution available at <a href=\"https:\/\/dashboard.scrapingbee.com\" >https:\/\/dashboard.scrapingbee.com<\/a> and\/or at any other address, application, or location designated by VostokInc (the <strong>&quot;API&quot;<\/strong> or <strong>\u201cScrapingBee\u201d<\/strong>) providing web scraping services (the &quot;Services&quot;).<\/p>\n<p>The present terms and conditions of service (the <strong>&quot;General Conditions&quot;<\/strong>) govern the contractual relationship between VostokInc and any natural person aged at least 18 years old with full and complete legal capacity acting in the scope of their professional activity or being the legal representative of a legal entity empowered to enter into legally binding commitments which access the Services only for their professional activities whatever the conditions from whichever terminal, nature, and extent of the subscription to the Services (hereinafter the <strong>\u201cUser\u201d<\/strong>). 
User acknowledges and accepts that Services are dedicated to professional activities and as such consumer law is not intended to be applicable. The General Conditions, the Data Processing Agreement, the AUP, and their exhibits form altogether the <strong>\u201cAgreement\u201d<\/strong>.<\/p>"},{"title":"Getty Images Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/getty-images-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/getty-images-scraper-api\/","description":{}},{"title":"Getyourguide Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/getyourguide-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/getyourguide-scraper-api\/","description":{}},{"title":"GitHub Scraper API - Easy Setup & Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/github-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/github-api\/","description":{}},{"title":"Glassdoor Jobs Scraper API - Get Free Credits Upon Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/glassdoor-jobs-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/glassdoor-jobs-api\/","description":{}},{"title":"Glovo Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/glovo-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/glovo-api\/","description":{}},{"title":"Gofundme Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/gofundme-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/gofundme-api\/","description":{}},{"title":"Goodreads Scraper API - Get Free Credits 
Now","link":"https:\/\/www.scrapingbee.com\/scrapers\/goodreads-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/goodreads-api\/","description":{}},{"title":"Google Ads Scraper API - Signup for Credits Free","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-ads-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-ads-api\/","description":{}},{"title":"Google AI Overview Scraper API - Free Signup Credits Offer","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-ai-overview-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-ai-overview-api\/","description":{}},{"title":"Google Alerts Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-alerts-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-alerts-api\/","description":{}},{"title":"Google API","link":"https:\/\/www.scrapingbee.com\/documentation\/google-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/google-api\/","description":"<div class=\"w-full param_table\">\n <div>\n <div class=\"overscroll-x-auto pb-[30px] md:pb-[0] max-w-full overflow-x-auto\">\n <div class=\"border rounded-md min-w-[500px] md:min-w-[0] overflow-hidden border-[#C8C4C4] shadow-sm bg-white\">\n <div class=\"flex border-b border-[#C8C4C4] bg-[#F4F0F0]\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">name<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">type<\/code>]<\/span>\n <span class=\"text-black-200\">(<code 
class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 mx-[2px] text-[12px]\">default<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 px-[16px] py-[8px] relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"font-bold text-black-200\">Description<\/span>\n <\/div>\n <\/div>\n \n \n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">api_key<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Your API key<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#api_key\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center 
border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">search<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The text you would put in the Google search bar<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#search\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">add_html<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo 
mx-[2px]\">false<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Add the full HTML of the page to the results<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#add_html\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">country_code<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;us&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Country code from which you would like the request to come<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#country_code\" class=\"bg-transparent border 
border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">device<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">&#34;desktop&#34; | &#34;mobile&#34;<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;desktop&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Control the device the request will be sent from<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#device\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" 
style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">extra_params<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Extra Google URL parameters<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#extra_params\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">language<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] 
rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;en&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Language the search results will be displayed in<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#language\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">light_request<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">true<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Light requests are faster and cheaper (10 credits instead of 15), but some content may be missing.<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n 
<span class=\"text-black-200\"><a href=\"#light_request\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">nfpr<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">false<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Exclude results from auto-corrected queries that were spelt wrong.<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#nfpr\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n 
\n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">page<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">integer<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">1<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The page number you want to extract results from<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#page\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">search_type<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">&#34;classic&#34; | 
&#34;news&#34; | &#34;maps&#34; | &#34;images&#34; | &#34;lens&#34; | &#34;shopping&#34; | &#34;ai_mode&#34;<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;classic&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The type of search you want to perform<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#search_type\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n <\/div>\n <\/div>\n <\/div>\n<\/div>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"getting-started\">Getting Started<\/h2>\n<p>Our Google Search API allows you to scrape search results pages in realtime.<\/p>"},{"title":"Google API","link":"https:\/\/www.scrapingbee.com\/documentation\/google\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/google\/","description":"<div class=\"w-full param_table\">\n <div>\n <div class=\"overscroll-x-auto pb-[30px] md:pb-[0] max-w-full overflow-x-auto\">\n <div class=\"border rounded-md min-w-[500px] md:min-w-[0] overflow-hidden border-[#C8C4C4] shadow-sm bg-white\">\n <div class=\"flex border-b border-[#C8C4C4] bg-[#F4F0F0]\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span 
class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">name<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">type<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 mx-[2px] text-[12px]\">default<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 px-[16px] py-[8px] relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"font-bold text-black-200\">Description<\/span>\n <\/div>\n <\/div>\n \n \n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">api_key<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Your API key<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#api_key\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" 
stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n <div class=\"flex border-b\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">search<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo text-[12px]\">string<\/code>]<\/span>\n <span class=\"text-black-200\"><code class=\"bg-[#EAEEF6] rounded-[4px] text-[#393C40] inline-block px-2 py-0.5 font-menlo mx-[2px] text-[12px]\">required<\/code><\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The text you would put in the Google search bar<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap border-[#C8C4C4]\">\n \n <span class=\"text-black-200\"><a href=\"#search\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 
mr-[9px] font-menlo\">add_html<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">false<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Add the full HTML of the page to the results<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#add_html\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">country_code<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;us&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" 
style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Country code from which you would like the request to come<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#country_code\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">device<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">&#34;desktop&#34; | &#34;mobile&#34;<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;desktop&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Control the device the request will be sent from<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#device\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn 
more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">extra_params<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Extra Google URL parameters<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#extra_params\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span 
class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">language<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">string<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;en&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Language the search results will be displayed in<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#language\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">light_request<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">true<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 
py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Light requests are faster and cheaper (10 credits instead of 15), but some content may be missing.<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#light_request\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">nfpr<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">boolean<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">false<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">Exclude results from auto-corrected queries that were spelt wrong.<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#nfpr\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md 
!no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">page<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">integer<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">1<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The page number you want to extract results from<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#page\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n \n \n \n <div class=\"flex border-b last-of-type:border-0\" style=\"border-color: #C8C4C4;\">\n <div class=\"text-[12px] w-5\/12 border-r py-[8px] 
px-[20px] flex flex-wrap items-center border-[#C8C4C4]\">\n <span class=\"bg-[#D9D6CC] text-[#0F0F0E] rounded-[4px] inline-block px-2 py-0.5 mr-[9px] font-menlo\">search_type<\/span>\n <span class=\"text-black-200 mr-[5px] \">[<code class=\"bg-[#DAFBD7] rounded-[4px] text-[#188310] inline-block px-2 py-0.5 ml-[3px] mr-[1px] font-menlo\">&#34;classic&#34; | &#34;news&#34; | &#34;maps&#34; | &#34;images&#34; | &#34;lens&#34; | &#34;shopping&#34; | &#34;ai_mode&#34;<\/code>]<\/span>\n <span class=\"text-black-200\">(<code class=\"bg-[#FFE3F3] rounded-[4px] text-[#DB3797] inline-block px-2 py-0.5 font-menlo mx-[2px]\">&#34;classic&#34;<\/code>)<\/span>\n <\/div>\n <div class=\"w-5\/12 py-[8px] px-[16px] flex items-center flex-wrap relative\" style=\"border-color: #C8C4C4;\">\n <span class=\"text-black-200 leading-[1.50]\">The type of search you want to perform<\/span>\n <\/div>\n <div class=\"w-2\/12 py-[8px] pl-[16px] pr-[16px] flex items-center justify-end flex-wrap\">\n \n <span class=\"text-black-200\"><a href=\"#search_type\" class=\"bg-transparent border border-black-100 py-[5px] px-[10px] rounded-md !no-underline text-[13px] inline-flex items-center gap-[6px]\">Learn more<svg width=\"12\" height=\"12\" viewBox=\"0 0 12 12\" fill=\"none\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\"><path d=\"M4.5 9L7.5 6L4.5 3\" stroke=\"currentColor\" stroke-width=\"1.5\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/><\/svg><\/a><\/span>\n \n <\/div>\n <\/div>\n \n \n <\/div>\n <\/div>\n <\/div>\n<\/div>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"getting-started\">Getting Started<\/h2>\n<p>Our Google Search API allows you to scrape search results pages in realtime.<\/p>"},{"title":"Google Autocomplete Scraper API - Free Signup & Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-autocomplete-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-autocomplete-api\/","description":{}},{"title":"Google Books Scraper API - Simple Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-books-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-books-scraper-api\/","description":{}},{"title":"Google Events Scraper API - Free Signup & Simplified Process","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-events-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-events-api\/","description":{}},{"title":"Google Finance Scraper - Free Signup Credits, Simple Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-finance-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-finance-scraper\/","description":{}},{"title":"Google Flights Scraper - Free Signup Credits, Simple Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-flights-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-flights-scraper\/","description":{}},{"title":"Google Hotels Scraper - Simple Tool, Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-hotel-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-hotel-scraper\/","description":{}},{"title":"Google Image Scraper - Free Credits, Simple Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-image-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-image-scraper\/","description":{}},{"title":"Google Jobs Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-jobs-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-jobs-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape Google Jobs listings from any location with our powerful and real-time API. Get detailed job listings data with a near 100% success rate. Start with 1000 free API credits.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n 
<\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Google Jobs Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Google Jobs Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape Google Jobs listings from any location with our powerful and real-time API. Get detailed job listings data with a near 100% success rate. Start with 1000 free API credits.<\/p>"},{"title":"Google Lens Scraper API - Streamlined Access Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-lens-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-lens-api\/","description":{}},{"title":"Google My Business Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-my-business-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-my-business-scraper-api\/","description":{}},{"title":"Google News Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-news-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-news-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Get to the latest headlines effortlessly with our powerful and reliable 
Google News Scraper API. Monitor stories, sources, and authors from any country with unmatched precision and reliability.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope 
itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Google News Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Google News Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Get to the latest headlines effortlessly with our powerful and reliable Google News Scraper API. Monitor stories, sources, and authors from any country with unmatched precision and reliability.<\/p>"},{"title":"Google Patents Scraper API - Free Signup & Simplified Access","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-patents-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-patents-api\/","description":{}},{"title":"Google Play Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-play-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-play-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape Google Play Store app data at scale with our reliable web scraping API. 
Get ratings, reviews, and download stats with a single API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <span 
class=\"font-medium\" itemprop=\"name\">\n Google Play Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Google Play Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape Google Play Store app data at scale with our reliable web scraping API. Get ratings, reviews, and download stats with a single API call.<\/p>"},{"title":"Google Popular Times API - Simple Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-popular-times-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-popular-times-scraper-api\/","description":{}},{"title":"Google Related Questions Scraper API - Free Credits on SignUp","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-related-questions-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-related-questions-api\/","description":{}},{"title":"Google Related Searches Scraper API - Easy to Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-related-searches-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-related-searches-api\/","description":{}},{"title":"Google Reverse Image Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-reverse-image-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-reverse-image-api\/","description":{}},{"title":"Google Reviews Results Scraper API - Free Signup & Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-reviews-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-reviews-results-api\/","description":{}},{"title":"Google Scholar Scraper - Free Signup Credits & Easy 
Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-scholar-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-scholar-scraper\/","description":{}},{"title":"Google Search API - Get Free Credits for SERP Scraping","link":"https:\/\/www.scrapingbee.com\/features\/google\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/google\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Get Structured JSON for search, news, maps, ads and more in a single Google SERP scraping API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold \">Google Search API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Get Structured JSON for search, news, maps, ads and more in a single Google SERP scraping API call.<\/p>"},{"title":"Google Search Results Scraper API | ScrapingBee SERP Scraping API","link":"https:\/\/www.scrapingbee.com\/features\/fast-search\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/fast-search\/","description":"<script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Get 
real-time SERP results in under 1 second for your AI agents, dashboards, and production workflows.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 !pt-[100px] sm:!pt-[100px] md:!pt-[120px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <p class=\"inline-block mb-[10px] text-[13px] font-bold tracking-[0.1em] uppercase text-black-100\">FAST SEARCH API<\/p>\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold !leading-[1.1]\">Top SERP results in under 1 second for AI &amp; analytics<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Get real-time SERP results in under 1 second for your AI agents, dashboards, and production workflows.<\/p>"},{"title":"Google Showtimes Result Scraper API - Get Started Today","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-showtimes-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-showtimes-results-api\/","description":{}},{"title":"Google Spell Check Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-spell-check-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-spell-check-api\/","description":{}},{"title":"Google Sports Results Scraper API - Free Credits on SignUp","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-sports-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-sports-results-api\/","description":{}},{"title":"Google Trends Scraper - Free Credits, Simple 
Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-trends-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-trends-scraper\/","description":{}},{"title":"GPT API","link":"https:\/\/www.scrapingbee.com\/documentation\/chatgpt\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/chatgpt\/","description":"<p>Our Chat GPT API allows you to send prompts to a GPT model and receive AI-generated responses in realtime.<\/p>\n<p>We provide one endpoint:<\/p>\n<ul>\n<li><strong>GPT endpoint<\/strong> (<code>\/api\/v1\/chatgpt<\/code>) - Send prompts to GPT and receive AI-generated responses<\/li>\n<\/ul>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"quick-start\">Quick start<\/h2>\n<p>To use the GPT API, you only need two things:<\/p>\n<ul>\n<li>your API key, available <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/manage\/api_key\" >here<\/a><\/li>\n<li>a prompt to send to the GPT model (<a href=\"#prompt\" >learn more about prompts<\/a>)<\/li>\n<\/ul>\n<p>Then, simply do this.<\/p>\n\n\n\n\n\n \n\n\n\n \n\n \n \n \n\n<div class=\"p-1 rounded mb-6 bg-[#F4F0F0] border border-[#1A1414]\/10 text-[16px] leading-[1.50]\" data-tabs-id=\"4ca8484abc6dc4ccb5f1d28dfef5cdce\">\n\n <div class=\"md:pl-[30px] xl:pl-[32px] flex items-center justify-end gap-3 py-[10px] px-[17px]\" x-data=\"{ \n open: false, \n selectedLibrary: 'python-4ca8484abc6dc4ccb5f1d28dfef5cdce',\n libraries: [\n { name: 'Python', value: 'python-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-python.svg', width: 32, height: 32 },\n { name: 'CLI', value: 'cli-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-cli.svg', width: 32, height: 32, isNew: true },\n { name: 'cURL', value: 'curl-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-curl.svg', width: 48, height: 32 },\n { name: 'Go', value: 'go-4ca8484abc6dc4ccb5f1d28dfef5cdce', 
icon: '\/images\/icons\/icon-go.svg', width: 32, height: 32 },\n { name: 'Java', value: 'java-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-java.svg', width: 32, height: 32 },\n { name: 'NodeJS', value: 'node-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-node.svg', width: 26, height: 26 },\n { name: 'PHP', value: 'php-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-php.svg', width: 32, height: 32 },\n { name: 'Ruby', value: 'ruby-4ca8484abc6dc4ccb5f1d28dfef5cdce', icon: '\/images\/icons\/icon-ruby.svg', width: 32, height: 32 }\n ],\n selectLibrary(value, isGlobal = false) {\n this.selectedLibrary = value;\n this.open = false;\n \/\/ Trigger tab switching for this specific instance\n \/\/ Use Alpine's $el to find the container\n const container = $el.closest('[data-tabs-id]');\n if (container) {\n container.querySelectorAll('.nice-tab-content').forEach(tab => {\n tab.classList.remove('active');\n });\n const selectedTab = container.querySelector('#' + value);\n if (selectedTab) {\n selectedTab.classList.add('active');\n }\n }\n \/\/ Individual snippet selectors should NOT trigger global changes\n \/\/ Only the global selector at the top should change all snippets\n },\n getSelectedLibrary() {\n return this.libraries.find(lib => lib.value === this.selectedLibrary) || this.libraries[0];\n },\n init() {\n \/\/ Listen for global language changes\n window.addEventListener('languageChanged', (e) => {\n const globalLang = e.detail.language;\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib) {\n this.selectLibrary(matchingLib.value, true);\n }\n });\n \/\/ Initialize from global state if available\n const globalLang = window.globalSelectedLanguage || 'python';\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib && matchingLib.value !== this.selectedLibrary) {\n this.selectLibrary(matchingLib.value, true);\n }\n 
}\n }\" x-on:click.away=\"open = false\" x-init=\"init()\">\n <div class=\"relative\">\n \n <button \n @click=\"open = !open\"\n type=\"button\"\n class=\"flex justify-between items-center px-2 py-1.5 bg-white rounded-md border border-[#1A1414]\/10 transition-colors hover:bg-gray-50 focus:outline-none min-w-[180px] shadow-sm\"\n >\n <div class=\"flex gap-2 items-center\">\n <img \n :src=\"getSelectedLibrary().icon\" \n :alt=\"getSelectedLibrary().name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 font-medium text-[14px]\">\n <span x-text=\"getSelectedLibrary().name\"><\/span>\n <span x-show=\"getSelectedLibrary().isNew\" class=\"new-badge ml-1\">New<\/span>\n <\/span>\n <\/div>\n <svg \n class=\"w-3.5 h-3.5 text-gray-400 transition-transform duration-200\" \n :class=\"{ 'rotate-180': open }\"\n fill=\"none\" \n stroke=\"currentColor\" \n viewBox=\"0 0 24 24\"\n >\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M19 9l-7 7-7-7\"><\/path>\n <\/svg>\n <\/button>\n \n \n <div \n x-show=\"open\"\n x-transition:enter=\"transition ease-out duration-200\"\n x-transition:enter-start=\"opacity-0 translate-y-1\"\n x-transition:enter-end=\"opacity-100 translate-y-0\"\n x-transition:leave=\"transition ease-in duration-150\"\n x-transition:leave-start=\"opacity-100 translate-y-0\"\n x-transition:leave-end=\"opacity-0 translate-y-1\"\n class=\"overflow-auto absolute left-0 top-full z-50 mt-1 w-full max-h-[300px] bg-white rounded-md border border-[#1A1414]\/10 shadow-lg focus:outline-none\"\n style=\"display: none;\"\n >\n <ul class=\"py-1\">\n <template x-for=\"library in libraries\" :key=\"library.value\">\n <li>\n <button\n @click=\"selectLibrary(library.value)\"\n type=\"button\"\n class=\"flex gap-2 items-center px-2 py-1.5 w-full transition-colors hover:bg-gray-50\"\n :class=\"{ 'bg-yellow-50': selectedLibrary === library.value }\"\n >\n <img \n :src=\"library.icon\" \n 
:alt=\"library.name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 text-[14px]\" x-text=\"library.name\"><\/span>\n <span x-show=\"library.isNew\" class=\"new-badge ml-1\">New<\/span>\n <span x-show=\"selectedLibrary === library.value\" class=\"ml-auto text-yellow-400\">\n <svg class=\"w-3.5 h-3.5\" fill=\"currentColor\" viewBox=\"0 0 20 20\">\n <path fill-rule=\"evenodd\" d=\"M16.707 5.293a1 1 0 010 1.414l-8 8a1 1 0 01-1.414 0l-4-4a1 1 0 011.414-1.414L8 12.586l7.293-7.293a1 1 0 011.414 0z\" clip-rule=\"evenodd\"><\/path>\n <\/svg>\n <\/span>\n <\/button>\n <\/li>\n <\/template>\n <\/ul>\n <\/div>\n <\/div>\n <div class=\"flex items-center\">\n <span data-seed=\"4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"snippet-copy cursor-pointer flex items-center gap-1.5 px-2.5 py-1.5 text-sm text-black-100 rounded-md border border-[#1A1414]\/10 bg-white hover:bg-gray-50 transition-colors\" title=\"Copy to clipboard!\">\n <span class=\"icon-copy02 leading-none text-[14px]\"><\/span>\n <span class=\"text-[14px]\">Copy<\/span>\n <\/span>\n <\/div>\n <\/div>\n\n <div class=\"bg-[#30302F] rounded-md font-light !font-ibmplex\">\n <div id=\"curl-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\">curl \"https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt?api_key=YOUR-API-KEY&prompt=Explain&#43;the&#43;benefits&#43;of&#43;renewable&#43;energy&#43;in&#43;100&#43;words\"<\/code><\/pre>\n <\/div>\n <div id=\"python-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content active\">\n <pre><code class=\"language-python\"># Install the Python Requests library:\n# pip install requests\nimport requests\n\ndef send_request():\n response = requests.get(\n url='https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt',\n params={\n 'api_key': 'YOUR-API-KEY',\n 'prompt': 'Explain the 
benefits of renewable energy in 100 words',\n },\n\n )\n print('Response HTTP Status Code: ', response.status_code)\n print('Response HTTP Response Body: ', response.content)\nsend_request()\n<\/code><\/pre>\n <\/div>\n <div id=\"node-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-javascript\">\/\/ Install the Node Axios package\n\/\/ npm install axios\nconst axios = require('axios');\n\naxios.get('https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt', {\n params: {\n 'api_key': 'YOUR-API-KEY',\n 'url': 'YOUR-URL',\n 'prompt': 'Explain the benefits of renewable energy in 100 words',\n }\n}).then(function (response) {\n \/\/ handle success\n console.log(response);\n})\n<\/code><\/pre>\n <\/div>\n <div id=\"java-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-java\">import java.io.IOException;\nimport org.apache.http.client.fluent.*;\n\npublic class SendRequest\n{\n public static void main(String[] args) {\n sendRequest();\n }\n\n private static void sendRequest() {\n\n \/\/ Classic (GET )\n try {\n\n \/\/ Create request\n \n Content content = Request.Get(\"https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt?api_key=YOUR-API-KEY&url=YOUR-URL&prompt=Explain&#43;the&#43;benefits&#43;of&#43;renewable&#43;energy&#43;in&#43;100&#43;words\")\n\n \/\/ Fetch request and return content\n .execute().returnContent();\n\n \/\/ Print content\n System.out.println(content);\n }\n catch (IOException e) { System.out.println(e); }\n }\n}\n<\/code><\/pre>\n <\/div>\n <div id=\"ruby-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-ruby\">require 'net\/http'\nrequire 'net\/https'\n\n# Classic (GET )\ndef send_request \n uri = 
URI('https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt?api_key=YOUR-API-KEY&url=YOUR-URL&prompt=Explain&#43;the&#43;benefits&#43;of&#43;renewable&#43;energy&#43;in&#43;100&#43;words')\n\n # Create client\n http = Net::HTTP.new(uri.host, uri.port)\n http.use_ssl = true\n http.verify_mode = OpenSSL::SSL::VERIFY_PEER\n\n # Create Request\n req = Net::HTTP::Get.new(uri)\n\n # Fetch Request\n res = http.request(req)\n puts \"Response HTTP Status Code: #{ res.code }\"\n puts \"Response HTTP Response Body: #{ res.body }\"\nrescue StandardError => e\n puts \"HTTP Request failed (#{ e.message })\"\nend\n\nsend_request()<\/code><\/pre>\n <\/div>\n <div id=\"php-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-php\">&lt;?php\n\n\/\/ get cURL resource\n$ch = curl_init();\n\n\/\/ set url \ncurl_setopt($ch, CURLOPT_URL, 'https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt?api_key=YOUR-API-KEY&url=YOUR-URL&prompt=Explain&#43;the&#43;benefits&#43;of&#43;renewable&#43;energy&#43;in&#43;100&#43;words');\n\n\/\/ set method\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');\n\n\/\/ return the transfer as a string\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);\n\n\n\n\/\/ send the request and save response to $response\n$response = curl_exec($ch);\n\n\/\/ stop if fails\nif (!$response) {\n die('Error: \"' . curl_error($ch) . '\" - Code: ' . curl_errno($ch));\n}\n\necho 'HTTP Status Code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . PHP_EOL;\necho 'Response Body: ' . $response . 
PHP_EOL;\n\n\/\/ close curl resource to free up system resources\ncurl_close($ch);\n?&gt;<\/code><\/pre>\n <\/div>\n <div id=\"go-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"io\/ioutil\"\n\t\"net\/http\"\n)\n\nfunc sendClassic() {\n\t\/\/ Create client\n\tclient := &http.Client{}\n\n\t\/\/ Create request \n\treq, err := http.NewRequest(\"GET\", \"https:\/\/app.scrapingbee.com\/api\/v1\/chatgpt?api_key=YOUR-API-KEY&url=YOUR-URL&prompt=Explain&#43;the&#43;benefits&#43;of&#43;renewable&#43;energy&#43;in&#43;100&#43;words\", nil)\n\n\n\tparseFormErr := req.ParseForm()\n\tif parseFormErr != nil {\n\t\tfmt.Println(parseFormErr)\n\t}\n\n\t\/\/ Fetch Request\n\tresp, err := client.Do(req)\n\n\tif err != nil {\n\t\tfmt.Println(\"Failure : \", err)\n\t\treturn\n\t}\n\n\t\/\/ Read Response Body\n\trespBody, _ := ioutil.ReadAll(resp.Body)\n\n\t\/\/ Display Results\n\tfmt.Println(\"response Status : \", resp.Status)\n\tfmt.Println(\"response Headers : \", resp.Header)\n\tfmt.Println(\"response Body : \", string(respBody))\n}\n\nfunc main() {\n sendClassic()\n}<\/code><\/pre>\n <\/div>\n <div id=\"cli-4ca8484abc6dc4ccb5f1d28dfef5cdce\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\"># Install the ScrapingBee CLI:\n# pip install scrapingbee-cli\n\nscrapingbee chatgpt \"Explain the benefits of renewable energy in 100 words\"\n<\/code><\/pre>\n <\/div>\n <\/div>\n<\/div>\n\n<p>Here is a breakdown of all the parameters you can use with the GPT API:<\/p>"},{"title":"Grailed Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/grailed-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/grailed-api\/","description":{}},{"title":"Grocery Data Scraper API - Free Signup and 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/grocery-data-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/grocery-data-api\/","description":{}},{"title":"Gumroad Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/gumroad-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/gumroad-scraper-api\/","description":{}},{"title":"Gumtree Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/gumtree-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/gumtree-api\/","description":{}},{"title":"H&M Scraper API - Easy Access & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/hm-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/hm-scraper-api\/","description":{}},{"title":"Home Depot Scraper API","link":"https:\/\/www.scrapingbee.com\/scrapers\/homedepot-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/homedepot-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Access Home Depot\\u0027s vast product catalog with our powerful web scraping API. 
Get prices, specifications, and availability across product categories with unmatched reliability.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope 
itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Home Depot Scraper API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Home Depot Scraper API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Access Home Depot&#39;s vast product catalog with our powerful web scraping API. Get prices, specifications, and availability across product categories with unmatched reliability.<\/p>"},{"title":"Hotels.com Scraper API - Get Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/hotels.com-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/hotels.com-api\/","description":{}},{"title":"Housesigma Scraper API - Free Credits Upon Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/housesigma-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/housesigma-api\/","description":{}},{"title":"Houzz Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/houzz-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/houzz-api\/","description":{}},{"title":"How to extract a table's content in NodeJS","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-nodejs\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-nodejs\/","description":"<p>Data can be found online in various formats, but the most popular one is table format, especially since it displays information in a very structured and well organized layout. So it is very important to be able to extract data from tables with ease.<\/p>\n<p>And this is one of the most important features of ScrapingBee's data extraction tool: you can scrape data from tables without having to do any post-processing of the HTML response. 
We can use this feature by specifying a table's CSS selector within a set of <code>extract_rules<\/code>, and let ScrapingBee do the rest!<\/p>"},{"title":"How to extract a table's content in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-python\/","description":"<p>Data can be found online in various formats, but the most popular one is table format, especially since it displays information in a very structured and well organized layout. So it is very important to be able to extract data from tables with ease.\u00a0<\/p>\n<p>And this is one of the most important features of ScrapingBee's data extraction tool: you can scrape data from tables without having to do any post-processing of the HTML response. We can use this feature by specifying a table's CSS selector within a set of\u00a0<code>extract_rules<\/code>, and let ScrapingBee do the rest!<\/p>"},{"title":"How to extract a table's content in Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-a-tables-content-in-ruby\/","description":"<p>Data can be found online in various formats, but the most popular one is table format, especially since it displays information in a very structured and well organized layout. So it is very important to be able to extract data from tables with ease.<\/p>\n<p>And this is one of the most important features of ScrapingBee's data extraction tool: you can scrape data from tables without having to do any post-processing of the HTML response. 
We can use this feature by specifying a table's CSS selector within a set of <code>extract_rules<\/code>, and let ScrapingBee do the rest!<\/p>"},{"title":"How to extract CSS selectors using Chrome","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-css-selectors-using-chrome\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-extract-css-selectors-using-chrome\/","description":"<p>Finding the CSS selector of an element you want to scrape can be tricky at times. This is why we can use the\u00a0<strong>Inspect Element<\/strong>\u00a0feature in most modern browsers to extract the selector with ease.<\/p>\n<p>The process is very simple: first we find the element and right click on it, then click on Inspect Element. The developer tools window will show up with the element highlighted. We then right click on the selected HTML code, go to Copy, and click on Copy selector.<\/p>"},{"title":"How to handle infinite scroll pages in Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-go\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape infinite scroll web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. 
Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"How to handle infinite scroll pages in NodeJS","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-nodejs\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-nodejs\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape infinite scroll web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"How to handle infinite scroll pages in PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-php\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. 
One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape infinite scroll web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"How to handle infinite scroll pages in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape infinite scroll web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. 
Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"How to handle infinite scroll pages in Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-handle-infinite-scroll-pages-in-ruby\/","description":"<p>Nowadays, most websites use different methods and techniques to decrease the load and data served to their clients\u2019 devices. One of these techniques is the infinite scroll.<\/p>\n<p>In this tutorial, we will see how we can scrape infinite scroll web pages using a\u00a0<a href=\"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/\" >js_scenario<\/a>, specifically the\u00a0<code>scroll_y<\/code>\u00a0and\u00a0<code>scroll_x<\/code>\u00a0features. And we will use\u00a0<a href=\"https:\/\/demo.scrapingbee.com\/infinite_scroll.html\" >this page<\/a>\u00a0as a demo. Only 9 boxes are loaded when we first open the page, but as soon as we scroll to the end of it, we will load 9 more, and that will keep happening each time we scroll to the bottom of the page.<\/p>"},{"title":"How to make screenshots in C#","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-c\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-c\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.\u00a0<\/p>"},{"title":"How to make screenshots in Go","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-go\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-go\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.<\/p>"},{"title":"How to make screenshots in NodeJS","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-nodejs\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-nodejs\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.<\/p>"},{"title":"How to make screenshots in PHP","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-php\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-php\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.\u00a0<\/p>"},{"title":"How to make screenshots in Python","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-python\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-python\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.<\/p>"},{"title":"How to make screenshots in Ruby","link":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-ruby\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/tutorials\/how-to-make-screenshots-in-ruby\/","description":"<p>Taking a screenshot of your website is very straightforward using ScrapingBee. 
You can either take a screenshot of the visible portion of the page, the whole page, or an element of the page.<\/p>\n<p>That can be done by specifying one of these parameters with your request:<\/p>\n<ul>\n<li><code>screenshot<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_full_page<\/code>\u00a0to\u00a0<strong>true<\/strong>\u00a0or\u00a0<strong>false<\/strong>.<\/li>\n<li><code>screenshot_selector<\/code>\u00a0to the CSS selector of the element.<\/li>\n<\/ul>\n<p>In this tutorial, we will see how to take a screenshot of ScrapingBee\u2019s\u00a0<a href=\"https:\/\/www.scrapingbee.com\/blog\/\" >blog<\/a>\u00a0using the three methods.<\/p>"},{"title":"Idealista Scraper API Tool - Free Credits & Easy Setup","link":"https:\/\/www.scrapingbee.com\/scrapers\/idealista-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/idealista-api\/","description":{}},{"title":"Ingatlan.com Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/ingatlan.com-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/ingatlan.com-api\/","description":{}},{"title":"Images Results Scraper API - Simplified & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/images-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/images-results-api\/","description":{}},{"title":"Imdb Scraper API - Easy Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/imdb-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/imdb-scraper-api\/","description":{}},{"title":"Immoweb Scraper API - Get Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/immoweb-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/immoweb-api\/","description":{}},{"title":"Investopedia Scraper API - Simple Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/investopedia-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/investopedia-scraper-api\/","description":{}},{"title":"IPRoyal alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/iproyal-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/iproyal-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">IPRoyal alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a modern alternative to IPRoyal. Want a production-ready web scraping API that\u2019s simple to integrate, easy to scale, and designed for clean outputs? 
ScrapingBee turns any URL into reliable HTML, JSON, or LLM-ready Markdown.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block 
mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">All-in-one scraping API, built for production.<\/h3>\n <p>ScrapingBee helps you collect web data reliably with one clean API call\u2014HTML, extracted JSON, or LLM-ready Markdown\u2014while we handle rendering, proxies, and anti-bot hurdles.<\/p>"},{"title":"JavaScript Scenario","link":"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/js-scenario\/","description":"<blockquote>\n<p>\ud83d\udca1 <strong>Important<\/strong>:<br>This page explains how to use a specific feature of our main <a href=\"https:\/\/www.scrapingbee.com\/\" >web scraping API<\/a>!<br>If you are not yet familiar with ScrapingBee's web scraping API, you can read the documentation <a href=\"https:\/\/www.scrapingbee.com\/documentation\" >here<\/a>.<\/p>\n<\/blockquote>\n<h2 id=\"basic-usage\">Basic usage<\/h2>\n<p>If you want to interact with the pages you want to scrape before we return the HTML, you can add a JavaScript scenario to your API call.<\/p>\n<p>For example, if you wish to click on a button, you will need to use this scenario.<\/p>"},{"title":"JavaScript Web Scraping API","link":"https:\/\/www.scrapingbee.com\/features\/javascript-scenario\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/features\/javascript-scenario\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Web scraping using JavaScript has never been more simple. Need to scroll, click, fill inputs or else? - We\\u0027ve got you covered.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">JavaScript Web Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Web scraping using JavaScript has never been more simple. Need to scroll, click, fill inputs or else? - We&#39;ve got you covered.<\/p>"},{"title":"Jiosaavn Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/jiosaavn-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/jiosaavn-api\/","description":{}},{"title":"Kadoa alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/kadoa-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/kadoa-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Kadoa alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a developer-first alternative to Kadoa. 
Need a fast, controllable scraping API that outputs clean HTML, JSON, or Markdown\/text? ScrapingBee keeps your stack simple and automation-ready.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n 
\n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Simple integration. Precise control.<\/h3>\n <p>ScrapingBee turns any URL into reliable web data with practical features like Proxy Mode, XHR interception, element screenshots, and clean outputs for your pipeline.<\/p>"},{"title":"Kayak Scraper API - Free Credits with Simple Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/kayak-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/kayak-api\/","description":{}},{"title":"Kickstarter Scraper API - Simple Use & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/kickstarter-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/kickstarter-scraper-api\/","description":{}},{"title":"Kiwi.Com Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/kiwi.com-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/kiwi.com-scraper-api\/","description":{}},{"title":"Kleinanzeigen Scraper API - Get Free Credits at Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/kleinanzeigen-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/kleinanzeigen-api\/","description":{}},{"title":"Lazada Data 
Scraper API Tool - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/lazada-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/lazada-api\/","description":{}},{"title":"Legal Notices","link":"https:\/\/www.scrapingbee.com\/legal-notices\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/legal-notices\/","description":"<h2 id=\"1-website-publisher\">1. Website Publisher<\/h2>\n<p>VostokInc, a joint-stock company (\u201csoci\u00e9t\u00e9 par actions simplifi\u00e9e\u201d) with registered address located at 66 Avenue des Champs \u00c9lys\u00e9es \u2013 75008 Paris and registered before the Company House of Paris under number 843 352 683, is the publisher of the website <a href=\"https:\/\/www.scrapingbee.com\/\" >https:\/\/www.scrapingbee.com\/<\/a> (the \u201cWebsite\u201d).<\/p>\n<p>Email: <a href=\"mailto:contact@scrapingbee.com\" >contact@scrapingbee.com<\/a><\/p>\n<p>The Publishing Director is Kevin SAHIN as legal representative of VostokInc.<\/p>\n<h2 id=\"2-hosting-provider\">2. 
Hosting provider<\/h2>\n<p>The website is hosted with NETLIFY:<br>\n<strong>Address:<\/strong><br>\nNetlify Inc.<br>\n512 2nd Street Fl 2<br>\nSan Francisco CA 94107<br>\nUSA<br>\n<strong>Contact information:<\/strong> <a href=\"mailto:fraud@netlify.com\" >fraud@netlify.com<\/a><\/p>"},{"title":"Local Results Scraper API - Simplified Access & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/local-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/local-results-api\/","description":{}},{"title":"LoopNet Scraper API - Simple Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/loopnet-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/loopnet-api\/","description":{}},{"title":"Lowes Scraper API - Simplified Access, Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/lowes-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/lowes-api\/","description":{}},{"title":"Luminati alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/luminati-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/luminati-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Luminati alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">Stop paying exorbitant fees for web scraping. 
Get all the data you need at a drastically better price.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Powerful proxies, without the ridiculous price tag.<\/h3>\n <p>ScrapingBee starts at only $29\/mo compared to Luminati&#39;s outrageous prices; see for yourself! And you always know what you\u2019re going to pay. 
No surprises!<\/p>"},{"title":"Macys Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/macys-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/macys-api\/","description":{}},{"title":"Magicbricks Scraper API - Get Free Credits at Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/magicbricks-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/magicbricks-api\/","description":{}},{"title":"Marinetraffic Scraper API - Simple Setup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/marinetraffic-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/marinetraffic-api\/","description":{}},{"title":"Marketplace Scraper with Free Credits - Easy Setup and Use","link":"https:\/\/www.scrapingbee.com\/scrapers\/marketplace-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/marketplace-api\/","description":{}},{"title":"Mediamarkt Scraper API - Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/mediamarkt-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/mediamarkt-api\/","description":{}},{"title":"Medium Scraper API - Effortless Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/medium-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/medium-api\/","description":{}},{"title":"Meesho Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/meesho-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/meesho-scraper-api\/","description":{}},{"title":"Meetup Scraper API - Free Credits Available","link":"https:\/\/www.scrapingbee.com\/scrapers\/meetup-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/meetup-api\/","description":{}},{"title":"Mercadolibre Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/mercadolibre-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/mercadolibre-api\/","description":{}},{"title":"Mercari Scraper API - Simple Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/mercari-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/mercari-api\/","description":{}},{"title":"MLS Scraper with Free Credits - Easy-to-Use Data Extraction","link":"https:\/\/www.scrapingbee.com\/scrapers\/multiple-listing-service-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/multiple-listing-service-api\/","description":{}},{"title":"Monster Scraper API - Free Signup, Credits Included","link":"https:\/\/www.scrapingbee.com\/scrapers\/monster-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/monster-api\/","description":{}},{"title":"Mozenda alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/mozenda-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/mozenda-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Mozenda alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a modern alternative to Mozenda. Need a web scraping workflow that\u2019s API-first, fast to integrate, and built to scale? 
ScrapingBee turns any URL into reliable HTML, JSON, or extracted data without managing browsers or proxy pools.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong 
class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Fast setup. Predictable scaling.<\/h3>\n <p>ScrapingBee gives you a clean, production-ready scraping API\u2014so you can go from URL to usable data with minimal setup and maximum reliability.<\/p>"},{"title":"Myntra Scraper API - Free Credits Available","link":"https:\/\/www.scrapingbee.com\/scrapers\/myntra-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/myntra-api\/","description":{}},{"title":"Naukri Scraper API - Free Credits Available at Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/naukri-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/naukri-api\/","description":{}},{"title":"Naver Images Scraper API - Get Free Credits Now","link":"https:\/\/www.scrapingbee.com\/scrapers\/naver-images-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/naver-images-api\/","description":{}},{"title":"Naver Search Results Scraper API - Simple Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/naver-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/naver-search-api\/","description":{}},{"title":"Netflix Scraper API - Free Starting Credits Upon Sign 
Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/netflix-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/netflix-api\/","description":{}},{"title":"Netnut alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/netnut-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/netnut-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Netnut alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Netnut. Avoid paying exorbitant rates for your web scraping.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 
pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee simplified our <strong>day-to-day marketing and engineering operations a lot<\/strong>. We no longer have to worry about managing our own fleet of headless browsers, and we no longer have to spend days sourcing the right proxy provider<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/mike.png\" alt=\"Mike Ritchie\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Mike Ritchie\n \n <\/strong>\n \n <span class=\"text-[15px] block\">CEO @ <a href=\"https:\/\/seekwell.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">SeekWell<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Strong proxies, without the ludicrous price tag.<\/h3>\n <p>Compared to Netnut&#39;s outrageous rates, ScrapingBee begins at only $29\/mo, see for yourself! And you will always know what you&#39;ll pay for. 
No surprises whatsoever!<\/p>"},{"title":"Newegg Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/newegg-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/newegg-api\/","description":{}},{"title":"News Results Scraper API - Simplicity & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/news-results-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/news-results-api\/","description":{}},{"title":"Nextdoor Scraper API - Easy Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/nextdoor-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/nextdoor-scraper-api\/","description":{}},{"title":"Nike Scraper API - Simple Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/nike-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/nike-scraper-api\/","description":{}},{"title":"Nimble alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/nimble-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/nimble-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Nimble alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Nimble. 
When simplicity, performance, and cost matter\u2014some tools just don\u2019t stack up.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block 
mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">More than contacts. Scrape anything.<\/h3>\n <p>Nimble is great if you only want lead data. But if you need broader scraping\u2014products, listings, news\u2014Nimble won\u2019t cut it.<\/p>"},{"title":"No Code Web Scraper - Make Integration","link":"https:\/\/www.scrapingbee.com\/features\/make\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/make\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Enjoy no code web scraping with ScrapingBee. Integrate with most of your mainstream tools.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">No Code Web Scraper - Make Integration<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Enjoy no code web scraping with ScrapingBee. 
Integrate with most of your mainstream tools.<\/p>"},{"title":"No Code Web Scraper - n8n Integration","link":"https:\/\/www.scrapingbee.com\/features\/n8n\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/n8n\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Enjoy no code web scraping with ScrapingBee. Integrate with n8n to automate your workflows.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">No Code Web Scraper - n8n Integration<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Enjoy no code web scraping with ScrapingBee. Integrate with n8n to automate your workflows.<\/p>"},{"title":"No Code Web Scraper - Zapier Integration","link":"https:\/\/www.scrapingbee.com\/features\/zapier\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/zapier\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Enjoy no code web scraping with ScrapingBee. 
Integrate with most of your mainstream tools.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">No Code Web Scraper - Zapier Integration<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Enjoy no code web scraping with ScrapingBee. Integrate with most of your mainstream tools.<\/p>"},{"title":"Octoparse alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/octoparse-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/octoparse-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Octoparse alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Octoparse. 
If your current scraping solution feels limited or overpriced, it might be time for a change.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold 
block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Outgrown point-and-click tools? You're not alone.<\/h3>\n <p>Visual scrapers like Octoparse are great for beginners\u2014but quickly become painful to scale. If you're tired of GUIs, ScrapingBee gives you API-first power and flexibility.<\/p>"},{"title":"Offerup Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/offerup-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/offerup-api\/","description":{}},{"title":"OLX Scraper API - Easy Signup, Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/olx-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/olx-api\/","description":{}},{"title":"Onthemarket Scraper API - Simple Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/onthemarket-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/onthemarket-scraper-api\/","description":{}},{"title":"OpenAI Scraper API - Signup for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/openai-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/openai-api\/","description":{}},{"title":"Opensea Scraper API - Free Credits Upon Sign 
Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/opensea-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/opensea-api\/","description":{}},{"title":"Opentable Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/opentable-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/opentable-api\/","description":{}},{"title":"Otodom Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/otodom-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/otodom-scraper-api\/","description":{}},{"title":"Oxylabs alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/oxylabs-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/oxylabs-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Oxylabs alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Oxylabs. 
Looking for more reliable, affordable, and scalable scraping solutions without the complexity?<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold 
block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just proxies. Not just high costs.<\/h3>\n <p>Oxylabs offers great proxy services\u2014but at a premium. Why pay more for limited proxy pools when you can access the full web for less?<\/p>"},{"title":"ParseHub alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/parsehub-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/parsehub-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ParseHub alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ParseHub. 
If you&#39;re seeking a more user-friendly interface, better pricing, and increased functionality, it may be time to explore other options.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n 
<span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No GUI. No complexity. Just powerful APIs.<\/h3>\n <p>ParseHub is perfect for beginners, but if you need something scalable and flexible, you need an API-first approach with powerful customization.<\/p>"},{"title":"Patreon Scraper API Tool - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/patreon-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/patreon-api\/","description":{}},{"title":"PhantomBuster alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/phantombuster-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/phantombuster-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">PhantomBuster alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a modern alternative to PhantomBuster for teams that want an API-first scraping workflow. 
Turn any URL into HTML, extracted JSON, or clean Markdown, without managing browsers, proxies, or brittle parsing code.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong
class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">API-first scraping that\u2019s easy to deploy.<\/h3>\n <p>ScrapingBee turns any URL into reliable web data with clean options for rendering and extraction\u2014so your workflows stay simple, debuggable, and ready to scale.<\/p>"},{"title":"Phone Number Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/phone-number\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/phone-number\/","description":{}},{"title":"Pitchbook Scraper API - Easy Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/pitchbook-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/pitchbook-scraper-api\/","description":{}},{"title":"Polymarket Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/polymarket-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/polymarket-api\/","description":{}},{"title":"Poshmark Scraper API - Get Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/poshmark-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/poshmark-api\/","description":{}},{"title":"Pricing - ScrapingBee Web Scraping 
API","link":"https:\/\/www.scrapingbee.com\/pricing\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/pricing\/","description":{}},{"title":"Privacy Policy","link":"https:\/\/www.scrapingbee.com\/privacy-policy\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/privacy-policy\/","description":"<h2 id=\"1-preamble\">1. Preamble<\/h2>\n<p>The purpose of this privacy policy (the <strong>\u201cPrivacy Policy\u201d<\/strong>) is to inform future prospects, customers, consultants, partners, service providers, and suppliers (including their employees), and more generally anyone browsing VostokInc's website at the following address <a href=\"https:\/\/www.scrapingbee.com\/\" >https:\/\/www.scrapingbee.com\/<\/a> (the <strong>\u201cWebsite\u201d<\/strong>) and using VostokInc\u2019s services, about how VostokInc, a joint-stock company (\u201csoci\u00e9t\u00e9 par actions simplifi\u00e9e\u201d) with registered address located at 66 Avenue des Champs \u00c9lys\u00e9es \u2013 75008 Paris \u2013 France and registered before the Company House of Paris under number 843 352 683 (<strong>\u201cVostokInc\u201d<\/strong> or <strong>\u201cWe\u201d<\/strong>), processes Personal Data in its capacity as data controller, and of their rights in this respect.<\/p>"},{"title":"Product Hunt Scraper API - Easy Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/product-hunt-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/product-hunt-scraper-api\/","description":{}},{"title":"Properati Scraper API - Easy Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/properati-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/properati-scraper-api\/","description":{}},{"title":"Propertyguru Scraper API - Free Credits on
Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/propertyguru-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/propertyguru-api\/","description":{}},{"title":"Proxy Mode","link":"https:\/\/www.scrapingbee.com\/documentation\/proxy-mode\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/proxy-mode\/","description":"<h2 id=\"what-is-the-proxy-mode\">What is the proxy mode?<\/h2>\n<p>ScrapingBee also offers a proxy front-end to the API. This can make integration with third-party tools easier. The proxy mode only changes the way you access ScrapingBee; the ScrapingBee API then handles requests just like any standard request.<\/p>\n<p>Request cost, return codes, and default parameters are the same as for a standard no-proxy request.<\/p>\n<p><a href=\"#javascript-rendering\" >JavaScript rendering<\/a> is enabled by default; we recommend disabling it in proxy mode. The following credentials and configurations are used to access the proxy mode:<\/p>"},{"title":"PubMed Scraper API - Signup for Credits Free","link":"https:\/\/www.scrapingbee.com\/scrapers\/pubmed-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/pubmed-api\/","description":{}},{"title":"Quora Scraper API - Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/quora-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/quora-api\/","description":{}},{"title":"Rakuten Scraper API - Simple Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/rakuten-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/rakuten-scraper-api\/","description":{}},{"title":"Realtor.ca Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/realtor.ca-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/realtor.ca-api\/","description":{}},{"title":"Rebranding","link":"https:\/\/www.scrapingbee.com\/rebranding\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/rebranding\/","description":"<p>It's not a big change, so you might wonder: what's wrong with the Ninja?<\/p>\n<p>Why did we fall in love with the Bee \ud83d\udc1d?<\/p>\n<p>First, our company is based in France.<\/p>\n<p>France has strong legislation regarding trademark and domain name usage.<\/p>\n<p>Before launching ScrapingNinja, we brainstormed a lot of different names, looked at the available domain names, and checked different databases like <a href=\"https:\/\/www.inpi.fr\/fr\" >https:\/\/www.inpi.fr\/fr<\/a> and other European brand databases to make sure our domain\/brand was unique.<\/p>"},{"title":"Redfin Scraper API Tool - Simple Setup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/redfin-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/redfin-api\/","description":{}},{"title":"Redirecting...","link":"https:\/\/www.scrapingbee.com\/features\/google-shopping-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/google-shopping-api\/","description":{}},{"title":"Reuters Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/reuters-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/reuters-api\/","description":{}},{"title":"Review Scraper API Tool - Simple Setup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/review-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/review-api\/","description":{}},{"title":"Rightmove Scraper API - Free Credits Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/rightmove-api\/","pubDate":"Mon, 01 Jan 0001
00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/rightmove-api\/","description":{}},{"title":"Rotten Tomatoes Scraper API - Simple Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/rotten-tomatoes-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/rotten-tomatoes-scraper-api\/","description":{}},{"title":"RSS Scraper API Tool - Easy Setup & Free Starting Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/rss-feed-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/rss-feed-api\/","description":{}},{"title":"Rumble Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/rumble-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/rumble-scraper-api\/","description":{}},{"title":"Sainsburys Scraper API - Free Credits When You Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/sainsburys-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/sainsburys-api\/","description":{}},{"title":"Scrape Google Recipes Scraper API - Get Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-recipes-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-recipes-api\/","description":{}},{"title":"Scrape Google Short Videos - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/google-short-videos-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/google-short-videos-api\/","description":{}},{"title":"Scrape Nasdaq with API - Free Signup and Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/nasdaq-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/nasdaq-api\/","description":{}},{"title":"Scrape Tripadvisor with API - Free Signup and Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/tripadvisor-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tripadvisor-api\/","description":{}},{"title":"Scrape.do alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrape-do-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrape-do-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Scrape.do alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Scrape.do. Simplifying web scraping shouldn\u2019t mean sacrificing speed or reliability\u2014check out alternatives that give you more for less.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" 
src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">More than a scraper\u2014just pure data access.<\/h3>\n <p>Scrape.do may offer basic scraping, but it limits access to powerful tools. 
We give you everything you need\u2014no upsells.<\/p>"},{"title":"ScrapeHero alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapehero-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapehero-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScrapeHero alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ScrapeHero. Struggling with your current scraping solution? It\u2019s time to switch to a more efficient and affordable alternative.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div 
class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No gimmicks. Just web scraping at scale.<\/h3>\n <p>ScrapeHero offers scraping, but it's often locked behind complicated plans. 
Get what you need without all the complexity.<\/p>"},{"title":"ScrapeOwl alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapeowl-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapeowl-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScrapeOwl alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ScrapeOwl. Powerful scraping should be straightforward, cost-efficient, and easy to integrate into your workflow\u2014let\u2019s look at better alternatives.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] 
md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just limited scrapes. 
Full scraping flexibility.<\/h3>\n <p>ScrapeOwl provides basic scraping, but if you need something more powerful and customizable, you need a better alternative.<\/p>"},{"title":"ScraperAPI alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scraperapi-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scraperapi-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScraperAPI alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ScraperAPI. A better web scraping API, for 50% less.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] 
md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee\u2019s <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">More, for less!<\/h3>\n <p>Compared to ScraperAPI, ScrapingBee offers much more at a way better price!<\/p>"},{"title":"ScrapeStorm alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapestorm-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapestorm-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScrapeStorm alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">Looking for a ScrapeStorm alternative 
that\u2019s API-first and automation-friendly? ScrapingBee turns any URL into reliable HTML, extracted JSON, or clean Markdown\u2014so you can ship web data workflows fast.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee\u2019s <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic 
Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Built for automation. Ready for production.<\/h3>\n <p>ScrapingBee is a developer-first scraping API that turns any URL into reliable data\u2014HTML, extracted JSON, or clean Markdown\u2014while we handle rendering and proxy routing behind the scenes.<\/p>"},{"title":"Scrapfly alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapfly-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapfly-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Scrapfly alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Scrapfly. Need a scraping solution that\u2019s faster, more flexible, and less complicated? 
There are better options out there.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee\u2019s <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span 
class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No limited plans. No hidden extras.<\/h3>\n <p>Scrapfly might offer robust scraping, but it's complicated and pricey. We make it simple\u2014scrape without unnecessary costs.<\/p>"},{"title":"Scraping Fish alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapingfish-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapingfish-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Scraping Fish alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Scraping Fish. Web scraping should be easy, cost-effective, and hassle-free. 
It&#39;s time to consider alternatives that better meet your needs.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee\u2019s <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic 
Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just proxies. Real scraping solutions.<\/h3>\n <p>ScrapingFish focuses on proxies but doesn't give you the flexibility needed for broader scraping. We offer much more than just IPs\u2014get full scraping power without the added cost.<\/p>"},{"title":"ScrapingAnt alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapingant-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapingant-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScrapingAnt alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ScrapingAnt. Tired of dealing with complex setups or overblown pricing? 
Explore alternatives that make web scraping simpler and more affordable.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee\u2019s <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic 
Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just proxies or scraping\u2014complete solutions.<\/h3>\n <p>ScrapingAnt might offer simple scraping, but we give you everything you need\u2014reliable APIs, proxy support, and advanced features.<\/p>"},{"title":"ScrapingBee alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapingbee-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapingbee-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ScrapingBee alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is the most efficient web scraping API out there. 
Let&#39;s see how it compares to the other big names.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee simplified our <strong>day-to-day marketing and engineering operations a lot<\/strong>. 
We no longer have to worry about managing our own fleet of headless browsers, and we no longer have to spend days sourcing the right proxy provider<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/mike.png\" alt=\"Mike Ritchie\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Mike Ritchie\n \n <\/strong>\n \n <span class=\"text-[15px] block\">CEO @ <a href=\"https:\/\/seekwell.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">SeekWell<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-col\">\n <div class=\"-my-[2px] overflow-x-auto sm:-mx-[6px] lg:-mx-[8px]\">\n <div class=\"py-[2px] align-middle inline-block min-w-full sm:px-[24px] lg:px-[8px]\">\n <div class=\"overflow-hidden border-b border-gray-200\">\n <table class=\"min-w-full divide-y divide-gray-200\">\n <thead class=\"bg-black-100 text-white\">\n <tr>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-left text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Service\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n API\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Proxy Mode\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Geolocation\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Price per GB\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] 
text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Minimum monthly commitment\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Success Rate **\n <\/th>\n <th scope=\"col\" class=\"px-[24px] py-[3px] text-center text-xs font-20 text-gray-500 uppercase tracking-wider\">\n Average query duration **\n <\/th>\n <\/tr>\n <\/thead>\n <tbody class=\"bg-white divide-y divide-gray-200\">\n <tr>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap\">\n <div class=\"flex items-center\">\n <div class=\"flex-shrink-0 h-[30px] w-[30px]\">\n <img class=\"h-[30px] w-[30px]\" src=\"https:\/\/www.scrapingbee.com\/images\/favico.png\" alt=\"Photo of Favico\">\n <\/div>\n <div class=\"ml-[20px]\">\n <div class=\"text-[20px] font-weight-bold text-black-100\">\n ScrapingBee\n <\/div>\n <\/div>\n <\/div>\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-green-1000\">\n <div class=\"text-center text-green-1000\">\n <svg class=\"h-[35px] w-[35px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-green-1000\">\n <div class=\"text-center text-green-1000\">\n <svg class=\"h-[35px] w-[35px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-green-1000\">\n <div class=\"text-center text-green-1000\">\n <svg class=\"h-[35px] w-[35px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" 
viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-[20px] font-20 text-green-1000\">\n $0*\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-[20px] font-20 text-green-1000\">\n $49\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-[20px] font-20 text-green-1000\">\n 98%\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-[20px] font-20 text-green-1000\">\n 3.14s\n <\/td>\n <\/tr>\n <tr>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap\">\n <div class=\"ml-[4px]\">\n <div class=\"text-[20px] font-weight-bold\">\n Luminati\n <\/div>\n <\/div>\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M6 18L18 6M6 6l12 12\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n 
<\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $0.1\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $500\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 95%\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 5.12s\n <\/td>\n <\/tr>\n <tr>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap\">\n <div class=\"ml-[4px]\">\n <div class=\"text-[20px] font-weight-bold\">\n Netnut\n <\/div>\n <\/div>\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $15\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $300\n <\/td>\n 
<td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 96%\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 5.13s\n <\/td>\n <\/tr>\n <tr>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap\">\n <div class=\"ml-[4px]\">\n <div class=\"text-[20px] font-weight-bold\">\n Proxyscrape (free)\n <\/div>\n <\/div>\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M6 18L18 6M6 6l12 12\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $0\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $0\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 45%\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 13.6s\n <\/td>\n <\/tr>\n <tr>\n <td class=\"px-[24px] 
py-[16px] whitespace-nowrap\">\n <div class=\"ml-[4px]\">\n <div class=\"text-[20px] font-weight-bold\">\n Freeproxycz (free)\n <\/div>\n <\/div>\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M6 18L18 6M6 6l12 12\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center\">\n\n <div class=\"text-center\">\n <svg class=\"h-[30px] w-[30px]\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" style=\"margin:auto\" fill=\"none\" viewBox=\"0 0 24 24\" stroke=\"currentColor\">\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M5 13l4 4L19 7\"><\/path>\n <\/svg>\n <\/div>\n\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $0\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n $0\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 25%\n <\/td>\n <td class=\"px-[24px] py-[16px] whitespace-nowrap text-center text-md font-20\">\n 12.73s\n <\/td>\n <\/tr>\n\n <\/tbody>\n <\/table>\n <\/div>\n <div class=\"pt-[30px]\">\n <tiny>* request-based pricing, ** benchmarks available <a href=\"https:\/\/www.scrapingbee.com\/blog\/rotating-proxies\/\">here<\/a> and <a 
href=\"https:\/\/www.scrapingbee.com\/blog\/best-free-proxy-list-web-scraping\/\">here<\/a>, *** 60 IPs offer<\/tiny>\n <\/div>\n <\/div>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">Great SaaS tool for legitimate scraping and data extraction. <strong>ScrapingBee makes it easy to automatically pull down data from the sites<\/strong> that publish periodic data in a human-readable format.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/andy.jpeg\" alt=\"Andy Hawkes\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Andy Hawkes\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Founder @ <a href=\"https:\/\/loadster.app\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">Loadster<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[70px] md:py-[91px]\">\n <div class=\"container text-center mb-[36px]\">\n <span class=\"block -mb-[4px] uppercase text-black-100\">FEATURES<\/span>\n <h3 class=\"leading-[1.16] mb-[18px] tracking-[0.2px] text-black-100\">Conservative pricing. 
Radical power.<\/h3>\n <h4 class=\"text-gray-200 tracking-[0.2px]\">Hassle-free web scraping API.<\/h4>\n <\/div>\n <div class=\"container max-w-[1308px]\">\n <div class=\"flex flex-wrap text-gray-200 text-[16px] leading-[1.50] -my-[19px] -mx-[20px] md:-mx-[36px] pb-[45px]\">\n \n <div class=\"w-full sm:w-1\/2 py-[19px] px-[20px] md:px-[36px]\">\n <div class=\"relative pl-[43px]\">\n <div class=\"absolute left-[0] top-[3px] w-[20px] text-center\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon-earth.svg\" class=\"inline-block\" width=\"20\" alt=\"\">\n <\/div>\n <h4 class=\"mb-[4px] text-black-100\">Smart routing<\/h4>\n <p>To ensure a performance rate of 98%, our smart routing algorithms will always pick the right proxies for your needs.<\/p>"},{"title":"Scrapingdog alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/scrapingdog-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapingdog-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Scrapingdog alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Scrapingdog. Looking for an easier, cheaper, and more reliable scraping solution? 
There are great alternatives to Scrapingdog out there.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n 
<\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just data\u2014structured, actionable results.<\/h3>\n <p>Scrapingdog does the job, but if you're after flexible, scalable scraping, we offer better solutions that fit your needs.<\/p>"},{"title":"Screenshot API for Developers","link":"https:\/\/www.scrapingbee.com\/features\/screenshot\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/screenshot\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Programmatic Screenshot API for any website with just a simple API call, in seconds.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">Screenshot API for Developers<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Programmatic Screenshot API for any website with just a simple API call, in 
seconds.<\/p>"},{"title":"Sec Filings Scraper API - Easy Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/sec-filings-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/sec-filings-scraper-api\/","description":{}},{"title":"Seeking Alpha Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/seeking-alpha-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/seeking-alpha-api\/","description":{}},{"title":"Sephora API Scraper - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/sephora-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/sephora-api\/","description":{}},{"title":"SerpApi alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/serpapi-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/serpapi-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">SerpApi alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to SerpApi. 
Web scraping shouldn&#39;t cost a fortune\u2014or require a full-time engineer to manage.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block 
mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just search engines. Not just inflated costs.<\/h3>\n <p>SerpAPI is built for one job\u2014<a href=\"https:\/\/www.scrapingbee.com\/blog\/how-to-scrape-google-search-results-data-in-python-easily\/\">scraping search engines<\/a>\u2014but it comes at a steep price. Why pay more for a limited tool when you can scrape the entire web for less?<\/p>"},{"title":"SHEIN Scraper API - Easy Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/shein-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/shein-scraper-api\/","description":{}},{"title":"Shopee Scraper API Tool - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/shopee-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/shopee-api\/","description":{}},{"title":"Shopify Scraper API - Easy Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/shopify-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/shopify-scraper-api\/","description":{}},{"title":"Skyscanner Scraper API - Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/skyscanner-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/skyscanner-api\/","description":{}},{"title":"Slickdeals Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/slickdeals-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/slickdeals-api\/","description":{}},{"title":"Smartproxy alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/smartproxy-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/smartproxy-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Smartproxy alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">Pay the fair price for your web scraping needs.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] 
mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee is helping us scrape <strong>many job boards and company websites<\/strong> without having to deal with proxies or chrome browsers. It drastically simplified our data pipeline<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/russel.jpeg\" alt=\"Russel Taylor\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Russel Taylor\n \n <\/strong>\n \n <span class=\"text-[15px] block\">CEO @ HelloOutbound<\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Fulfill your web scraping needs, at a better price<\/h3>\n <p>Switching from Smartproxy to ScrapingBee could save you some serious money. 
Especially if you are using their residential proxies.<\/p>"},{"title":"Snapchat Scraper API - Simple Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/snapchat-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/snapchat-scraper-api\/","description":{}},{"title":"Social Media Scraper API - Simple Setup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/social-media-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/social-media-api\/","description":{}},{"title":"SoundCloud Scraper API Tool - Start with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/soundcloud-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/soundcloud-api\/","description":{}},{"title":"Spaw.co alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/spaw-co-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/spaw-co-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Spaw.co alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Spaw.co. 
When scraping needs to be fast, scalable, and hassle-free, consider alternatives that make web data extraction a breeze.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong 
class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No limits, no extra charges\u2014just pure scraping power.<\/h3>\n <p>Spaw.co offers scraping but limits access to certain features unless you upgrade. Get the full package with no upsells.<\/p>"},{"title":"Spotify Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/spotify-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/spotify-api\/","description":{}},{"title":"Sreality Scraper API - Get Free Credits Upon Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/sreality-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/sreality-api\/","description":{}},{"title":"Steam Scraper API - Simple Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/steam-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/steam-scraper-api\/","description":{}},{"title":"Stepstone Scraper API - Start Free with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/scrapers\/stepstone-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/stepstone-api\/","description":{}},{"title":"Stockx Scraper API - Simple Access & Free Signup 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/stockx-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/stockx-scraper-api\/","description":{}},{"title":"StreetEasy Scraper API - Get Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/street-easy-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/street-easy-api\/","description":{}},{"title":"Substack Scraper API - Easy Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/substack-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/substack-scraper-api\/","description":{}},{"title":"Supported Countries","link":"https:\/\/www.scrapingbee.com\/documentation\/country_codes\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/country_codes\/","description":"<h2 id=\"list-of-supported-country-codes\">List of supported country codes<\/h2>\n<p>The following is the list of supported country codes in <a href=\"https:\/\/en.wikipedia.org\/wiki\/ISO_3166-1\" >ISO 3166-1 format<\/a>.<\/p>\n<p>Use a country code with the <code>country_code<\/code> parameter. 
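As a minimal sketch of how these two parameters fit together (assuming the standard ScrapingBee HTML API endpoint and a placeholder API key; `build_request_url` is an illustrative helper, not part of the API):

```python
# Sketch: building a geolocated ScrapingBee request URL.
# "YOUR_API_KEY" is a placeholder; country_code only takes effect
# when premium proxies are enabled via premium_proxy=true.
from urllib.parse import urlencode

SCRAPINGBEE_ENDPOINT = "https://app.scrapingbee.com/api/v1/"

def build_request_url(api_key: str, target_url: str, country_code: str) -> str:
    """Return the API URL that fetches target_url through a proxy in country_code."""
    params = {
        "api_key": api_key,
        "url": target_url,
        "premium_proxy": "true",       # geolocation requires premium proxies
        "country_code": country_code,  # ISO 3166-1 code from the table below, e.g. "de"
    }
    return SCRAPINGBEE_ENDPOINT + "?" + urlencode(params)

print(build_request_url("YOUR_API_KEY", "https://example.com", "de"))
```

Fetching the resulting URL with any HTTP client then routes the request through a proxy in the chosen country.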
Geolocation is only available when <a href=\"https:\/\/www.scrapingbee.com\/documentation\/#premium-proxy\" >premium proxies<\/a> are enabled: <code>premium_proxy=true<\/code>.<\/p>\n<table><thead><tr><th style=\"text-align:left\">Country Name<\/th><th style=\"text-align:left\">country_code<\/th><\/tr><\/thead><tbody><tr><td style=\"text-align:left\">Afghanistan<\/td><td style=\"text-align:left\">af<\/td><\/tr><tr><td style=\"text-align:left\">Albania<\/td><td style=\"text-align:left\">al<\/td><\/tr><tr><td style=\"text-align:left\">Algeria<\/td><td style=\"text-align:left\">dz<\/td><\/tr><tr><td style=\"text-align:left\">American Samoa<\/td><td style=\"text-align:left\">as<\/td><\/tr><tr><td style=\"text-align:left\">Andorra<\/td><td style=\"text-align:left\">ad<\/td><\/tr><tr><td style=\"text-align:left\">Angola<\/td><td style=\"text-align:left\">ao<\/td><\/tr><tr><td style=\"text-align:left\">Anguilla<\/td><td style=\"text-align:left\">ai<\/td><\/tr><tr><td style=\"text-align:left\">Antarctica<\/td><td style=\"text-align:left\">aq<\/td><\/tr><tr><td style=\"text-align:left\">Antigua &amp; Barbuda<\/td><td style=\"text-align:left\">ag<\/td><\/tr><tr><td style=\"text-align:left\">Argentina<\/td><td style=\"text-align:left\">ar<\/td><\/tr><tr><td style=\"text-align:left\">Armenia<\/td><td style=\"text-align:left\">am<\/td><\/tr><tr><td style=\"text-align:left\">Aruba<\/td><td style=\"text-align:left\">aw<\/td><\/tr><tr><td style=\"text-align:left\">Australia<\/td><td style=\"text-align:left\">au<\/td><\/tr><tr><td style=\"text-align:left\">Austria<\/td><td style=\"text-align:left\">at<\/td><\/tr><tr><td style=\"text-align:left\">Azerbaijan<\/td><td style=\"text-align:left\">az<\/td><\/tr><tr><td style=\"text-align:left\">Bahama<\/td><td style=\"text-align:left\">bs<\/td><\/tr><tr><td style=\"text-align:left\">Bahrain<\/td><td style=\"text-align:left\">bh<\/td><\/tr><tr><td style=\"text-align:left\">Bangladesh<\/td><td style=\"text-align:left\">bd<\/td><\/tr><tr><td 
style=\"text-align:left\">Barbados<\/td><td style=\"text-align:left\">bb<\/td><\/tr><tr><td style=\"text-align:left\">Belarus<\/td><td style=\"text-align:left\">by<\/td><\/tr><tr><td style=\"text-align:left\">Belgium<\/td><td style=\"text-align:left\">be<\/td><\/tr><tr><td style=\"text-align:left\">Belize<\/td><td style=\"text-align:left\">bz<\/td><\/tr><tr><td style=\"text-align:left\">Benin<\/td><td style=\"text-align:left\">bj<\/td><\/tr><tr><td style=\"text-align:left\">Bermuda<\/td><td style=\"text-align:left\">bm<\/td><\/tr><tr><td style=\"text-align:left\">Bhutan<\/td><td style=\"text-align:left\">bt<\/td><\/tr><tr><td style=\"text-align:left\">Bolivia<\/td><td style=\"text-align:left\">bo<\/td><\/tr><tr><td style=\"text-align:left\">Bosnia and Herzegovina<\/td><td style=\"text-align:left\">ba<\/td><\/tr><tr><td style=\"text-align:left\">Botswana<\/td><td style=\"text-align:left\">bw<\/td><\/tr><tr><td style=\"text-align:left\">Bouvet Island<\/td><td style=\"text-align:left\">bv<\/td><\/tr><tr><td style=\"text-align:left\">Brazil<\/td><td style=\"text-align:left\">br<\/td><\/tr><tr><td style=\"text-align:left\">British Indian Ocean Territory<\/td><td style=\"text-align:left\">io<\/td><\/tr><tr><td style=\"text-align:left\">British Virgin Islands<\/td><td style=\"text-align:left\">vg<\/td><\/tr><tr><td style=\"text-align:left\">Brunei Darussalam<\/td><td style=\"text-align:left\">bn<\/td><\/tr><tr><td style=\"text-align:left\">Bulgaria<\/td><td style=\"text-align:left\">bg<\/td><\/tr><tr><td style=\"text-align:left\">Burkina Faso<\/td><td style=\"text-align:left\">bf<\/td><\/tr><tr><td style=\"text-align:left\">Burma (no longer exists)<\/td><td style=\"text-align:left\">bu<\/td><\/tr><tr><td style=\"text-align:left\">Burundi<\/td><td style=\"text-align:left\">bi<\/td><\/tr><tr><td style=\"text-align:left\">Cambodia<\/td><td style=\"text-align:left\">kh<\/td><\/tr><tr><td style=\"text-align:left\">Cameroon<\/td><td 
style=\"text-align:left\">cm<\/td><\/tr><tr><td style=\"text-align:left\">Canada<\/td><td style=\"text-align:left\">ca<\/td><\/tr><tr><td style=\"text-align:left\">Cape Verde<\/td><td style=\"text-align:left\">cv<\/td><\/tr><tr><td style=\"text-align:left\">Cayman Islands<\/td><td style=\"text-align:left\">ky<\/td><\/tr><tr><td style=\"text-align:left\">Central African Republic<\/td><td style=\"text-align:left\">cf<\/td><\/tr><tr><td style=\"text-align:left\">Chad<\/td><td style=\"text-align:left\">td<\/td><\/tr><tr><td style=\"text-align:left\">Chile<\/td><td style=\"text-align:left\">cl<\/td><\/tr><tr><td style=\"text-align:left\">China<\/td><td style=\"text-align:left\">cn<\/td><\/tr><tr><td style=\"text-align:left\">Christmas Island<\/td><td style=\"text-align:left\">cx<\/td><\/tr><tr><td style=\"text-align:left\">Cocos (Keeling) Islands<\/td><td style=\"text-align:left\">cc<\/td><\/tr><tr><td style=\"text-align:left\">Colombia<\/td><td style=\"text-align:left\">co<\/td><\/tr><tr><td style=\"text-align:left\">Comoros<\/td><td style=\"text-align:left\">km<\/td><\/tr><tr><td style=\"text-align:left\">Congo<\/td><td style=\"text-align:left\">cg<\/td><\/tr><tr><td style=\"text-align:left\">Cook Islands<\/td><td style=\"text-align:left\">ck<\/td><\/tr><tr><td style=\"text-align:left\">Costa Rica<\/td><td style=\"text-align:left\">cr<\/td><\/tr><tr><td style=\"text-align:left\">Croatia<\/td><td style=\"text-align:left\">hr<\/td><\/tr><tr><td style=\"text-align:left\">Cuba<\/td><td style=\"text-align:left\">cu<\/td><\/tr><tr><td style=\"text-align:left\">Cyprus<\/td><td style=\"text-align:left\">cy<\/td><\/tr><tr><td style=\"text-align:left\">Czech Republic<\/td><td style=\"text-align:left\">cz<\/td><\/tr><tr><td style=\"text-align:left\">Czechoslovakia (no longer exists)<\/td><td style=\"text-align:left\">cs<\/td><\/tr><tr><td style=\"text-align:left\">C&ocirc;te d'Ivoire (Ivory Coast)<\/td><td style=\"text-align:left\">ci<\/td><\/tr><tr><td 
style=\"text-align:left\">Democratic Yemen (no longer exists)<\/td><td style=\"text-align:left\">yd<\/td><\/tr><tr><td style=\"text-align:left\">Denmark<\/td><td style=\"text-align:left\">dk<\/td><\/tr><tr><td style=\"text-align:left\">Djibouti<\/td><td style=\"text-align:left\">dj<\/td><\/tr><tr><td style=\"text-align:left\">Dominica<\/td><td style=\"text-align:left\">dm<\/td><\/tr><tr><td style=\"text-align:left\">Dominican Republic<\/td><td style=\"text-align:left\">do<\/td><\/tr><tr><td style=\"text-align:left\">East Timor<\/td><td style=\"text-align:left\">tp<\/td><\/tr><tr><td style=\"text-align:left\">Ecuador<\/td><td style=\"text-align:left\">ec<\/td><\/tr><tr><td style=\"text-align:left\">Egypt<\/td><td style=\"text-align:left\">eg<\/td><\/tr><tr><td style=\"text-align:left\">El Salvador<\/td><td style=\"text-align:left\">sv<\/td><\/tr><tr><td style=\"text-align:left\">Equatorial Guinea<\/td><td style=\"text-align:left\">gq<\/td><\/tr><tr><td style=\"text-align:left\">Eritrea<\/td><td style=\"text-align:left\">er<\/td><\/tr><tr><td style=\"text-align:left\">Estonia<\/td><td style=\"text-align:left\">ee<\/td><\/tr><tr><td style=\"text-align:left\">Ethiopia<\/td><td style=\"text-align:left\">et<\/td><\/tr><tr><td style=\"text-align:left\">Falkland Islands (Malvinas)<\/td><td style=\"text-align:left\">fk<\/td><\/tr><tr><td style=\"text-align:left\">Faroe Islands<\/td><td style=\"text-align:left\">fo<\/td><\/tr><tr><td style=\"text-align:left\">Fiji<\/td><td style=\"text-align:left\">fj<\/td><\/tr><tr><td style=\"text-align:left\">Finland<\/td><td style=\"text-align:left\">fi<\/td><\/tr><tr><td style=\"text-align:left\">France<\/td><td style=\"text-align:left\">fr<\/td><\/tr><tr><td style=\"text-align:left\">French Guiana<\/td><td style=\"text-align:left\">gf<\/td><\/tr><tr><td style=\"text-align:left\">French Polynesia<\/td><td style=\"text-align:left\">pf<\/td><\/tr><tr><td style=\"text-align:left\">French Southern Territories<\/td><td 
style=\"text-align:left\">tf<\/td><\/tr><tr><td style=\"text-align:left\">Gabon<\/td><td style=\"text-align:left\">ga<\/td><\/tr><tr><td style=\"text-align:left\">Gambia<\/td><td style=\"text-align:left\">gm<\/td><\/tr><tr><td style=\"text-align:left\">Georgia<\/td><td style=\"text-align:left\">ge<\/td><\/tr><tr><td style=\"text-align:left\">German Democratic Republic (no longer exists)<\/td><td style=\"text-align:left\">dd<\/td><\/tr><tr><td style=\"text-align:left\">Germany<\/td><td style=\"text-align:left\">de<\/td><\/tr><tr><td style=\"text-align:left\">Ghana<\/td><td style=\"text-align:left\">gh<\/td><\/tr><tr><td style=\"text-align:left\">Gibraltar<\/td><td style=\"text-align:left\">gi<\/td><\/tr><tr><td style=\"text-align:left\">Greece<\/td><td style=\"text-align:left\">gr<\/td><\/tr><tr><td style=\"text-align:left\">Greenland<\/td><td style=\"text-align:left\">gl<\/td><\/tr><tr><td style=\"text-align:left\">Grenada<\/td><td style=\"text-align:left\">gd<\/td><\/tr><tr><td style=\"text-align:left\">Guadeloupe<\/td><td style=\"text-align:left\">gp<\/td><\/tr><tr><td style=\"text-align:left\">Guam<\/td><td style=\"text-align:left\">gu<\/td><\/tr><tr><td style=\"text-align:left\">Guatemala<\/td><td style=\"text-align:left\">gt<\/td><\/tr><tr><td style=\"text-align:left\">Guinea<\/td><td style=\"text-align:left\">gn<\/td><\/tr><tr><td style=\"text-align:left\">Guinea-Bissau<\/td><td style=\"text-align:left\">gw<\/td><\/tr><tr><td style=\"text-align:left\">Guyana<\/td><td style=\"text-align:left\">gy<\/td><\/tr><tr><td style=\"text-align:left\">Haiti<\/td><td style=\"text-align:left\">ht<\/td><\/tr><tr><td style=\"text-align:left\">Heard &amp; McDonald Islands<\/td><td style=\"text-align:left\">hm<\/td><\/tr><tr><td style=\"text-align:left\">Honduras<\/td><td style=\"text-align:left\">hn<\/td><\/tr><tr><td style=\"text-align:left\">Hong Kong<\/td><td style=\"text-align:left\">hk<\/td><\/tr><tr><td style=\"text-align:left\">Hungary<\/td><td 
style=\"text-align:left\">hu<\/td><\/tr><tr><td style=\"text-align:left\">Iceland<\/td><td style=\"text-align:left\">is<\/td><\/tr><tr><td style=\"text-align:left\">India<\/td><td style=\"text-align:left\">in<\/td><\/tr><tr><td style=\"text-align:left\">Indonesia<\/td><td style=\"text-align:left\">id<\/td><\/tr><tr><td style=\"text-align:left\">Iraq<\/td><td style=\"text-align:left\">iq<\/td><\/tr><tr><td style=\"text-align:left\">Ireland<\/td><td style=\"text-align:left\">ie<\/td><\/tr><tr><td style=\"text-align:left\">Islamic Republic of Iran<\/td><td style=\"text-align:left\">ir<\/td><\/tr><tr><td style=\"text-align:left\">Israel<\/td><td style=\"text-align:left\">il<\/td><\/tr><tr><td style=\"text-align:left\">Italy<\/td><td style=\"text-align:left\">it<\/td><\/tr><tr><td style=\"text-align:left\">Jamaica<\/td><td style=\"text-align:left\">jm<\/td><\/tr><tr><td style=\"text-align:left\">Japan<\/td><td style=\"text-align:left\">jp<\/td><\/tr><tr><td style=\"text-align:left\">Jordan<\/td><td style=\"text-align:left\">jo<\/td><\/tr><tr><td style=\"text-align:left\">Kazakhstan<\/td><td style=\"text-align:left\">kz<\/td><\/tr><tr><td style=\"text-align:left\">Kenya<\/td><td style=\"text-align:left\">ke<\/td><\/tr><tr><td style=\"text-align:left\">Kiribati<\/td><td style=\"text-align:left\">ki<\/td><\/tr><tr><td style=\"text-align:left\">Korea, Democratic People's Republic of<\/td><td style=\"text-align:left\">kp<\/td><\/tr><tr><td style=\"text-align:left\">Korea, Republic of<\/td><td style=\"text-align:left\">kr<\/td><\/tr><tr><td style=\"text-align:left\">Kuwait<\/td><td style=\"text-align:left\">kw<\/td><\/tr><tr><td style=\"text-align:left\">Kyrgyzstan<\/td><td style=\"text-align:left\">kg<\/td><\/tr><tr><td style=\"text-align:left\">Lao People's Democratic Republic<\/td><td style=\"text-align:left\">la<\/td><\/tr><tr><td style=\"text-align:left\">Latvia<\/td><td style=\"text-align:left\">lv<\/td><\/tr><tr><td style=\"text-align:left\">Lebanon<\/td><td 
style=\"text-align:left\">lb<\/td><\/tr><tr><td style=\"text-align:left\">Lesotho<\/td><td style=\"text-align:left\">ls<\/td><\/tr><tr><td style=\"text-align:left\">Liberia<\/td><td style=\"text-align:left\">lr<\/td><\/tr><tr><td style=\"text-align:left\">Libyan Arab Jamahiriya<\/td><td style=\"text-align:left\">ly<\/td><\/tr><tr><td style=\"text-align:left\">Liechtenstein<\/td><td style=\"text-align:left\">li<\/td><\/tr><tr><td style=\"text-align:left\">Lithuania<\/td><td style=\"text-align:left\">lt<\/td><\/tr><tr><td style=\"text-align:left\">Luxembourg<\/td><td style=\"text-align:left\">lu<\/td><\/tr><tr><td style=\"text-align:left\">Macau<\/td><td style=\"text-align:left\">mo<\/td><\/tr><tr><td style=\"text-align:left\">Madagascar<\/td><td style=\"text-align:left\">mg<\/td><\/tr><tr><td style=\"text-align:left\">Malawi<\/td><td style=\"text-align:left\">mw<\/td><\/tr><tr><td style=\"text-align:left\">Malaysia<\/td><td style=\"text-align:left\">my<\/td><\/tr><tr><td style=\"text-align:left\">Maldives<\/td><td style=\"text-align:left\">mv<\/td><\/tr><tr><td style=\"text-align:left\">Mali<\/td><td style=\"text-align:left\">ml<\/td><\/tr><tr><td style=\"text-align:left\">Malta<\/td><td style=\"text-align:left\">mt<\/td><\/tr><tr><td style=\"text-align:left\">Marshall Islands<\/td><td style=\"text-align:left\">mh<\/td><\/tr><tr><td style=\"text-align:left\">Martinique<\/td><td style=\"text-align:left\">mq<\/td><\/tr><tr><td style=\"text-align:left\">Mauritania<\/td><td style=\"text-align:left\">mr<\/td><\/tr><tr><td style=\"text-align:left\">Mauritius<\/td><td style=\"text-align:left\">mu<\/td><\/tr><tr><td style=\"text-align:left\">Mayotte<\/td><td style=\"text-align:left\">yt<\/td><\/tr><tr><td style=\"text-align:left\">Mexico<\/td><td style=\"text-align:left\">mx<\/td><\/tr><tr><td style=\"text-align:left\">Micronesia<\/td><td style=\"text-align:left\">fm<\/td><\/tr><tr><td style=\"text-align:left\">Moldova, Republic of<\/td><td 
style=\"text-align:left\">md<\/td><\/tr><tr><td style=\"text-align:left\">Monaco<\/td><td style=\"text-align:left\">mc<\/td><\/tr><tr><td style=\"text-align:left\">Mongolia<\/td><td style=\"text-align:left\">mn<\/td><\/tr><tr><td style=\"text-align:left\">Montserrat<\/td><td style=\"text-align:left\">ms<\/td><\/tr><tr><td style=\"text-align:left\">Morocco<\/td><td style=\"text-align:left\">ma<\/td><\/tr><tr><td style=\"text-align:left\">Mozambique<\/td><td style=\"text-align:left\">mz<\/td><\/tr><tr><td style=\"text-align:left\">Myanmar<\/td><td style=\"text-align:left\">mm<\/td><\/tr><tr><td style=\"text-align:left\">Namibia<\/td><td style=\"text-align:left\">na<\/td><\/tr><tr><td style=\"text-align:left\">Nauru<\/td><td style=\"text-align:left\">nr<\/td><\/tr><tr><td style=\"text-align:left\">Nepal<\/td><td style=\"text-align:left\">np<\/td><\/tr><tr><td style=\"text-align:left\">Netherlands Antilles<\/td><td style=\"text-align:left\">an<\/td><\/tr><tr><td style=\"text-align:left\">Netherlands<\/td><td style=\"text-align:left\">nl<\/td><\/tr><tr><td style=\"text-align:left\">Neutral Zone (no longer exists)<\/td><td style=\"text-align:left\">nt<\/td><\/tr><tr><td style=\"text-align:left\">New Caledonia<\/td><td style=\"text-align:left\">nc<\/td><\/tr><tr><td style=\"text-align:left\">New Zealand<\/td><td style=\"text-align:left\">nz<\/td><\/tr><tr><td style=\"text-align:left\">Nicaragua<\/td><td style=\"text-align:left\">ni<\/td><\/tr><tr><td style=\"text-align:left\">Niger<\/td><td style=\"text-align:left\">ne<\/td><\/tr><tr><td style=\"text-align:left\">Nigeria<\/td><td style=\"text-align:left\">ng<\/td><\/tr><tr><td style=\"text-align:left\">Niue<\/td><td style=\"text-align:left\">nu<\/td><\/tr><tr><td style=\"text-align:left\">Norfolk Island<\/td><td style=\"text-align:left\">nf<\/td><\/tr><tr><td style=\"text-align:left\">Northern Mariana Islands<\/td><td style=\"text-align:left\">mp<\/td><\/tr><tr><td style=\"text-align:left\">Norway<\/td><td 
style=\"text-align:left\">no<\/td><\/tr><tr><td style=\"text-align:left\">Oman<\/td><td style=\"text-align:left\">om<\/td><\/tr><tr><td style=\"text-align:left\">Pakistan<\/td><td style=\"text-align:left\">pk<\/td><\/tr><tr><td style=\"text-align:left\">Palau<\/td><td style=\"text-align:left\">pw<\/td><\/tr><tr><td style=\"text-align:left\">Panama<\/td><td style=\"text-align:left\">pa<\/td><\/tr><tr><td style=\"text-align:left\">Papua New Guinea<\/td><td style=\"text-align:left\">pg<\/td><\/tr><tr><td style=\"text-align:left\">Paraguay<\/td><td style=\"text-align:left\">py<\/td><\/tr><tr><td style=\"text-align:left\">Peru<\/td><td style=\"text-align:left\">pe<\/td><\/tr><tr><td style=\"text-align:left\">Philippines<\/td><td style=\"text-align:left\">ph<\/td><\/tr><tr><td style=\"text-align:left\">Pitcairn<\/td><td style=\"text-align:left\">pn<\/td><\/tr><tr><td style=\"text-align:left\">Poland<\/td><td style=\"text-align:left\">pl<\/td><\/tr><tr><td style=\"text-align:left\">Portugal<\/td><td style=\"text-align:left\">pt<\/td><\/tr><tr><td style=\"text-align:left\">Puerto Rico<\/td><td style=\"text-align:left\">pr<\/td><\/tr><tr><td style=\"text-align:left\">Qatar<\/td><td style=\"text-align:left\">qa<\/td><\/tr><tr><td style=\"text-align:left\">Romania<\/td><td style=\"text-align:left\">ro<\/td><\/tr><tr><td style=\"text-align:left\">Russian Federation<\/td><td style=\"text-align:left\">ru<\/td><\/tr><tr><td style=\"text-align:left\">Rwanda<\/td><td style=\"text-align:left\">rw<\/td><\/tr><tr><td style=\"text-align:left\">R&eacute;union<\/td><td style=\"text-align:left\">re<\/td><\/tr><tr><td style=\"text-align:left\">Saint Lucia<\/td><td style=\"text-align:left\">lc<\/td><\/tr><tr><td style=\"text-align:left\">Samoa<\/td><td style=\"text-align:left\">ws<\/td><\/tr><tr><td style=\"text-align:left\">San Marino<\/td><td style=\"text-align:left\">sm<\/td><\/tr><tr><td style=\"text-align:left\">Sao Tome &amp; Principe<\/td><td 
style=\"text-align:left\">st<\/td><\/tr><tr><td style=\"text-align:left\">Saudi Arabia<\/td><td style=\"text-align:left\">sa<\/td><\/tr><tr><td style=\"text-align:left\">Senegal<\/td><td style=\"text-align:left\">sn<\/td><\/tr><tr><td style=\"text-align:left\">Seychelles<\/td><td style=\"text-align:left\">sc<\/td><\/tr><tr><td style=\"text-align:left\">Sierra Leone<\/td><td style=\"text-align:left\">sl<\/td><\/tr><tr><td style=\"text-align:left\">Singapore<\/td><td style=\"text-align:left\">sg<\/td><\/tr><tr><td style=\"text-align:left\">Slovakia<\/td><td style=\"text-align:left\">sk<\/td><\/tr><tr><td style=\"text-align:left\">Slovenia<\/td><td style=\"text-align:left\">si<\/td><\/tr><tr><td style=\"text-align:left\">Solomon Islands<\/td><td style=\"text-align:left\">sb<\/td><\/tr><tr><td style=\"text-align:left\">Somalia<\/td><td style=\"text-align:left\">so<\/td><\/tr><tr><td style=\"text-align:left\">South Africa<\/td><td style=\"text-align:left\">za<\/td><\/tr><tr><td style=\"text-align:left\">South Georgia and the South Sandwich Islands<\/td><td style=\"text-align:left\">gs<\/td><\/tr><tr><td style=\"text-align:left\">Spain<\/td><td style=\"text-align:left\">es<\/td><\/tr><tr><td style=\"text-align:left\">Sri Lanka<\/td><td style=\"text-align:left\">lk<\/td><\/tr><tr><td style=\"text-align:left\">St. Helena<\/td><td style=\"text-align:left\">sh<\/td><\/tr><tr><td style=\"text-align:left\">St. Kitts and Nevis<\/td><td style=\"text-align:left\">kn<\/td><\/tr><tr><td style=\"text-align:left\">St. Pierre &amp; Miquelon<\/td><td style=\"text-align:left\">pm<\/td><\/tr><tr><td style=\"text-align:left\">St. 
Vincent &amp; the Grenadines<\/td><td style=\"text-align:left\">vc<\/td><\/tr><tr><td style=\"text-align:left\">Sudan<\/td><td style=\"text-align:left\">sd<\/td><\/tr><tr><td style=\"text-align:left\">Suriname<\/td><td style=\"text-align:left\">sr<\/td><\/tr><tr><td style=\"text-align:left\">Svalbard &amp; Jan Mayen Islands<\/td><td style=\"text-align:left\">sj<\/td><\/tr><tr><td style=\"text-align:left\">Swaziland<\/td><td style=\"text-align:left\">sz<\/td><\/tr><tr><td style=\"text-align:left\">Sweden<\/td><td style=\"text-align:left\">se<\/td><\/tr><tr><td style=\"text-align:left\">Switzerland<\/td><td style=\"text-align:left\">ch<\/td><\/tr><tr><td style=\"text-align:left\">Syrian Arab Republic<\/td><td style=\"text-align:left\">sy<\/td><\/tr><tr><td style=\"text-align:left\">Taiwan, Province of China<\/td><td style=\"text-align:left\">tw<\/td><\/tr><tr><td style=\"text-align:left\">Tajikistan<\/td><td style=\"text-align:left\">tj<\/td><\/tr><tr><td style=\"text-align:left\">Tanzania, United Republic of<\/td><td style=\"text-align:left\">tz<\/td><\/tr><tr><td style=\"text-align:left\">Thailand<\/td><td style=\"text-align:left\">th<\/td><\/tr><tr><td style=\"text-align:left\">Togo<\/td><td style=\"text-align:left\">tg<\/td><\/tr><tr><td style=\"text-align:left\">Tokelau<\/td><td style=\"text-align:left\">tk<\/td><\/tr><tr><td style=\"text-align:left\">Tonga<\/td><td style=\"text-align:left\">to<\/td><\/tr><tr><td style=\"text-align:left\">Trinidad &amp; Tobago<\/td><td style=\"text-align:left\">tt<\/td><\/tr><tr><td style=\"text-align:left\">Tunisia<\/td><td style=\"text-align:left\">tn<\/td><\/tr><tr><td style=\"text-align:left\">Turkey<\/td><td style=\"text-align:left\">tr<\/td><\/tr><tr><td style=\"text-align:left\">Turkmenistan<\/td><td style=\"text-align:left\">tm<\/td><\/tr><tr><td style=\"text-align:left\">Turks &amp; Caicos Islands<\/td><td style=\"text-align:left\">tc<\/td><\/tr><tr><td style=\"text-align:left\">Tuvalu<\/td><td 
style=\"text-align:left\">tv<\/td><\/tr><tr><td style=\"text-align:left\">Uganda<\/td><td style=\"text-align:left\">ug<\/td><\/tr><tr><td style=\"text-align:left\">Ukraine<\/td><td style=\"text-align:left\">ua<\/td><\/tr><tr><td style=\"text-align:left\">Union of Soviet Socialist Republics (no longer exists)<\/td><td style=\"text-align:left\">su<\/td><\/tr><tr><td style=\"text-align:left\">United Arab Emirates<\/td><td style=\"text-align:left\">ae<\/td><\/tr><tr><td style=\"text-align:left\">United Kingdom (Great Britain)<\/td><td style=\"text-align:left\">gb<\/td><\/tr><tr><td style=\"text-align:left\">United States Minor Outlying Islands<\/td><td style=\"text-align:left\">um<\/td><\/tr><tr><td style=\"text-align:left\">United States Virgin Islands<\/td><td style=\"text-align:left\">vi<\/td><\/tr><tr><td style=\"text-align:left\">United States<\/td><td style=\"text-align:left\">us<\/td><\/tr><tr><td style=\"text-align:left\">Uruguay<\/td><td style=\"text-align:left\">uy<\/td><\/tr><tr><td style=\"text-align:left\">Uzbekistan<\/td><td style=\"text-align:left\">uz<\/td><\/tr><tr><td style=\"text-align:left\">Vanuatu<\/td><td style=\"text-align:left\">vu<\/td><\/tr><tr><td style=\"text-align:left\">Vatican City State (Holy See)<\/td><td style=\"text-align:left\">va<\/td><\/tr><tr><td style=\"text-align:left\">Venezuela<\/td><td style=\"text-align:left\">ve<\/td><\/tr><tr><td style=\"text-align:left\">Viet Nam<\/td><td style=\"text-align:left\">vn<\/td><\/tr><tr><td style=\"text-align:left\">Wallis &amp; Futuna Islands<\/td><td style=\"text-align:left\">wf<\/td><\/tr><tr><td style=\"text-align:left\">Western Sahara<\/td><td style=\"text-align:left\">eh<\/td><\/tr><tr><td style=\"text-align:left\">Yemen<\/td><td style=\"text-align:left\">ye<\/td><\/tr><tr><td style=\"text-align:left\">Yugoslavia<\/td><td style=\"text-align:left\">yu<\/td><\/tr><tr><td style=\"text-align:left\">Zaire<\/td><td style=\"text-align:left\">zr<\/td><\/tr><tr><td 
style=\"text-align:left\">Zambia<\/td><td style=\"text-align:left\">zm<\/td><\/tr><tr><td style=\"text-align:left\">Zimbabwe<\/td><td style=\"text-align:left\">zw<\/td><\/tr><\/tbody><\/table>"},{"title":"Suumo Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/suumo-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/suumo-scraper-api\/","description":{}},{"title":"Talabat Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/talabat-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/talabat-api\/","description":{}},{"title":"Taobao Scraper - Simple Solution, Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/taobao-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/taobao-api\/","description":{}},{"title":"Target Scraper API - Free Credits with Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/target-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/target-api\/","description":{}},{"title":"Temu Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/temu-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/temu-api\/","description":{}},{"title":"Tesco Scraper API - Get Free Credits on Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/tesco-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tesco-api\/","description":{}},{"title":"Thank you for your Submission!","link":"https:\/\/www.scrapingbee.com\/thank_you\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/thank_you\/","description":{}},{"title":"The Best Scraper API to Avoid Getting 
Blocked","link":"https:\/\/www.scrapingbee.com\/scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scraper-api\/","description":{}},{"title":"The Best Scraper API to Avoid Getting Blocked","link":"https:\/\/www.scrapingbee.com\/web-scraping\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/web-scraping\/","description":{}},{"title":"The easiest way to make the web LLM-readable","link":"https:\/\/www.scrapingbee.com\/features\/markdown-scraper\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/markdown-scraper\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Get Markdown or Plain Text content from any website you want to scrape.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-50 sm:pb-100 md:mb-170\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n <h1 class=\"mb-[14px]\">The easiest way to make the web LLM-readable<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Get Markdown or Plain Text content from any website you want to scrape.<\/p>"},{"title":"The journey to a $1 million ARR SaaS without traditional VCs","link":"https:\/\/www.scrapingbee.com\/journey-to-one-million-arr\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/journey-to-one-million-arr\/","description":"<h2 id=\"the-early-days\">The early days<\/h2>\n<div class=\"journey max-w-[728px] md:max-w-none 
mx-auto text-[16px]\">\n \n <div class=\"row relative flex flex-wrap items-center text-black-100\">\n <div class=\"col relative w-full sm:w-[100px] md:w-1\/2 flex pr-[10px] md:px-[48px] mb-[20px] sm:mb-[0]\">\n <time class=\"uppercase text-[20px] lg:text-[24px] leading-[1.50] font-bold\">SEP 2006<\/time>\n <\/div>\n <div class=\"col relative w-full sm:w-auto flex-1 md:w-1\/2 md:px-[48px]\">\n <div class=\"bg-yellow-100 p-[20px] rounded-md text-[18px] leading-[1.50]\">\n <div>\n <strong class=\"block text-[24px] mb-[8px]\">14 years ago.<\/strong>\n <p>We (Kevin and Pierre) met in high school in a small town located in the south of France.<\/p>\n \n <img class=\"lozad w-full\" data-src=\"https:\/\/www.scrapingbee.com\/images\/about-us\/castres.jpeg\">\n \n \n <\/div>\n <\/div>\n <\/div>\n <\/div>\n \n <div class=\"row relative flex flex-wrap items-center text-black-100\">\n <div class=\"col relative w-full sm:w-[100px] md:w-1\/2 flex pr-[10px] md:px-[48px] mb-[20px] sm:mb-[0]\">\n <time class=\"uppercase text-[20px] lg:text-[24px] leading-[1.50] font-bold\">JUN 2010<\/time>\n <\/div>\n <div class=\"col relative w-full sm:w-auto flex-1 md:w-1\/2 md:px-[48px]\">\n <div class=\"bg-yellow-100 p-[20px] rounded-md text-[18px] leading-[1.50]\">\n <div>\n <strong class=\"block text-[24px] mb-[8px]\">... 
school ends.<\/strong>\n <p><p>We go learn CS at university.\nDuring that time, we started to learn about <a href=\"https:\/\/www.ycombinator.com\" >YC<\/a>, <a href=\"https:\/\/www.indiehackers.com\" >IndieHackers<\/a>, Rob Walling's <a href=\"https:\/\/startupbook.net\" >book<\/a>, <a href=\"https:\/\/www.thefamily.co\" >the family<\/a>, and this whole startup\/bootstrapping ecosystem.<\/p>"},{"title":"The Web Scraping API for Busy Developers","link":"https:\/\/www.scrapingbee.com\/what-you-want\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/what-you-want\/","description":{}},{"title":"The Web Scraping API for Busy Developers","link":"https:\/\/www.scrapingbee.com\/web-scraping-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/web-scraping-api\/","description":{}},{"title":"The Web Scraping API for Buzzzy Developers","link":"https:\/\/www.scrapingbee.com\/buzzy-dev\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/buzzy-dev\/","description":{}},{"title":"Thumbtack Scraper API - Get Free Credits with ScrapingBee","link":"https:\/\/www.scrapingbee.com\/scrapers\/thumbtack-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/thumbtack-api\/","description":{}},{"title":"TikTok Email Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-email-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-email-scraper-api\/","description":{}},{"title":"TikTok Follower Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-follower\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-follower\/","description":{}},{"title":"TikTok Scraper - Easy Access & Free Signup 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-api\/","description":{}},{"title":"TikTok Search Scraper API - Simplify Data Extraction with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tiktok-search-api\/","description":{}},{"title":"Tipranks Scraper API - Get Free Credits Upon Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/tipranks-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tipranks-api\/","description":{}},{"title":"Tokopedia Scraper API - Start with Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/tokopedia-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tokopedia-api\/","description":{}},{"title":"Tradingview Scraper API - Free Credits at Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/tradingview-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tradingview-api\/","description":{}},{"title":"Transfermarkt Scraper API - Simple Start with Free Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/transfermarkt-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/transfermarkt-api\/","description":{}},{"title":"Trendyol Scraper API - Free Credits with Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/trendyol-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/trendyol-api\/","description":{}},{"title":"Trulia Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/trulia-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/trulia-api\/","description":{}},{"title":"Tumblr Scraper API - Free Signup Credits & Simplicity","link":"https:\/\/www.scrapingbee.com\/scrapers\/tumblr-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/tumblr-api\/","description":{}},{"title":"Twitch Scraper API - Get Free Credits at Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/twitch-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/twitch-api\/","description":{}},{"title":"Udemy Scraper API - Simple Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/udemy-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/udemy-scraper-api\/","description":{}},{"title":"Unsplash Scraper API - Simple Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/unsplash-scraper-api-key\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/unsplash-scraper-api-key\/","description":{}},{"title":"Upwork Scraper - Quick Integration, Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/upwork-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/upwork-api\/","description":{}},{"title":"Viator Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/viator-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/viator-scraper-api\/","description":{}},{"title":"Vinted Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/vinted-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/vinted-api\/","description":{}},{"title":"Vrbo Scraper API - Free Credits on Sign 
Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/vrbo-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/vrbo-api\/","description":{}},{"title":"Wall Street Journal Scraper API - Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/wsj-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/wsj-api\/","description":{}},{"title":"Wallapop Scraper API - Get Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/wallapop-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/wallapop-api\/","description":{}},{"title":"Walmart API","link":"https:\/\/www.scrapingbee.com\/documentation\/walmart\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/walmart\/","description":"<p>Our Walmart API allows you to scrape Walmart search results and product details in realtime.<\/p>\n<p>We provide two endpoints:<\/p>\n<ul>\n<li><strong>Search endpoint<\/strong> (<code>\/api\/v1\/walmart\/search<\/code>) - Fetch Walmart search results<\/li>\n<li><strong>Product endpoint<\/strong> (<code>\/api\/v1\/walmart\/product<\/code>) - Fetch structured Walmart product details<\/li>\n<\/ul>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"walmart-search-api\">Walmart Search API<\/h2>\n<h3 id=\"quick-start\">Quick start<\/h3>\n<p>To scrape Walmart search results, you only need two things:<\/p>\n<ul>\n<li>your API key, available <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/manage\/api_key\" >here<\/a><\/li>\n<li>a search query (<a href=\"#query\" >learn more about search query<\/a>)<\/li>\n<\/ul>\n<p>Then, simply do this.<\/p>\n\n\n<div class=\"p-1 rounded mb-6 bg-[#F4F0F0] border border-[#1A1414]\/10 text-[16px] leading-[1.50]\" data-tabs-id=\"d8ce7c99803589be26f305791f7dbd4a\">\n\n <div class=\"md:pl-[30px] xl:pl-[32px] flex items-center justify-end 
gap-3 py-[10px] px-[17px]\" x-data=\"{ \n open: false, \n selectedLibrary: 'python-d8ce7c99803589be26f305791f7dbd4a',\n libraries: [\n { name: 'Python', value: 'python-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-python.svg', width: 32, height: 32 },\n { name: 'CLI', value: 'cli-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-cli.svg', width: 32, height: 32, isNew: true },\n { name: 'cURL', value: 'curl-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-curl.svg', width: 48, height: 32 },\n { name: 'Go', value: 'go-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-go.svg', width: 32, height: 32 },\n { name: 'Java', value: 'java-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-java.svg', width: 32, height: 32 },\n { name: 'NodeJS', value: 'node-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-node.svg', width: 26, height: 26 },\n { name: 'PHP', value: 'php-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-php.svg', width: 32, height: 32 },\n { name: 'Ruby', value: 'ruby-d8ce7c99803589be26f305791f7dbd4a', icon: '\/images\/icons\/icon-ruby.svg', width: 32, height: 32 }\n ],\n selectLibrary(value, isGlobal = false) {\n this.selectedLibrary = value;\n this.open = false;\n \/\/ Trigger tab switching for this specific instance\n \/\/ Use Alpine's $el to find the container\n const container = $el.closest('[data-tabs-id]');\n if (container) {\n container.querySelectorAll('.nice-tab-content').forEach(tab => {\n tab.classList.remove('active');\n });\n const selectedTab = container.querySelector('#' + value);\n if (selectedTab) {\n selectedTab.classList.add('active');\n }\n }\n \/\/ Individual snippet selectors should NOT trigger global changes\n \/\/ Only the global selector at the top should change all snippets\n },\n getSelectedLibrary() {\n return this.libraries.find(lib => lib.value === this.selectedLibrary) || this.libraries[0];\n },\n init() {\n \/\/ Listen for 
global language changes\n window.addEventListener('languageChanged', (e) => {\n const globalLang = e.detail.language;\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib) {\n this.selectLibrary(matchingLib.value, true);\n }\n });\n \/\/ Initialize from global state if available\n const globalLang = window.globalSelectedLanguage || 'python';\n const matchingLib = this.libraries.find(lib => lib.value.startsWith(globalLang + '-'));\n if (matchingLib && matchingLib.value !== this.selectedLibrary) {\n this.selectLibrary(matchingLib.value, true);\n }\n }\n }\" x-on:click.away=\"open = false\" x-init=\"init()\">\n <div class=\"relative\">\n \n <button \n @click=\"open = !open\"\n type=\"button\"\n class=\"flex justify-between items-center px-2 py-1.5 bg-white rounded-md border border-[#1A1414]\/10 transition-colors hover:bg-gray-50 focus:outline-none min-w-[180px] shadow-sm\"\n >\n <div class=\"flex gap-2 items-center\">\n <img \n :src=\"getSelectedLibrary().icon\" \n :alt=\"getSelectedLibrary().name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 font-medium text-[14px]\">\n <span x-text=\"getSelectedLibrary().name\"><\/span>\n <span x-show=\"getSelectedLibrary().isNew\" class=\"new-badge ml-1\">New<\/span>\n <\/span>\n <\/div>\n <svg \n class=\"w-3.5 h-3.5 text-gray-400 transition-transform duration-200\" \n :class=\"{ 'rotate-180': open }\"\n fill=\"none\" \n stroke=\"currentColor\" \n viewBox=\"0 0 24 24\"\n >\n <path stroke-linecap=\"round\" stroke-linejoin=\"round\" stroke-width=\"2\" d=\"M19 9l-7 7-7-7\"><\/path>\n <\/svg>\n <\/button>\n \n \n <div \n x-show=\"open\"\n x-transition:enter=\"transition ease-out duration-200\"\n x-transition:enter-start=\"opacity-0 translate-y-1\"\n x-transition:enter-end=\"opacity-100 translate-y-0\"\n x-transition:leave=\"transition ease-in duration-150\"\n x-transition:leave-start=\"opacity-100 translate-y-0\"\n 
x-transition:leave-end=\"opacity-0 translate-y-1\"\n class=\"overflow-auto absolute left-0 top-full z-50 mt-1 w-full max-h-[300px] bg-white rounded-md border border-[#1A1414]\/10 shadow-lg focus:outline-none\"\n style=\"display: none;\"\n >\n <ul class=\"py-1\">\n <template x-for=\"library in libraries\" :key=\"library.value\">\n <li>\n <button\n @click=\"selectLibrary(library.value)\"\n type=\"button\"\n class=\"flex gap-2 items-center px-2 py-1.5 w-full transition-colors hover:bg-gray-50\"\n :class=\"{ 'bg-yellow-50': selectedLibrary === library.value }\"\n >\n <img \n :src=\"library.icon\" \n :alt=\"library.name\"\n :width=\"20\"\n :height=\"20\"\n class=\"flex-shrink-0 w-5 h-5\"\n \/>\n <span class=\"text-black-100 text-[14px]\" x-text=\"library.name\"><\/span>\n <span x-show=\"library.isNew\" class=\"new-badge ml-1\">New<\/span>\n <span x-show=\"selectedLibrary === library.value\" class=\"ml-auto text-yellow-400\">\n <svg class=\"w-3.5 h-3.5\" fill=\"currentColor\" viewBox=\"0 0 20 20\">\n <path fill-rule=\"evenodd\" d=\"M16.707 5.293a1 1 0 010 1.414l-8 8a1 1 0 01-1.414 0l-4-4a1 1 0 011.414-1.414L8 12.586l7.293-7.293a1 1 0 011.414 0z\" clip-rule=\"evenodd\"><\/path>\n <\/svg>\n <\/span>\n <\/button>\n <\/li>\n <\/template>\n <\/ul>\n <\/div>\n <\/div>\n <div class=\"flex items-center\">\n <span data-seed=\"d8ce7c99803589be26f305791f7dbd4a\" class=\"snippet-copy cursor-pointer flex items-center gap-1.5 px-2.5 py-1.5 text-sm text-black-100 rounded-md border border-[#1A1414]\/10 bg-white hover:bg-gray-50 transition-colors\" title=\"Copy to clipboard!\">\n <span class=\"icon-copy02 leading-none text-[14px]\"><\/span>\n <span class=\"text-[14px]\">Copy<\/span>\n <\/span>\n <\/div>\n <\/div>\n\n <div class=\"bg-[#30302F] rounded-md font-light !font-ibmplex\">\n <div id=\"curl-d8ce7c99803589be26f305791f7dbd4a\"class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\">curl 
\"https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search?api_key=YOUR-API-KEY&query=iphone\"<\/code><\/pre>\n <\/div>\n <div id=\"python-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content active\">\n <pre><code class=\"language-python\"># Install the Python Requests library:\n# `pip install requests`\nimport requests\n\ndef send_request():\n response = requests.get(\n url=\"https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search\",\n params={\n \"api_key\": \"YOUR-API-KEY\",\n \"query\": \"iphone\",\n },\n\n )\n print('Response HTTP Status Code: ', response.status_code)\n print('Response HTTP Response Body: ', response.content)\nsend_request()<\/code><\/pre>\n <\/div>\n <div id=\"node-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-javascript\">\/\/ request Axios\nconst axios = require('axios');\n\naxios.get('https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search', {\n params: {\n 'api_key': 'YOUR-API-KEY',\n 'query': 'iphone',\n }\n}).then(function (response) {\n \/\/ handle success\n console.log(response);\n})<\/code><\/pre>\n <\/div>\n <div id=\"java-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-java\">import java.io.IOException;\nimport org.apache.http.client.fluent.*;\n\npublic class SendRequest\n{\n public static void main(String[] args) {\n sendRequest();\n }\n\n private static void sendRequest() {\n\n \/\/ Classic (GET )\n\n try {\n\n \/\/ Create request\n Content content = Request.Get(\"https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search?api_key=YOUR-API-KEY&query=iphone\")\n\n\n\n \/\/ Fetch request and return content\n .execute().returnContent();\n\n \/\/ Print content\n System.out.println(content);\n }\n catch (IOException e) { System.out.println(e); }\n }\n}<\/code><\/pre>\n <\/div>\n <div 
id=\"ruby-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-ruby\">require 'net\/http'\nrequire 'net\/https'\n\n# Classic (GET )\ndef send_request\n uri = URI('https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search?api_key=YOUR-API-KEY&query=iphone')\n\n # Create client\n http = Net::HTTP.new(uri.host, uri.port)\n http.use_ssl = true\n http.verify_mode = OpenSSL::SSL::VERIFY_PEER\n\n # Create Request\n req = Net::HTTP::Get.new(uri)\n\n # Fetch Request\n res = http.request(req)\n puts \"Response HTTP Status Code: #{ res.code }\"\n puts \"Response HTTP Response Body: #{ res.body }\"\nrescue StandardError => e\n puts \"HTTP Request failed (#{ e.message })\"\nend\n\nsend_request()<\/code><\/pre>\n <\/div>\n <div id=\"php-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-php\">&lt;?php\n\n\/\/ get cURL resource\n$ch = curl_init();\n\n\/\/ set url\ncurl_setopt($ch, CURLOPT_URL, 'https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search?api_key=YOUR-API-KEY&query=iphone');\n\n\/\/ set method\ncurl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'GET');\n\n\/\/ return the transfer as a string\ncurl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);\n\n\n\n\/\/ send the request and save response to $response\n$response = curl_exec($ch);\n\n\/\/ stop if fails\nif (!$response) {\n die('Error: \"' . curl_error($ch) . '\" - Code: ' . curl_errno($ch));\n}\n\necho 'HTTP Status Code: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . PHP_EOL;\necho 'Response Body: ' . $response . 
PHP_EOL;\n\n\/\/ close curl resource to free up system resources\ncurl_close($ch);\n\n?&gt;<\/code><\/pre>\n <\/div>\n <div id=\"go-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-go\">package main\n\nimport (\n\t\"fmt\"\n\t\"io\"\n\t\"net\/http\"\n)\n\nfunc sendClassic() {\n\t\/\/ Create client\n\tclient := &http.Client{}\n\n\t\/\/ Create request\n\treq, err := http.NewRequest(\"GET\", \"https:\/\/app.scrapingbee.com\/api\/v1\/walmart\/search?api_key=YOUR-API-KEY&query=iphone\", nil)\n\tif err != nil {\n\t\tfmt.Println(\"Failure : \", err)\n\t\treturn\n\t}\n\n\t\/\/ Fetch request\n\tresp, err := client.Do(req)\n\tif err != nil {\n\t\tfmt.Println(\"Failure : \", err)\n\t\treturn\n\t}\n\tdefer resp.Body.Close()\n\n\t\/\/ Read response body\n\trespBody, _ := io.ReadAll(resp.Body)\n\n\t\/\/ Display results\n\tfmt.Println(\"response Status : \", resp.Status)\n\tfmt.Println(\"response Headers : \", resp.Header)\n\tfmt.Println(\"response Body : \", string(respBody))\n}\n\nfunc main() {\n\tsendClassic()\n}<\/code><\/pre>\n <\/div>\n <div id=\"cli-d8ce7c99803589be26f305791f7dbd4a\" class=\"text-gray-100 text-[12px] leading-[1.54] nice-tab-content\">\n <pre><code class=\"language-bash\"># Install the ScrapingBee CLI:\n# pip install scrapingbee-cli\n\nscrapingbee walmart-search \"iphone\"<\/code><\/pre>\n <\/div>\n <\/div>\n<\/div>\n\n<p>Here is a breakdown of all the parameters you can use with the Walmart Search API:<\/p>"},{"title":"Walmart Scraping API","link":"https:\/\/www.scrapingbee.com\/scrapers\/walmart-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/walmart-scraper-api\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": 
\"Access Walmart\\u0027s massive product data catalog with our reliable scraping API. Get pricing, descriptions, and product details with a single API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"38\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 pb-[50px] sm:pb-[100px] md:mb-[170px]\">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n \n \n<nav aria-label=\"Breadcrumb\" class=\"text-[14px] text-black mb-[20px] flex items-center\">\n <ol class=\"flex items-center\" itemscope itemtype=\"https:\/\/schema.org\/BreadcrumbList\">\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Home<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"1\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <a href=\"https:\/\/www.scrapingbee.com\/scrapers\/\" class=\"text-black no-underline\" itemprop=\"item\">\n <span itemprop=\"name\">Scrapers<\/span>\n <\/a>\n <meta itemprop=\"position\" content=\"2\" \/>\n <\/li>\n <svg width=\"14\" height=\"14\" viewBox=\"0 0 24 24\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" fill=\"none\" class=\"mx-[10px] flex-shrink-0\">\n <path d=\"M9 6L15 12L9 18\" stroke=\"black\" stroke-width=\"2\" stroke-linecap=\"round\" stroke-linejoin=\"round\"\/>\n <\/svg>\n <li 
itemprop=\"itemListElement\" itemscope itemtype=\"https:\/\/schema.org\/ListItem\">\n <span class=\"font-medium\" itemprop=\"name\">\n Walmart Scraping API\n <\/span>\n <meta itemprop=\"position\" content=\"3\" \/>\n <\/li>\n <\/ol>\n<\/nav>\n\n \n \n <h1 class=\"mb-[14px]\">Walmart Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Access Walmart&#39;s massive product data catalog with our reliable scraping API. Get pricing, descriptions, and product details with a single API call.<\/p>"},{"title":"Walmart Scraping API | Scrape Walmart Search Engine Results","link":"https:\/\/www.scrapingbee.com\/features\/walmart\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/walmart\/","description":"<p><script type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Access Walmart\\u0027s massive product data catalog with our reliable scraping API. Get pricing, descriptions, and product details with a single Walmart API call.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold \">Walmart Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Access Walmart&#39;s massive product data catalog with our reliable scraping API. 
Get pricing, descriptions, and product details with a single Walmart API call.<\/p>"},{"title":"Washington Post Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/washington-post-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/washington-post-scraper-api\/","description":{}},{"title":"Wayback Machine Scraper - Free Signup Credits Available","link":"https:\/\/www.scrapingbee.com\/scrapers\/wayback-machine-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/wayback-machine-api\/","description":{}},{"title":"Wayfair Scraper API - Signup for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/wayfair-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/wayfair-api\/","description":{}},{"title":"Web Scraping Financial Data - Simple Sign Up Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/financial-data-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/financial-data-api\/","description":{}},{"title":"Web Scraping Real Estate Data - Effortless Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/real-estate-data-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/real-estate-data-api\/","description":{}},{"title":"Webmotors Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/webmotors-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/webmotors-api\/","description":{}},{"title":"WebScraper.io alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/webscraper-io-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/webscraper-io-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] 
md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">WebScraper.io alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to WebScraper.io. Web scraping should be intuitive and affordable. If you&#39;re facing limitations, there are better alternatives to consider.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar 
flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Scraping should be flexible, not restricted by templates.<\/h3>\n <p>WebScraper.io offers visual scraping, but we give you API-first flexibility that scales better.<\/p>"},{"title":"Website Image Scraper API - Free Credits Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/website-image-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/website-image-api\/","description":{}},{"title":"Whop Scraper API - Easy Start & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/whop-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/whop-scraper-api\/","description":{}},{"title":"Wikipedia Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/wikipedia-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/wikipedia-api\/","description":{}},{"title":"Woocommerce Scraper API - Easy Use & Free Signup 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/woocommerce-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/woocommerce-scraper-api\/","description":{}},{"title":"Xing Scraper API - Free Signup Credits Available","link":"https:\/\/www.scrapingbee.com\/scrapers\/xing-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/xing-api\/","description":{}},{"title":"Yad2 Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/yad2-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yad2-api\/","description":{}},{"title":"Yahoo Search Scraper API - Free Credits & Simplified Start","link":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-search-api\/","description":{}},{"title":"Yahoo! Finance Scraper API - Simple Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-finance-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-finance-scraper-api\/","description":{}},{"title":"Yahoo! 
Images Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-images-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yahoo-images-api\/","description":{}},{"title":"Yandex Reverse Image Scraper API - Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-reverse-image-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-reverse-image-api\/","description":{}},{"title":"Yandex Scraper API - Free Credits with Simple Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-images-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-images-api\/","description":{}},{"title":"Yandex Search Scraper API - Free Signup & Effortless Integration","link":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-search-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yandex-search-api\/","description":{}},{"title":"Yellow Pages Scraper API - Simple Use & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/yellow-pages-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/yellow-pages-scraper-api\/","description":{}},{"title":"YouTube API","link":"https:\/\/www.scrapingbee.com\/documentation\/youtube\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/documentation\/youtube\/","description":"<p>Our YouTube API allows you to scrape YouTube search results, video metadata, and transcripts in realtime.<\/p>\n<p>We provide three endpoints:<\/p>\n<ul>\n<li><strong>Search endpoint<\/strong> (<code>\/api\/v1\/youtube\/search<\/code>) - Fetch YouTube search results<\/li>\n<li><strong>Metadata endpoint<\/strong> (<code>\/api\/v1\/youtube\/metadata<\/code>) - Fetch structured YouTube video 
metadata<\/li>\n<li><strong>Transcript endpoint<\/strong> (<code>\/api\/v1\/youtube\/transcript<\/code>) - Fetch YouTube video transcripts<\/li>\n<\/ul>\n<div class=\"doc-row\">\n<div class=\"doc-full\">\n<h2 id=\"youtube-search-api\">YouTube Search API<\/h2>\n<h3 id=\"quick-start\">Quick start<\/h3>\n<p>To scrape YouTube search results, you only need two things:<\/p>\n<ul>\n<li>your API key, available <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/manage\/api_key\" >here<\/a><\/li>\n<li>a search query (<a href=\"#search\" >learn more about search query<\/a>)<\/li>\n<\/ul>\n<p>Then, simply do this.<\/p>"},{"title":"YouTube Comment Scraper API - Simple Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-comment-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-comment-scraper-api\/","description":{}},{"title":"YouTube Email Scraper API - Easy Start & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-email-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-email-scraper-api\/","description":{}},{"title":"YouTube Metatag Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-metatags\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-metatags\/","description":{}},{"title":"YouTube Music Scraper API - Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-music-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-music-api\/","description":{}},{"title":"YouTube Scraper API | Scrape YouTube Videos & Data","link":"https:\/\/www.scrapingbee.com\/features\/youtube\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/features\/youtube\/","description":"<p><script 
type=\"application\/ld+json\">\n {\n \"@context\": \"https:\/\/schema.org\",\n \"@type\": \"Product\",\n \"name\": \"ScrapingBee\",\n \"brand\": {\n \"@type\": \"Brand\",\n \"name\": \"ScrapingBee\"\n },\n \"description\": \"Scrape YouTube search results, video metadata, and transcripts in real-time with structured JSON output.\",\n \"aggregateRating\": {\n \"@type\": \"AggregateRating\",\n \"ratingValue\": \"4.9\",\n \"reviewCount\": \"154\",\n \"bestRating\": 5\n }\n }\n<\/script>\n<section class=\"bg-skew-yellow-b pt-[100px] sm:pt-[100px] md:pt-[156px] mb-[120px] relative z-1 \">\n <div class=\"container\">\n <div class=\"flex flex-wrap items-center -mx-[15px]\">\n <div class=\"w-full sm:w-1\/2 px-[15px]\">\n <div class=\"max-w-[542px] leading-[1.77]\">\n \n <h1 class=\"mb-[14px] text-[40px] md:text-[48px] lg:text-[56px] leading-[1.22] font-bold \">YouTube Scraping API<\/h1>\n <p class=\"mb-[36px] text-[20px]\">Scrape YouTube search results, video metadata, and transcripts in real-time with structured JSON output.<\/p>"},{"title":"YouTube Shorts Scraper API - Simple Signup Credits Free","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-shorts-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-shorts-api\/","description":{}},{"title":"YouTube Title Scraper API - Easy Access & Free Signup Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-title-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-title-scraper-api\/","description":{}},{"title":"YouTube Transcript Scraper API - Easy Signup & Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-transcript-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-transcript-scraper-api\/","description":{}},{"title":"YouTube Video Scraper API - Simple Use & Free Signup 
Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-video-scraper-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/youtube-video-scraper-api\/","description":{}},{"title":"Zalando Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/zalando-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zalando-api\/","description":{}},{"title":"Zara Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/zara-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zara-api\/","description":{}},{"title":"ZenRows alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/zenrows-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/zenrows-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">ZenRows alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to ZenRows. Not all scraping APIs are created equal. 
Here&#39;s how the alternatives compare.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n 
<span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No locked features. No asterisks.<\/h3>\n <p>ZenRows offers solid scraping features, but limits access to key capabilities unless you pay more. We don't do that.<\/p>"},{"title":"Zenscrape alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/zenscrape-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/zenscrape-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Zenscrape alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Zenscrape. Need scraping that\u2019s simple, fast, and cost-effective? 
Check out alternatives that deliver better value and performance.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic 
Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">Not just scraping\u2014smart, efficient data extraction.<\/h3>\n <p>Zenscrape offers scraping but limits key features unless you pay extra. We make all features available without the need for upgrades.<\/p>"},{"title":"Zillow Scraper API - Free Signup Credits Provided","link":"https:\/\/www.scrapingbee.com\/scrapers\/zillow-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zillow-api\/","description":{}},{"title":"Ziprecruiter Scraper API - Get Free Credits Upon Sign Up","link":"https:\/\/www.scrapingbee.com\/scrapers\/ziprecruiter-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/ziprecruiter-api\/","description":{}},{"title":"Zomato Web Scraper - Simplified Signup + Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/zomato-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zomato-api\/","description":{}},{"title":"ZoomInfo Scraper API - Sign Up for Free Credits","link":"https:\/\/www.scrapingbee.com\/scrapers\/zoominfo-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zoominfo-api\/","description":{}},{"title":"Zoopla Scraper API - Free Credits on Signup","link":"https:\/\/www.scrapingbee.com\/scrapers\/zoopla-api\/","pubDate":"Mon, 01 Jan 0001 00:00:00 
+0000","guid":"https:\/\/www.scrapingbee.com\/scrapers\/zoopla-api\/","description":{}},{"title":"Zyte API alternative for web scraping?","link":"https:\/\/www.scrapingbee.com\/zyte-api-alternative\/","pubDate":"Mon, 01 Jan 0001 00:00:00 +0000","guid":"https:\/\/www.scrapingbee.com\/zyte-api-alternative\/","description":"<p><section class=\"bg-yellow-100 py-[100px] md:pt-[220px] md:pb-[20px] mb-[80px] relative z-1\">\n <div class=\"container\">\n <div class=\"max-w-[1024px] mx-auto text-center\">\n <h1 class=\"mb-[14px]\">Zyte API alternative for web scraping?<\/h1>\n <p class=\"mb-[32px]\">ScrapingBee is a better alternative to Zyte API. Your web scraping solution doesn\u2019t have to be overpriced or complicated\u2014there are simpler and more affordable alternatives.<\/p>\n <div class=\"flex flex-wrap items-center justify-center mb-[33px]\">\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/register\" class=\"btn px-[39px] mb-[33px] min-w-[233px] mr-[10px]\">Sign up with email<\/a>\n <script src=\"https:\/\/accounts.google.com\/gsi\/client\" async><\/script>\n <a href=\"https:\/\/dashboard.scrapingbee.com\/account\/google_login\" class=\"btn btn-black-o px-[39px] mb-[33px] min-w-[280px]\">\n <img src=\"https:\/\/www.scrapingbee.com\/images\/icons\/icon_google.svg\" alt=\"Google Logo\" class=\"inline-block mr-[8px] h-[26px]\" \/>\n Sign up with Google\n <\/a>\n <\/div>\n <\/div>\n <div class=\"flex flex-wrap items-center justify-center\">\n <a href=\"https:\/\/www.capterra.com\/p\/195060\/ScrapingBee\/\" target=\"_blank\" class=\"inline-block\"> <img border=\"0\" class=\"h-[40px] mr-[10px] -ml-[9px]\" src=\"https:\/\/brand-assets.capterra.com\/badge\/8898153e-408a-4cdb-9477-bda37032c670.svg\" alt=\"Capterra badge\"\/> <\/a>\n <span class=\"text-[18px] mr-[10px]\">based on 100+ reviews.<\/span>\n <\/div>\n <\/div>\n <\/div>\n<\/section>\n\n<section class=\"pt-[50px] sm:pt-101 pb-[50px] 
sm:pb-[70px] md:pb-[82px]\">\n <div class=\"container max-w-[894px] w-full flex flex-wrap\">\n\n <blockquote class=\"p-[38px] bg-gray-900 rounded-2xl m-[0] text-black-100 leading-[1.55]\">\n <q class=\"block mb-[35px] text-[24px]\">ScrapingBee <strong>clear documentation, easy-to-use API, and great success rate<\/strong> made it a no-brainer.<\/q>\n <cite class=\"avatar flex items-center not-italic\">\n \n <span class=\"w-[56px] h-[56px] rounded-full overflow-hidden bg-gray-1000 mr-[24px]\">\n <img height=\"56\" width=\"56\" src=\"https:\/\/www.scrapingbee.com\/images\/testimonials\/dominic.jpeg\" alt=\"Dominic Phillips\">\n <\/span>\n \n <span>\n <strong class=\"text-[18px] font-bold block mb-[4px]\">Dominic Phillips\n \n <\/strong>\n \n <span class=\"text-[15px] block\">Co-Founder @ <a href=\"https:\/\/codesubmit.io\" class=\"font-bold underline hover:no-underline\" target=\"_blank\">CodeSubmit<\/a><\/span>\n \n <\/span>\n <\/cite>\n <\/blockquote>\n <\/div>\n<\/section>\n\n<section class=\"py-[50px] sm:py-[60px] md:py-[80px] text-gray-200 text-[16px] leading-[1.50]\">\n <div class=\"container max-w-[1292px]\">\n <div class=\"flex flex-wrap -mx-[23px]\">\n <div class=\"w-full md:w-[500px] lg:w-[608px] px-[23px]\">\n <div class=\"pr-[20px]\">\n <div class=\"mb-[48px]\">\n <h3 class=\"text-black-100 mb-[8px]\">No complexity, no unnecessary costs\u2014just efficient web scraping.<\/h3>\n <p>Zyte API provides a lot of power but comes with added complexity and high costs. We provide all the same features with simpler, more affordable plans.<\/p>"}]}}