What is indexing and how does the Site Search 360 crawler work?

A crawler, or spider, is a type of bot that browses your website and builds a search index that enables your search. A search index is a list of pages and documents that are shown as search results in response to a query your site visitor types into the search box on your site.

The exact crawler behavior can be configured with "Website Crawling" or "Sitemap Indexing", or both at the same time.

Sitemap Indexing is the preferred indexing method. If we can detect a valid sitemap XML file for the domain you provided at registration, the Site Search 360 crawler will go to that sitemap - typically found at a standard location such as yourdomain.com/sitemap.xml - to pick up the website URLs listed there. Note: the sitemap XML file must be formatted correctly for the crawler to process it (see the example sitemap below).

If we cannot detect a valid sitemap for your domain, we automatically switch on the Website Crawling method: the Site Search 360 crawler visits your root URL(s) - typically the homepage - and follows outgoing links that point to other pages within your site or within the domains you've specified as starting points for the crawler.

Indexing means adding the pages discovered by the crawler to a search index that is unique to every Site Search 360 project. Every project is referenced by its unique ID, displayed under Account, which is essential if you integrate the search manually (see the integration sketch below).

Tip: If you notice that some search results are missing, the first thing to check is whether the missing URL is indexed. Your Index Log allows you to look up any URL and check whether it's indexed. You can also try re-indexing it and see if that triggers any errors. With Sitemap Indexing, make sure the missing URL is included in your sitemap.

The crawler does NOT go to external websites, including Facebook, Twitter, LinkedIn, etc., but we do crawl your subdomains by default. For example, if your start URL is example.com, we would also pick up URLs from blog.example.com. Turn the "Crawl Subdomains" setting OFF under Website Crawling if you'd like to exclude pages from your subdomains.

You can also blacklist unwanted URL patterns. The crawler always checks whether any rules prevent link discovery. These rules can be set for all search engines (e.g. with the robots meta tag) or applied only to your internal search results, in which case you need to specify them under your Site Search 360 settings (see the robots and pattern examples below).

If you are blocking access for certain IPs but want the Site Search 360 crawler to have access to your site, please whitelist the Site Search 360 crawler's IP addresses in your firewall (a sketch of an application-level allowlist follows below).
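To make the formatting note above concrete, here is a minimal sketch of a well-formed sitemap following the standard sitemaps.org protocol. The page URLs and lastmod dates are placeholders, and the helper code around them is just one way to produce the file.

```typescript
// Minimal sitemap sketch: the <urlset> structure below follows the standard
// sitemaps.org protocol; the example.com URLs and dates are placeholders.
const pages: { loc: string; lastmod?: string }[] = [
  { loc: "https://example.com/", lastmod: "2024-01-15" },
  { loc: "https://example.com/products", lastmod: "2024-01-10" },
];

const sitemap =
  `<?xml version="1.0" encoding="UTF-8"?>\n` +
  `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
  pages
    .map(
      (p) =>
        `  <url>\n    <loc>${p.loc}</loc>\n` +
        (p.lastmod ? `    <lastmod>${p.lastmod}</lastmod>\n` : "") +
        `  </url>\n`
    )
    .join("") +
  `</urlset>\n`;

console.log(sitemap); // serve this as /sitemap.xml so the crawler can find it
```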
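For manual integration, the sketch below assumes Site Search 360's documented pattern of setting a global ss360Config object before loading the search script. The siteId value, search box selector, and script URL are placeholders; copy the exact snippet and option names from your dashboard.

```typescript
// Minimal manual-integration sketch (assumption: the ss360Config pattern from
// Site Search 360's docs; verify option names against the current docs).
declare global {
  interface Window { ss360Config?: Record<string, unknown>; }
}

window.ss360Config = {
  siteId: "yourdomain.com",               // your project's unique ID from Account
  searchBox: { selector: "#searchBox" },  // the input the plugin should attach to
};

// Load the search script after the config is in place; the script URL below
// is a placeholder -- copy the real one from your dashboard.
const s = document.createElement("script");
s.src = "https://cdn.example.com/sitesearch360.min.js"; // placeholder URL
s.async = true;
document.head.appendChild(s);

export {}; // keep this file a module so the global declaration applies
```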
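The two kinds of link-discovery rules mentioned above can be illustrated side by side: the robots meta tag is the standard HTML mechanism honored by all search engines, while the isBlacklisted helper below is a hypothetical sketch of how URL patterns configured in your settings conceptually filter crawl candidates.

```typescript
// Two ways to prevent link discovery, as described above.

// 1) For all search engines: the standard robots meta tag in a page's <head>.
const robotsMeta = `<meta name="robots" content="noindex, nofollow">`;

// 2) For Site Search 360 only: URL patterns excluded in your project settings.
//    This helper is a hypothetical sketch of how such patterns filter URLs;
//    the actual matching is done by the crawler, not by your code.
const blacklistPatterns = [/\/tag\//, /\/print\//, /\?sessionid=/];

function isBlacklisted(url: string): boolean {
  return blacklistPatterns.some((pattern) => pattern.test(url));
}

console.log(robotsMeta);
console.log(isBlacklisted("https://example.com/tag/news")); // true  -> skipped
console.log(isBlacklisted("https://example.com/about"));    // false -> crawled
```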
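Whitelisting is normally done in your firewall, but if your blocking happens at the application level, a sketch like the following shows the idea, assuming a Node/Express server. The addresses in CRAWLER_ALLOWLIST are placeholders, not Site Search 360's real crawler IPs; use the list provided in their documentation.

```typescript
// Application-level allowlist sketch, assuming a Node/Express setup.
import express from "express";

const CRAWLER_ALLOWLIST = new Set(["203.0.113.10", "203.0.113.11"]); // placeholders

const app = express();

app.use((req, res, next) => {
  const blocked = isGenerallyBlocked(req.ip); // your existing IP-blocking rule
  if (blocked && !CRAWLER_ALLOWLIST.has(req.ip ?? "")) {
    res.status(403).send("Forbidden");
    return;
  }
  next(); // allowlisted crawler IPs bypass the block
});

// Stand-in for whatever blocking rule you already apply.
function isGenerallyBlocked(ip: string | undefined): boolean {
  return false;
}

app.listen(3000);
```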
Indexing rules are directly responsible for the quality of your search results and can be set under the Data Import section. Whenever you edit these settings, you need to re-index all configured sources for the changes to be applied. We also update your index automatically; the re-crawl/re-indexing frequency depends on your plan.