
Norconex released version 2.7.0 of both its HTTP Collector and Filesystem Collector.  This update, along with related component updates, introduces several interesting features.

HTTP Collector changes

The following items are specific to the HTTP Collector.  For changes applying to both the HTTP Collector and the Filesystem Collector, you can proceed to the “Generic changes” section.

Crawling of JavaScript-driven pages

[ezcol_1half]

The alternative document fetcher PhantomJSDocumentFetcher now makes it possible to crawl web pages with JavaScript-generated content. This much-awaited feature is available thanks to integration with the open-source PhantomJS headless browser. As a bonus, you can also take screenshots of the web pages you crawl.

[/ezcol_1half]

[ezcol_1half_end]

<documentFetcher 
    class="com.norconex.collector.http.fetch.impl.PhantomJSDocumentFetcher">
  <exePath>/path/to/phantomjs.exe</exePath>
  <renderWaitTime>5000</renderWaitTime>
  <referencePattern>^.*\.html$</referencePattern> 
</documentFetcher>

[/ezcol_1half_end]

More ways to extract links

[ezcol_1half]

This release introduces two new link extractors.  You can now use the XMLFeedLinkExtractor to extract links from RSS or Atom feeds. For maximum flexibility, the RegexLinkExtractor can be used to extract links using regular expressions.

[/ezcol_1half]

[ezcol_1half_end]

<extractor class="com.norconex.collector.http.url.impl.RegexLinkExtractor">
  <linkExtractionPatterns>
    <pattern group="1">\[(http.*?)\]</pattern>
  </linkExtractionPatterns>
</extractor>
<extractor class="com.norconex.collector.http.url.impl.XMLFeedLinkExtractor">
  <applyToReferencePattern>.*rss$</applyToReferencePattern>
</extractor>

[/ezcol_1half_end]

Generic changes

The following changes apply to both Filesystem and HTTP Collectors. Most of these changes come from an update to the Norconex Importer module (now also at version 2.7.0).

Much improved XML configuration validation

[ezcol_1half]

You no longer have to hunt for misconfigurations. Schema-based XML configuration validation was added, and you will now get errors if the XML syntax of any configuration option is invalid. This validation can be triggered from the command prompt with the new -k or --checkcfg flag.

[/ezcol_1half]

[ezcol_1half_end]

# -k can be used on its own, but when combined with -a (like below),
# it will prevent the collector from executing if there are any errors.

collector-http.sh -a start -c examples/minimum/minimum-config.xml -k

# Error sample:
ERROR (XML) ReplaceTagger: cvc-attribute.3: The value 'asdf' of attribute 'regex' on element 'replace' is not valid with respect to its type, 'boolean'.

[/ezcol_1half_end]

Enter durations in human-readable format

[ezcol_1half]

Having to convert durations to milliseconds is not the most user-friendly. Anywhere in your XML configuration where a duration is expected, you can now use a human-readable representation (English only) as an alternative.

[/ezcol_1half]

[ezcol_1half_end]

<!-- Example using "5 seconds" and "1 second" as opposed to milliseconds -->
<delay class="com.norconex.collector.http.delay.impl.GenericDelayResolver"
    default="5 seconds" ignoreRobotsCrawlDelay="true" scope="site" >
  <schedule dayOfWeek="from Saturday to Sunday">1 second</schedule>
</delay>

[/ezcol_1half_end]

Lua scripting language

[ezcol_1half]

Support for Lua scripting has been added to ScriptFilter, ScriptTagger, and ScriptTransformer.  This gives you one more scripting option available out-of-the-box besides JavaScript/ECMAScript.

[/ezcol_1half]

[ezcol_1half_end]

<!-- Add "apple" to a "fruit" metadata field: -->
<tagger class="com.norconex.importer.handler.tagger.impl.ScriptTagger"
    engineName="lua">
  <script><![CDATA[
    metadata:addString('fruit', {'apple'});
  ]]></script>
</tagger>

[/ezcol_1half_end]

Modify documents using an external application

[ezcol_1half]

With the new ExternalTransformer, you can now use an external application to perform document transformation.  This is an alternative to the existing ExternalParser, which was enhanced to provide the same environment variables and metadata extraction support as the ExternalTransformer.

[/ezcol_1half]

[ezcol_1half_end]

<transformer class="com.norconex.importer.handler.transformer.impl.ExternalTransformer">
  <command>/path/transform/app ${INPUT} ${OUTPUT}</command>
  <metadata>
    <match field="docnumber">DocNo:(\d+)</match>
  </metadata>
</transformer>

[/ezcol_1half_end]

Combine document fields

[ezcol_1half]

The new MergeTagger can be used for combining multiple fields into one. The target field can be either multi-value or single-value separated with the character of your choice.

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.MergeTagger">
  <merge toField="title" deleteFromFields="true" 
      singleValue="true" singleValueSeparator=",">
    <fromFields>title,dc.title,dc:title,doctitle</fromFields>
  </merge>
</tagger>

[/ezcol_1half_end]

New Committers

[ezcol_1half]

Whether you do not have a target repository (Solr, Elasticsearch, etc.) ready at crawl time or you are not using a repository at all, Norconex Collectors now ship with two file-based Committers for easy consumption by your own process: XMLFileCommitter and JSONFileCommitter. All available Committers can be found here.

[/ezcol_1half]

[ezcol_1half_end]

<committer class="com.norconex.committer.core.impl.XMLFileCommitter">
 <directory>/path/my-xmls/</directory>
 <pretty>true</pretty>
 <docsPerFile>100</docsPerFile>
 <compress>false</compress>
 <splitAddDelete>false</splitAddDelete>
</committer>

[/ezcol_1half_end]

More

Several additional features or changes can be found in the latest Collector releases.  Among them:

  • New Importer RegexReferenceFilter for filtering documents based on matching references (e.g., URLs), as shown in the example after this list.
  • New SubstringTransformer for truncating content.
  • New UUIDTagger for giving a unique ID to each document.
  • CharacterCaseTagger now supports “swap” and “string” to swap character case and capitalize the beginning of a string, respectively.
  • ConstantTagger offers options for dealing with existing values: add to existing values, replace them, or do nothing.
  • Components such as the Importer, Committers, etc., are all easier to install thanks to new utility scripts.
  • Document Access Control List (ACL) information is now extracted from SMB/CIFS file systems (Filesystem Collector).
  • New ICollectorLifeCycleListener interface that can be added to the collector configuration to be notified and take action when the collector starts and stops.
  • Added “removeTrailingHash” as a new GenericURLNormalizer option (HTTP Collector).
  • New “detectContentType” and “detectCharset” options on GenericDocumentFetcher for ignoring the content type and character encoding obtained from the HTTP response headers and detecting them instead (HTTP Collector).
  • Start URLs and start paths can now be dynamically created thanks to IStartURLsProvider and IStartPathsProvider (HTTP Collector and Filesystem Collector).
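
For instance, the new RegexReferenceFilter mentioned above could be configured roughly as follows to keep only documents whose reference matches a pattern. This is a minimal sketch: the URL pattern is made up, and the exact XML layout should be verified against the Importer documentation.

<!-- Sketch only: keep documents whose reference matches a (made-up) pattern -->
<filter class="com.norconex.importer.handler.filter.impl.RegexReferenceFilter"
    onMatch="include">
  .*/products/.*
</filter>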

To get the complete list of changes, refer to the HTTP Collector release notes, Filesystem Collector release notes, or the release notes of dependent Norconex libraries such as: Importer release notes and Collector Core release notes.

Download

Looking for Information

There are many business applications where web crawling can be of benefit. You or your team likely have ongoing research projects or smaller projects that come up from time to time. You may do a lot of manual web searching (think Google) looking for random information, but what if you need to do targeted reviews to pull specific data from numerous websites? A manual web search can be time consuming and prone to human error, and some important information could be overlooked. An application powered by a custom crawler can be an invaluable tool to save the manpower required to extract relevant content. This can allow you more time to actually review and analyze the data, putting it to work for your business.

A web crawler can be set up to locate and gather complete or partial content from public websites, and the information can be provided to you in an easily manageable format. The data can be stored in a search engine or database, integrated with an in-house system or tailored to any other target. There are multiple ways to access the data you gathered. It can be as simple as receiving a scheduled e-mail message with a .csv file or setting up search pages or a web app. You can also add functionality to sort the content, such as pulling data from a specific timeframe, by certain keywords or whatever you need.
If you have developers in house and want to build your own solution, you don’t even have to start from scratch. There are many tools available to get you started, such as our free crawler, Norconex HTTP Collector.

If you hire a company to build your web crawler, you will want to use a reputable company that will respect all website terms of use. The solution can be set up and then “handed over” to your organization for you to run on an ongoing basis. For a hosted solution, the crawler and any associated applications will be set up and managed for you. This means any changes to your needs, such as adding or removing sites to monitor or changing the parameters of what information you want to extract, can be managed and supported as needed with minimal effort from your team.

Here are some examples of how businesses might use web crawling:

MONITORING THE NEWS AND SOCIAL MEDIA

What is being said about your organization in the media? Do you review industry forums? Are there comments posted on external sites by your customers that you might not even be aware of to which your team should be responding? A web crawler can monitor news sites, social media sites (Facebook, LinkedIn, Twitter, etc.), industry forums and others to get information on what is being said about you and your competitors. This kind of information could be invaluable to your marketing team to keep a pulse on your company image through sentiment analysis. This can help you know more about your customers’ perceptions and how you are comparing against your competition.

COMPETITIVE INFORMATION

Are people on your sales, marketing or product management teams tasked with going online to find out what new products or services are being provided by your competitors? Are you searching the competition to review pricing to make sure you are priced competitively in your space? What about comparing how your competitors are promoting their products to customers? A web crawler can be set up to grab that information, and then it can be provided to you so you can concentrate on analyzing that data rather than finding it. If you’re not currently monitoring your competition in this way, maybe you should be.

LEAD GENERATION

Does your business rely on information from other websites to help you generate a portion of your revenues? If you had better, faster access to that information, what additional revenues might that influence? An example is companies that specialize in staffing and job placement. When they know which companies are hiring, it provides them with an opportunity to reach out to those companies and help them fill those positions. They may wish to crawl the websites of key or target accounts, public job sites, job groups on LinkedIn and Facebook or forums on sites like Quora or Freelance to find all new job postings or details about companies looking for help with various business requirements. Capturing all those leads and returning them in a useable format can help generate more business.

TARGET LISTS

A crawler can be set up to do entity extraction from websites. Say, for example, an automobile association needs to reach out to all car dealerships and manufacturers to promote services or industry events. A crawler can be set up to crawl target websites that provide relevant company listings to pull things like addresses, contact names and phone numbers (if available), and that content can be provided in a single, usable repository.

POSTING ALERTS

Do you have partners whose websites you need to monitor for information in order to grow your business? Think of the real estate or rental agent who is constantly scouring the MLS (Multiple Listing Service) and other realtor listing sites to find that perfect home or commercial property for a client they are serving. A web crawler can be set up to extract and send all new listings matching their requirements from multiple sites directly to their inbox as soon as they are posted to give them a leg up on their competition.

SUPPLIER PRICING AND AVAILABILITY

If you are purchasing product from various suppliers, you are likely going back and forth between their sites to compare offerings, pricing and availability. Being able to compare this information without going from website to website could save your business a lot of time and ensure you don’t miss out on the best deals!

These are just some of the many examples of how web crawling can be of benefit. The number of business cases where web crawlers can be applied are endless. What are yours?

 

Useful links

 

HTTP Collector 2.6

Norconex has released version 2.6.0 of its HTTP Collector web crawler! Among new features, an upgrade of its Importer module brings new document parsing and manipulating capabilities. Some of the changes highlighted here also benefit the Norconex Filesystem Collector.

New URL normalization to remove trailing slashes

[ezcol_1half]

The GenericURLNormalizer has a new pre-defined normalization rule: “removeTrailingSlash”. When used, it removes any forward slash (/) found at the end of a URL so that such URLs are treated the same as their equivalents without a trailing slash. As an example:

  • https://norconex.com/ will become https://norconex.com
  • https://norconex.com/blah/ will become https://norconex.com/blah

It can be used with the 20 other normalization rules offered, and you can still provide your own.

[/ezcol_1half]

[ezcol_1half_end]

<urlNormalizer class="com.norconex.collector.http.url.impl.GenericURLNormalizer">
  <normalizations>
    removeFragment, lowerCaseSchemeHost, upperCaseEscapeSequence,
    decodeUnreservedCharacters, removeDefaultPort,
    encodeNonURICharacters, removeTrailingSlash
  </normalizations>
</urlNormalizer>

[/ezcol_1half_end]

Prevent sitemap detection attempts

[ezcol_1half]

By default, StandardSitemapResolverFactory is enabled and tries to detect whether a sitemap file exists at the “/sitemap.xml” or “/sitemap_index.xml” URL path. For websites without sitemap files at these locations, this creates unnecessary HTTP request failures. It is now possible to specify an empty “path” so that no such discovery takes place. In that case, the crawler relies on sitemap URLs explicitly provided as “start URLs” or on sitemaps defined in “robots.txt” files.

[/ezcol_1half]

[ezcol_1half_end]

<sitemapResolverFactory>
  <path/>
</sitemapResolverFactory>

[/ezcol_1half_end]

Count occurrences of matching text

[ezcol_1half]

Thanks to the new CountMatchesTagger, it is now possible to count the number of times a piece of text or regular expression occurs in a document's content or one of its fields. A sample use case is to use the obtained count as a relevancy factor in search engines. For instance, one may use this new feature to find out how many segments are in a document URL, giving less importance to documents with many segments.

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.CountMatchesTagger"> 
  <countMatches 
      fromField="document.reference"
      toField="urlSegmentCount" 
      regex="true">
    /[^/]+
  </countMatches>
</tagger>

[/ezcol_1half_end]

Multiple date formats

[ezcol_1half]

DateFormatTagger now accepts multiple source formats when attempting to convert dates from one format to another. This is particularly useful when the date formats found in documents or web pages are not consistent. Some products, such as Apache Solr, usually expect dates to be of a specific format only.

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.DateFormatTagger"
    fromField="Last-Modified"
    toField="solr_date"
    toFormat="yyyy-MM-dd'T'HH:mm:ss.SSS'Z'">
  <fromFormat>EEE, dd MMM yyyy HH:mm:ss zzz</fromFormat>
  <fromFormat>EPOCH</fromFormat>
</tagger>

[/ezcol_1half_end]

DOM enhancements

[ezcol_1half]

DOM-related features just got better. First, the DOMTagger, which allows one to extract values from an XML/HTML document using a DOM-like structure, now supports an optional “fromField” to read the markup content from a field instead of the document content. It also supports a new “defaultValue” attribute to store a value of your choice when there are no matches for your DOM selector. In addition, both DOMContentFilter and DOMTagger now support many more selector extraction options: ownText, data, id, tagName, val, className, cssSelector, and attr(attributeKey).

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.DOMTagger">
  <dom selector="div.contact" toField="htmlContacts" extract="html" />
</tagger>
<tagger class="com.norconex.importer.handler.tagger.impl.DOMTagger"
    fromField="htmlContacts">
  <dom selector="div.firstName" toField="firstNames" 
       extract="ownText" defaultValue="NO_FIRST_NAME" />
  <dom selector="div.lastName"  toField="lastNames" 
       extract="ownText" defaultValue="NO_LAST_NAME" />
</tagger>

[/ezcol_1half_end]

More control of embedded documents parsing

[ezcol_1half]

GenericDocumentParserFactory now allows you to control which embedded documents you do not want extracted from their containing documents (e.g., do not extract embedded images). It also allows you to control which container documents should not have their embedded documents extracted (e.g., do not extract documents embedded in MS Office documents). Finally, it now lets you specify, via regular expression, which content types should have their embedded documents “split” into separate files, as if they were standalone documents (e.g., documents contained in a zip file).

[/ezcol_1half]

[ezcol_1half_end]

<documentParserFactory class="com.norconex.importer.parser.GenericDocumentParserFactory">
  <embedded>
    <splitContentTypes>application/zip</splitContentTypes>
    <noExtractEmbeddedContentTypes>image/.*</noExtractEmbeddedContentTypes>
    <noExtractContainerContentTypes>
      application/(msword|vnd\.ms-.*|vnd\.openxmlformats-officedocument\..*)
    </noExtractContainerContentTypes>
  </embedded>
</documentParserFactory>

[/ezcol_1half_end]

Document parsers now XML configurable

[ezcol_1half]

GenericDocumentParserFactory now makes it possible to override one or more of the parsers the Importer module uses by default, via regular XML configuration. For any content type, you can specify your own custom parser, including an external parser.

[/ezcol_1half]

[ezcol_1half_end]

<documentParserFactory class="com.norconex.importer.parser.GenericDocumentParserFactory">
  <parsers>
    <parser contentType="text/html" 
        class="com.example.MyCustomHTMLParser" />
    <parser contentType="application/pdf" 
        class="com.norconex.importer.parser.impl.ExternalParser">
      <command>java -jar c:\Apps\pdfbox-app-2.0.2.jar ExtractText ${INPUT} ${OUTPUT}</command>
    </parser>
  </parsers>
</documentParserFactory>

[/ezcol_1half_end]

More languages detected

[ezcol_1half]

LanguageTagger now uses Tika language detection, which supports at least 70 languages.

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.LanguageTagger">
  <languages>en, fr</languages>
</tagger>

[/ezcol_1half_end]

What else?

Other changes and stability improvements were made to this release. A few examples:

  • New “checkcfg” launch action that helps detect configuration issues before an actual launch.
  • Can now specify “notFoundStatusCodes” on GenericMetadataFetcher (see the example after this list).
  • GenericLinkExtractor no longer extracts URL from HTML/XML comments by default.
  • URL referrer data is now always preserved by default.
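
As an illustration of the new “notFoundStatusCodes” option mentioned above, a metadata fetcher could be configured along these lines. This is a sketch only: the status code values are examples, and the exact option names should be confirmed against the GenericMetadataFetcher documentation.

<!-- Sketch only: treat 404 and 410 responses as "not found" -->
<metadataFetcher class="com.norconex.collector.http.fetch.impl.GenericMetadataFetcher">
  <validStatusCodes>200</validStatusCodes>
  <notFoundStatusCodes>404,410</notFoundStatusCodes>
</metadataFetcher>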

To get the complete list of changes, refer to the HTTP Collector release notes, or the release notes of dependent Norconex libraries such as: Importer release notes and Collector Core release notes.

Useful links

HTTP Collector 2.5

Norconex has released Norconex HTTP Collector version 2.5.0! This new version of our open source web crawler gives you more control over re-crawl frequencies and download delays, and it allows you to specify a locale for date parsing/formatting. The following highlights these key changes and additions:

Minimum re-crawl frequency

[ezcol_1half]

Not all web pages and documents are updated equally often. In addition, capturing updates right away is not equally important for all types of content. Re-crawling every page every time to find out whether it changed can be time consuming (and sometimes taxing) on larger sites. For instance, you may want to re-crawl news pages more regularly than other types of pages on a given site. Luckily, some websites provide sitemaps that give crawlers pointers to their document update frequencies.

This release introduces “recrawlable resolvers” to help control the frequency of document re-crawls. You can now specify a minimum re-crawl delay, based on a document matching content type or reference pattern. The default implementation is GenericRecrawlableResolver, which supports sitemap “lastmod” and “changefreq” in addition to custom re-crawl frequencies.

[/ezcol_1half]

[ezcol_1half_end]

<recrawlableResolver
    class="com.norconex.collector.http.recrawl.impl.GenericRecrawlableResolver"
    sitemapSupport="last" >
  <minFrequency applyTo="contentType" value="monthly">application/pdf</minFrequency>
  <minFrequency applyTo="reference" value="1800000">.*latest-news.*\.html</minFrequency>
</recrawlableResolver>

[/ezcol_1half_end]

Download delays based on document URL

[ezcol_1half]

ReferenceDelayResolver is a new “delay resolver” that controls delays between each document download. It allows you to define different delays for different URL patterns. This can be useful for more fragile websites negatively impacted by the fast download of several big documents (e.g., PDFs). In such cases, introducing a delay between certain types of download can help keep the crawled website performance intact.

[/ezcol_1half]

[ezcol_1half_end]

<delay class="com.norconex.collector.http.delay.impl.ReferenceDelayResolver"
    default="2000"
    ignoreRobotsCrawlDelay="true"
    scope="crawler" >
  <pattern delay="10000">.*\.pdf$</pattern>
</delay>

[/ezcol_1half_end]

Specify a locale in date parsing/formatting

[ezcol_1half]

Thanks to the Norconex Importer 2.5.2 dependency update, it is now possible to specify a locale when parsing/formatting dates with CurrentDateTagger and DateFormatTagger.

[/ezcol_1half]

[ezcol_1half_end]

<tagger class="com.norconex.importer.handler.tagger.impl.DateFormatTagger"
    fromField="date"
    fromFormat="EEE, dd MMM yyyy HH:mm:ss 'GMT'"
    fromLocale="fr"
    toFormat="yyyy-MM-dd'T'HH:mm:ss'Z'"
    keepBadDates="false"
    overwrite="true" />

[/ezcol_1half_end]

 

Useful links

  • Download Norconex HTTP Collector
  • Get started with Norconex HTTP Collector
  • Report your issues and questions on Github
  • Norconex HTTP Collector Release Notes

 

Norconex HTTP Collector 2.3.0

Norconex is proud to release version 2.3.0 of its Norconex HTTP Collector open-source web crawler.  Thanks to incredible community feedback and efforts, we have implemented several feature requests, and your favorite crawler is now more stable than ever. The following describes only a handful of these new features with a focus on XML configuration. Refer to the product release notes for a complete list of changes.

Restrict crawling to a specific site

[ezcol_1half]

Up until now, you could restrict crawling to a specific domain, protocol, and port using one or more reference filters (e.g., RegexReferenceFilter). Norconex HTTP Collector 2.3.0 features new configuration options to “stay on a site”, called stayOnProtocol, stayOnDomain, and stayOnPort.  These new settings can be applied to the <startURLs> tag of your XML configuration.  They are particularly useful when you have many “start URLs” defined and you do not want to create many reference filters to stay on those sites.

[/ezcol_1half]

[ezcol_1half_end]

<startURLs stayOnDomain="true" stayOnPort="true" stayOnProtocol="true">
  <url>http://mysite.com</url>
</startURLs>

[/ezcol_1half_end]

 

Add HTTP request headers

[ezcol_1half]

GenericHttpClientFactory now allows you to set HTTP request headers on every HTTP call a crawler makes. This new feature can save the day for sites that expect certain header values to be present to render properly. For instance, some sites may rely on the “Accept-Language” request header to decide which language to use when rendering a page.

[/ezcol_1half]

[ezcol_1half_end]

<httpClientFactory>
  <headers>
    <header name="Accept-Language">fr</header>
    <header name="From">john@smith.com</header>
  </headers>
</httpClientFactory>

[/ezcol_1half_end]

Specify a sitemap as a start URL

[ezcol_1half]

It is now possible to specify one or more sitemap URLs as “start URLs.”  This is in addition to the crawler attempting to detect sitemaps at standard locations. To only use the sitemap URL provided as a start URL, you can disable the sitemap discovery process by adding ignore="true" to <sitemapResolverFactory> as shown in the code sample.  To only crawl pages listed in sitemap files and not further follow links found in those pages, remember to set the <maxDepth> to zero.

[/ezcol_1half]

[ezcol_1half_end]

<startURLs>
  <sitemap>http://mysite.com/sitemap.xml</sitemap>
</startURLs>
<sitemapResolverFactory ignore="true" />

[/ezcol_1half_end]

Basic URL normalization always performed

[ezcol_1half]

URL normalization is now in effect by default using GenericURLNormalizer. The following are the default normalization rules applied:

  • Removing the URL fragment (the “#” character and everything after)
  • Converting the scheme and host to lower case
  • Capitalizing letters in escape sequences
  • Decoding percent-encoded unreserved characters
  • Removing the default port
  • Encoding non-URI characters

You can always overwrite the default normalization settings or turn off normalization altogether by adding the disabled="true" attribute to the <urlNormalizer> tag.

[/ezcol_1half]

[ezcol_1half_end]

<urlNormalizer>
  <normalizations>
    lowerCaseSchemeHost, upperCaseEscapeSequence, removeDefaultPort, 
    removeDotSegments, removeDirectoryIndex, removeFragment, addWWW 
  </normalizations>
  <replacements>
    <replace><match>&amp;view=print</match></replace>
    <replace>
       <match>(&amp;type=)(summary)</match>
       <replacement>$1full</replacement>
    </replace>
  </replacements>
</urlNormalizer>

[/ezcol_1half_end]

Scripting Language and DOM navigation

We introduced additional features when we upgraded the Norconex Importer dependency to its latest version (2.4.0). You can now use scripting languages to insert your own document processing logic or reference DOM elements of an XML or HTML file using a friendly syntax. Refer to the Importer 2.4.0 release announcement for more details.
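
As a quick illustration of the scripting option, a JavaScript-based ScriptTagger might look like the following. This is a minimal sketch: the “hasNorconex” field and the matched text are made up for this example, and the available script variables should be confirmed against the ScriptTagger documentation.

<!-- Sketch only: flag documents whose content mentions "Norconex" -->
<tagger class="com.norconex.importer.handler.tagger.impl.ScriptTagger"
    engineName="JavaScript">
  <script><![CDATA[
      if (content.indexOf('Norconex') > -1) {
          metadata.addString('hasNorconex', 'true');
      }
  ]]></script>
</tagger>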

Useful links

There is so much more offered by this release. Use the following links to find out more about Norconex HTTP Collector.

The latest release of Norconex HTTP Collector provides more content transformation capabilities, canonical URL support, increased stability, and additional features.

Norconex HTTP Collector 2.2 now available

As the Internet grows, so does the demand for better ways to extract and process web data. Several commercial and open-source/free web crawling solutions have been available for years now. Unfortunately, most are limited by one or more of the following:

  • Feature set is too limited
  • Unfriendly and complex to setup
  • Poorly documented
  • Require strong programming skills
  • No longer supported or active
  • Integrates with a single search engine or repository
  • Geared solely toward big data solutions (as the popular Apache Nutch has become)
  • Difficult to extend with your own features
  • High cost of ownership

Norconex is changing this with its full-featured, enterprise-class, open-source web crawler solution. Norconex HTTP Collector is entirely configurable using simple XML, yet offers many extension points for adventurous Java programmers. It integrates with virtually any repository or search engine (Solr, Elasticsearch, IDOL, GSA, etc.). You will find it is thoroughly documented in a single location, with sample configuration files working out of the box on any operating system.

The latest release builds upon the great community requests and feedback to provide the following highlights:

Canonical Links Detector

[ezcol_1half]

Canonical links are a way for the webmaster to help crawlers avoid duplicates by indicating the preferred URL for accessing a web page. The HTTP Collector now detects canonical links found in both HTML and HTTP headers.

The GenericCanonicalLinkDetector looks within the HTML <head> tags for a <link> tag following this pattern:

<link rel="canonical" href="https://norconex.com/sample" />

It also looks for an HTTP response header field named “Link” with a value following this pattern:

<https://norconex.com/sample.pdf> rel="canonical"

The advantage for webmasters of defining canonical URLs in the HTTP response header rather than in the HTML page is twofold. First, it allows web crawlers to reject non-canonical pages before they are downloaded (saving bandwidth). Second, it can apply to any content type, not just HTML pages.

[/ezcol_1half]

[ezcol_1half_end]

<canonicalLinkDetector
    class="com.norconex.collector.http.url.impl.GenericCanonicalLinkDetector"
    ignore="false">
</canonicalLinkDetector>

[/ezcol_1half_end]

URL Reports Creation

[ezcol_1half]

URLStatusCrawlerEventListener is a new crawler event listener that can produce spreadsheet-friendly reports on fetched URLs and their statuses. Among other things, it can be useful for finding broken links on a site being crawled.

[/ezcol_1half]

[ezcol_1half_end]

<listener
    class="com.norconex.collector.http.crawler.event.impl.URLStatusCrawlerEventListener">
  <statusCodes>404</statusCodes>
  <outputDir>/a/path/broken-links.tsv</outputDir>
</listener>

[/ezcol_1half_end]

Spoiled State Resolver

[ezcol_1half]

A new class called GenericSpoiledReferenceStrategizer allows you to specify how to handle URLs that were once valid but turned “bad” on a subsequent crawl. You can choose to delete them from your repository, give them a single chance to recover on the next crawl, or simply ignore them.

[/ezcol_1half]

[ezcol_1half_end]

<spoiledReferenceStrategizer 
    class="com.norconex.collector.core.spoil.impl.GenericSpoiledReferenceStrategizer"
    fallbackStrategy="IGNORE">
  <mapping state="NOT_FOUND" strategy="DELETE" />
  <mapping state="BAD_STATUS" strategy="GRACE_ONCE" />
  <mapping state="ERROR" strategy="IGNORE" />
</spoiledReferenceStrategizer>

[/ezcol_1half_end]

Extra Filtering and Data Manipulation Options

Norconex HTTP Collector internally relies on the Norconex Importer library for parsing documents and manipulating text and metadata. The latest release of the Importer brings you several new options, such as:

  • CurrentDateTagger: Add the current date to a document (see the example after this list).
  • DateMetadataFilter: Accepts or rejects a document based on the date value of a metadata field.
  • NumericMetadataFilter: Accepts or rejects a document based on the numeric value of a metadata field.
  • TextPatternTagger: Extracts and adds all text values matching the regular expression provided to a metadata field.
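
For example, the new CurrentDateTagger could be configured roughly like this to stamp each document with the crawl date. This is a sketch: the “crawl_date” field name is made up, and the attribute names should be verified against the CurrentDateTagger documentation.

<!-- Sketch only: store the current date in a (made-up) "crawl_date" field -->
<tagger class="com.norconex.importer.handler.tagger.impl.CurrentDateTagger"
    field="crawl_date"
    format="yyyy-MM-dd'T'HH:mm:ss" />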

Want to crawl a filesystem instead?

Whether you are interested in crawling a local drive, a network drive, an FTP site, WebDAV, or any other type of filesystem, the Norconex Filesystem Collector is for you; it was recently upgraded to version 2.2.0 as well. Check its release notes for details.

Useful Links

Optical character recognition (OCR), content translation, title generation, and detection and text extraction from more file formats are among the new features now part of your favorite crawlers: Norconex HTTP Collector 2.1.0 and Norconex Filesystem Collector 2.1.0. Both are available now and can be downloaded for free. They ship with and use the latest version of the Norconex Importer module, which is in large part responsible for many of these new features.

For more details and usage examples, check this article.

These two Collector releases also include bug fixes and stability improvements. We recommend that existing users upgrade.

Get your copy

Download Norconex HTTP Collector

Download Norconex Filesystem Collector

Broken link

This tutorial will show you how to extend Norconex HTTP Collector using Java to create a link checker to ensure all URLs in your web pages are valid. The link checker will crawl your target site(s) and create a report file of bad URLs. It can be used with any existing HTTP Collector configuration (i.e., crawl a website to extract its content while simultaneously reporting on its broken links). If you are not familiar with Norconex HTTP Collector already, you can refer to our Getting Started guide.

The link checker we will create will record:

  • URLs that were not found (404 HTTP status code)
  • URLs that generated other invalid HTTP status codes
  • URLs that generated an error from the HTTP Collector

The links will be stored in a tab-delimited-format, where the first row holds the column headers. The columns will be:

  • Referrer: the page containing the bad URL
  • Bad URL: the culprit
  • Cause: one of “Not Found,” “Bad Status,” or “Crawler Error”

One of the goals of this tutorial is to hopefully show you how easy it is to add your own code to the Norconex HTTP Collector. You can download the files used to create this tutorial at the bottom of this page. You can jump right there if you are already familiar with Norconex HTTP Collector. Otherwise, keep reading for more information.

Get your workspace setup

To perform this tutorial in your own environment, you have two main choices. If you are a seasoned Java developer and an Apache Maven enthusiast, you can create a new Maven project including Norconex HTTP Collector as a dependency. You can find the dependency information at the bottom of its download page.
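
For reference, the dependency declaration in your pom.xml would look something like the following. The coordinates and version shown here are indicative only; confirm them against the dependency information on the download page.

<!-- Sketch only: verify coordinates and version against the download page -->
<dependency>
  <groupId>com.norconex.collectors</groupId>
  <artifactId>norconex-collector-http</artifactId>
  <version>2.7.0</version>
</dependency>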

If you want a simpler option, first download the latest version of Norconex HTTP Collector and unzip the file to a location of your choice. Then create a Java project in your favorite IDE.   At this point, you will need to add to your project classpath all Jar files found in the “lib” folder under your install location. To avoid copying compiled files manually every time you change them, you can change the compile output directory of your project to be the “classes” folder found under your install location. That way, the collector will automatically detect your compiled code when you start it.

You are now ready to code your link checker.

Listen to crawler events

There are several interfaces offered by the Norconex HTTP Collector that we could implement to achieve the functionality we seek. One of the easiest approaches in this case is probably to listen for crawler events. The collector provides an interface for this called ICrawlerEventListener. You can have any number of event listeners for your crawler, but we only need to create one. We can implement this interface with our link checking logic:

package com.norconex.blog.linkchecker;

import java.io.FileWriter;
import java.io.IOException;
// Other needed imports (Norconex collector classes such as ICrawlerEventListener,
// CrawlerEvent, HttpCrawlData, and CollectorException) are omitted here for brevity;
// they are included in the downloadable source files.

public class LinkCheckerCrawlerEventListener 
        implements ICrawlerEventListener, IXMLConfigurable {

    private String outputFile;

    @Override
    public void crawlerEvent(ICrawler crawler, CrawlerEvent event) {
        String type = event.getEventType();
        
        // Create new file on crawler start
        if (CrawlerEvent.CRAWLER_STARTED.equals(type)) {
            writeLine("Referrer", "Bad URL", "Cause", false);
            return;
        }

        // Only keep if a bad URL
        String cause = null;
        if (CrawlerEvent.REJECTED_NOTFOUND.equals(type)) {
            cause = "Not found";
        } else if (CrawlerEvent.REJECTED_BAD_STATUS.equals(type)) {
            cause = "Bad status";
        } else if (CrawlerEvent.REJECTED_ERROR.equals(type)) {
            cause = "Crawler error";
        } else {
            return;
        }

        // Write bad URL to file
        HttpCrawlData httpData = (HttpCrawlData) event.getCrawlData();
        writeLine(httpData.getReferrerReference(), 
                httpData.getReference(), cause, true);
    }

    private void writeLine(
            String referrer, String badURL, String cause, boolean append) {
        try (FileWriter out = new FileWriter(outputFile, append)) {
            out.write(referrer);
            out.write('\t');
            out.write(badURL);
            out.write('\t');
            out.write(cause);
            out.write('\n');
        } catch (IOException e) {
            throw new CollectorException("Cannot write bad link to file.", e);
        }
    }

    // More code exists: download source files
}

As you can see, the previous code focuses only on the crawler events we are interested in and stores URL information associated with these events. We do not have to worry about other aspects of web crawling in that implementation. The above code is all the Java we need to write for our link checker.

Configure your crawler

If you have not seen a Norconex HTTP Collector configuration file before, you can find sample ones for download, along with all options available, on the product configuration page.

This is how we reference the link checker we created:

<crawlerListeners>
  <listener class="com.norconex.blog.linkchecker.LinkCheckerCrawlerEventListener">
    <outputFile>${workdir}/badlinks.tsv</outputFile>
  </listener>
</crawlerListeners>

By default, the Norconex HTTP Collector does not keep track of referring pages with every URL it extracts (to minimize information storage and increase performance). Because having a broken URL without knowing which page holds it is not very useful, we want to keep these referring pages. Luckily, this is just a flag to enable on an existing class:

<linkExtractors>
  <extractor class="com.norconex.collector.http.url.impl.HtmlLinkExtractor"
     keepReferrerData="true" />
</linkExtractors>

In addition to these configuration settings, you will want to apply more options, such as restricting your link checker's scope to only your site or a specific sub-section of your site. Use the sample configuration file at the bottom of this page as your starting point and modify it according to your needs.

You are ready

Once you have your configuration file ready and the compiled Link Checker listener in place, you can give it a try (replace .bat with .sh on *nix platforms):

collector-http.bat -a start -c path/to/your/config.xml

The bad link report file will be written at the location you specified above.

Source files

Download the source files used to create this article

 

Despite all the “noise” on social media sites, we can’t deny how valuable information found on social media networks can be for some organizations. Somewhat less obvious is how to harvest that information for your own use. You can find many posts online asking about the best ways to crawl this or that social media service: Shall I write a custom web scraper? Should I purchase a product to do so?

This article will show you how to crawl Facebook posts.

Norconex HTTP Collector 1.3

Release 1.3 of Norconex HTTP Collector is now available. Among the new features added to our open-source web crawler, you can expect the following:

  • Now supports NTLM authentication. Experimental support added for SPNEGO and Kerberos.
  • Document checksums are added to each document metadata.
  • Refactoring of HTTPClient creation with many new configuration options added (connection timeout, charset, maximum redirects, and several more).
  • Can now optionally trust all SSL certificates.
  • Integrates new features of Norconex Importer 1.2.0, such as support for WordPerfect document parsing, new filters and transformers, etc.
  • Integrates new features of Norconex Committer 1.2.0 such as defining multiple committers, retrying upon commit failure, etc.
  • Other third-party library upgrades.

Download it now!