
This feature release of Norconex Importer brings bug fixes, enhancements, and great new features, such as OCR and translation support.  Keep reading for all the details on some of this release’s most interesting changes. While Java can be used to configure and use the Importer, XML configuration is used here for demonstration purposes.  You can find all Importer configuration options here.

About Norconex Importer

Norconex Importer is an open-source product for extracting and manipulating text and metadata from files of various formats.  It works for stand-alone use or as a Java library.  It’s an essential component of Norconex Collectors for processing crawled documents.  You can make Norconex Importer an essential piece of your ETL pipeline.

OCR support

[ezcol_1half]

Norconex Importer now leverages the OCR capability newly introduced in Apache Tika 1.7. To convert popular image formats (PNG, TIFF, JPEG, etc.) to text, download a copy of Tesseract OCR for your operating system and reference its install location in your Importer configuration.  When enabled, OCR will also process embedded images (e.g., a PDF containing an image of text). The class to configure to enable OCR support is GenericDocumentParserFactory.

[/ezcol_1half]

[ezcol_1half_end]

<documentParserFactory 
    class="com.norconex.importer.parser.GenericDocumentParserFactory" >
  <ocr path="(path to Tesseract OCR software install)">
    <languages>eng,fra</languages>
  </ocr>
</documentParserFactory>

 [/ezcol_1half_end]

Translation support

[ezcol_1half]

With the new TranslatorSplitter class, it’s now possible to hook Norconex Importer to a translation API.  The Apache Tika translation API has been extended to provide the ability to translate the document content, specific document fields, or a mix of both.  The translation APIs supported out-of-the-box are Microsoft, Google, Lingo24, and Moses.

[/ezcol_1half]

[ezcol_1half_end]

<postParseHandlers>
  <splitter
      class="com.norconex.importer.handler.splitter.impl.TranslatorSplitter"
      api="microsoft">
    <clientId>YOUR_CLIENT_ID</clientId>
    <secretId>YOUR_SECRET_ID</secretId>
  </splitter>
</postParseHandlers>

 [/ezcol_1half_end]

Dynamic title creation

[ezcol_1half]

Too many documents lack a valid title, when they have a title at all.  What do you do when you need a title to represent each document?  Do you take the file name as the title? Not so nice.  Do you take the document property called “title”?  Not reliable.  You now have a new option with the TitleGeneratorTagger.  It will try to detect a decent title out of your document, and in cases where it can’t, it offers a few fallback options, so you always get something back.

[/ezcol_1half]

[ezcol_1half_end]

<postParseHandlers>
  <tagger class="com.norconex.importer.handler.tagger.impl.TitleGeneratorTagger"
          toField="generated_title"
          fallbackMaxLength="250"
          detectHeading="true"
          detectHeadingMinLength="10"
          detectHeadingMaxLength="500" />
</postParseHandlers>

 [/ezcol_1half_end]

Saving of parsing errors

[ezcol_1half]

A new top-level configuration option was introduced so that every file generating parsing errors gets saved in a location of your choice.  These files are saved along with any metadata obtained so far and the Java exception that was thrown.  This is a great addition to help troubleshoot parsing failures.

[/ezcol_1half]

[ezcol_1half_end]

<importer>
  <parseErrorsSaveDir>/path/to/store/bad/files</parseErrorsSaveDir>
</importer>

 [/ezcol_1half_end]

Document parsing improvements

The content type detection accuracy and performance were improved with this release.  In addition, document parsing features the following additions and improvements:

  • Better PDF support, with the addition of PDF XFA (dynamic forms) text extraction and improved space detection (eliminating many space-stripping issues).  PDFs containing JBIG2 and JPEG 2000 images are now parsed properly.
  • New XFDL parser (PureEdge Extensible Forms Description Language), supporting both Gzipped/Base64-encoded and plain text versions.
  • New, much-improved WordPerfect parser that parses WordPerfect documents according to the WordPerfect file specifications.
  • New Quattro Pro parser that parses Quattro Pro documents according to the Quattro Pro file specifications.
  • JBIG2 and JPEG 2000 image formats are now recognized.

You want more?

The list of changes and improvements doesn’t stop here.  Read the product release notes for a complete list of changes.

Unfamiliar with this product? No sweat — read this “Getting Started” page.

If not already out when you read this, the next feature releases of Norconex HTTP Collector and Norconex Filesystem Collector will both ship with this version of the Importer.  Can’t wait for the release? Manually upgrade the Norconex Importer library to take advantage of these new features in your favorite crawler.

Download Norconex Importer 2.1.0.

Release 1.6.0 of Norconex Commons Lang provides new Java utility classes and enhancements to existing ones:

New Classes

TimeIdGenerator

[ezcol_1half]

Use TimeIdGenerator when you need to generate numeric IDs that are unique within a JVM. It generates Java long values that are guaranteed to be in order (but can have gaps).  It can generate up to 1 million unique IDs per millisecond. Read Javadoc.

[/ezcol_1half]

[ezcol_1half_end]

long id = 0;

id = TimeIdGenerator.next();
System.out.println(id); // prints 1427256596604000000

id = TimeIdGenerator.last();
System.out.println(id); // prints 1427256596604000000

id = TimeIdGenerator.next();
System.out.println(id); // prints 1427256596604000001

[/ezcol_1half_end]

TextReader

[ezcol_1half]

A new class for reading large text one chunk at a time, based on a specified maximum read size. When the text is too large, it tries to split it wisely at paragraph, sentence, or word boundaries (whichever is possible). Read Javadoc.

[/ezcol_1half]

[ezcol_1half_end]

// Process maximum 500KB at a time
TextReader reader = new TextReader(originalReader, 500 * 1024);
String textChunk = null;
while ((textChunk = reader.readText()) != null) {
    // do something with textChunk
}
reader.close();

[/ezcol_1half_end]

ByteArrayOutputStream

[ezcol_1half]

An alternate version of the Java and Apache Commons ByteArrayOutputStream classes. Like the Apache version, this version is faster than the Java ByteArrayOutputStream. In addition, it provides methods for obtaining a subset of the bytes written so far, anywhere from zero to the total number of bytes written. Read Javadoc.

[/ezcol_1half]

[ezcol_1half_end]

ByteArrayOutputStream out = new ByteArrayOutputStream();
out.write("ABCDE".getBytes());        
out.write("FGHIJKLMNOPQRSTUVWXYZ".getBytes());        

byte[] b = new byte[10];
out.getBytes(b, 0);
System.out.println(new String(b)); // prints ABCDEFGHIJ
System.out.println((char) out.getByte(15)); // prints P

[/ezcol_1half_end]

Enhancements

IOUtil enhancements

The following utility methods were added to the IOUtil class:

Other improvements

Get your copy

Download Norconex Commons Lang 1.6.0.

You can also view the release notes for a complete list of changes.

 

This tutorial will show you how to extend Norconex HTTP Collector using Java to create a link checker that ensures all URLs in your web pages are valid. The link checker will crawl your target site(s) and create a report file of bad URLs. It can be used with any existing HTTP Collector configuration (i.e., crawl a website to extract its content while simultaneously reporting on its broken links).  If you are not already familiar with Norconex HTTP Collector, you can refer to our Getting Started guide.

The link checker we will create will record:

  • URLs that were not found (404 HTTP status code)
  • URLs that generated other invalid HTTP status codes
  • URLs that generated an error from the HTTP Collector

The links will be stored in a tab-delimited format, where the first row holds the column headers. The columns will be as follows (a hypothetical sample of the output appears after the list):

  • Referrer: the page containing the bad URL
  • Bad URL: the culprit
  • Cause: one of “Not found,” “Bad status,” or “Crawler error”
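
To illustrate the format, here is a hypothetical sample of what the report file could look like (these URLs are made up for demonstration purposes only):

Referrer	Bad URL	Cause
http://www.example.com/index.html	http://www.example.com/missing-page.html	Not found
http://www.example.com/products.html	http://www.example.com/old-brochure.pdf	Bad status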

One of the goals of this tutorial is to hopefully show you how easy it is to add your own code to the Norconex HTTP Collector. You can download the files used to create this tutorial at the bottom of this page. You can jump right there if you are already familiar with Norconex HTTP Collector. Otherwise, keep reading for more information.

Get your workspace setup

To perform this tutorial in your own environment, you have two main choices. If you are a seasoned Java developer and an Apache Maven enthusiast, you can create a new Maven project including Norconex HTTP Collector as a dependency. You can find the dependency information at the bottom of its download page.

If you want a simpler option, first download the latest version of Norconex HTTP Collector and unzip the file to a location of your choice. Then create a Java project in your favorite IDE.   At this point, you will need to add to your project classpath all Jar files found in the “lib” folder under your install location. To avoid copying compiled files manually every time you change them, you can change the compile output directory of your project to be the “classes” folder found under your install location. That way, the collector will automatically detect your compiled code when you start it.

You are now ready to code your link checker.

Listen to crawler events

There are several interfaces offered by the Norconex HTTP Collector that we could implement to achieve the functionality we seek. One of the easiest approaches in this case is probably to listen for crawler events. The collector provides an interface for this called ICrawlerEventListener. You can have any number of event listeners for your crawler, but we only need to create one. We can implement this interface with our link checking logic:

package com.norconex.blog.linkchecker;

public class LinkCheckerCrawlerEventListener 
        implements ICrawlerEventListener, IXMLConfigurable {

    private String outputFile;

    @Override
    public void crawlerEvent(ICrawler crawler, CrawlerEvent event) {
        String type = event.getEventType();
        
        // Create new file on crawler start
        if (CrawlerEvent.CRAWLER_STARTED.equals(type)) {
            writeLine("Referrer", "Bad URL", "Cause", false);
            return;
        }

        // Only keep if a bad URL
        String cause = null;
        if (CrawlerEvent.REJECTED_NOTFOUND.equals(type)) {
            cause = "Not found";
        } else if (CrawlerEvent.REJECTED_BAD_STATUS.equals(type)) {
            cause = "Bad status";
        } else if (CrawlerEvent.REJECTED_ERROR.equals(type)) {
            cause = "Crawler error";
        } else {
            return;
        }

        // Write bad URL to file
        HttpCrawlData httpData = (HttpCrawlData) event.getCrawlData();
        writeLine(httpData.getReferrerReference(), 
                httpData.getReference(), cause, true);
    }

    private void writeLine(
            String referrer, String badURL, String cause, boolean append) {
        try (FileWriter out = new FileWriter(outputFile, append)) {
            out.write(referrer);
            out.write('\t');
            out.write(badURL);
            out.write('\t');
            out.write(cause);
            out.write('\n');
        } catch (IOException e) {
            throw new CollectorException("Cannot write bad link to file.", e);
        }
    }

    // More code exists: download source files
}

As you can see, the previous code focuses only on the crawler events we are interested in and stores URL information associated with these events. We do not have to worry about other aspects of web crawling in that implementation. The above code is all the Java we need to write for our link checker.

Configure your crawler

If you have not seen a Norconex HTTP Collector configuration file before, you can find sample ones for download, along with all options available, on the product configuration page.

This is how we reference the link checker we created:

<crawlerListeners>
  <listener class="com.norconex.blog.linkchecker.LinkCheckerCrawlerEventListener">
    <outputFile>${workdir}/badlinks.tsv</outputFile>
  </listener>
</crawlerListeners>

By default, the Norconex HTTP Collector does not keep track of referring pages with every URL it extracts (to minimize information storage and increase performance). Because having a broken URL without knowing which page holds it is not very useful, we want to keep these referring pages. Luckily, this is just a flag to enable on an existing class:

<linkExtractors>
  <extractor class="com.norconex.collector.http.url.impl.HtmlLinkExtractor"
     keepReferrerData="true" />
</linkExtractors>

In addition to these configuration settings, you will want to apply more options, such as restricting your link checker’s scope to only your site or a specific sub-section of your site. Use the configuration file sample at the bottom of this page as your starting point and modify it according to your needs.
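
For instance, a reference filter is one way to keep the crawler within a single site. The following is only a hypothetical sketch (the URL pattern is made up, and you should confirm the filter class and options against your HTTP Collector version and the sample configuration files):

<referenceFilters>
  <!-- Keep only URLs under the site being checked (hypothetical pattern) -->
  <filter class="com.norconex.collector.core.filter.impl.RegexReferenceFilter"
          onMatch="include">
    http://www\.example\.com/.*
  </filter>
</referenceFilters>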

You are ready

Once you have your configuration file ready and the compiled Link Checker listener in place, you can give it a try (replace .bat with .sh on *nix platforms):

collector-http.bat -a start -c path/to/your/config.xml

The bad link report file will be written at the location you specified above.

Source files

Download the source files used to create this article

 

Despite all the “noise” on social media sites, we can’t deny how valuable information found on social media networks can be for some organizations. Somewhat less obvious is how to harvest that information for your own use. You can find many posts online asking about the best ways to crawl this or that social media service: Shall I write a custom web scraper? Should I purchase a product to do so?

This article will show you how to crawl Facebook posts (more…)

This feature release of Norconex Commons Lang brings the following additions…

Simple Pipeline

Useful if you want to quickly assemble multiple tasks to be run as a single “pipeline” while keeping things ultra simple.  The following example does it all in a single class only to keep it short.

public class MyPipeline extends Pipeline<String> {

    public MyPipeline() {
        addStage(new MyTask1());
        addStage(new MyTask2());
    }
    
    // Class: Task1
    private class MyTask1 implements IPipelineStage<String> {
        @Override
        public boolean execute(String context) {
            System.out.println("Task 1 executed: " + context);
            return true;
        }
    }  

    // Class: Task2
    private class MyTask2 implements IPipelineStage<String> {
        @Override
        public boolean execute(String context) {
            System.out.println("Task 2 executed: " + context);
            return true;
        }
    }  
    
    public static void main(String[] args) {
        new MyPipeline().execute("hello");
        
        // Will print out:
        //     Task 1 executed: hello
        //     Task 2 executed: hello
    }
}

Cacheable Streams

There are several excellent object caching mechanisms already available to Java if you need something sophisticated.  This release offers a very lightweight cache implementation that can make InputStream and OutputStream reusable.  It stores the stream in memory until a configurable threshold is reached, after which it switches to fast file lookup.  A CachedStreamFactory is used to obtain cached streams sharing the same pool of memory.

        int size10mb = 10 * 1024 * 1024;
        int size1mb  = 1024 * 1024;
        InputStream is = null; // <-- your original input stream
        OutputStream os = null; // <-- your original output stream
        
        CachedStreamFactory streamFactory = new CachedStreamFactory(size10mb, size1mb);
        
        //--- Reuse the input stream ---
        CachedInputStream cachedInput = streamFactory.newInputStream(is);
        
        // Read the input stream the first time
        System.out.println(IOUtils.toString(cachedInput));
        // Read the input stream a second time
        System.out.println(IOUtils.toString(cachedInput));
        
        // Release the cached data, preventing further reuse
        cachedInput.dispose();

        //--- Reuse the output stream ---
        CachedOutputStream cachedOutput = streamFactory.newOuputStream(os);
        
        IOUtils.write("lots of data", cachedOutput);
        
        // Obtain a new input stream from the output
        CachedInputStream newInputStream = cachedOutput.getInputStream();
        
        // Do what you want with this input stream

Enhanced XML Writing

The Java XMLStreamWriter is a useful class, but it is a bit annoying to use when you are not always writing strings.  The EnhancedXMLStreamWriter adds convenience methods for primitive types and other common cases.

        Writer out = null; // <-- your target writer
        
        EnhancedXMLStreamWriter xml = new EnhancedXMLStreamWriter(out);
        xml.writeStartDocument();
        xml.writeStartElement("item");
        
        xml.writeElementInteger("quantity", 23);
        
        xml.writeElementString("name", "something");
        
        xml.writeStartElement("size");
        xml.writeAttributeInteger("height", 24);
        xml.writeAttributeInteger("width", 26);
        xml.writeEndElement();

        xml.writeElementBoolean("sealwrapped", true);

        xml.writeEndElement();
        xml.writeEndDocument();
        
        /* Will write:
          
          <?xml version="1.0" encoding="UTF-8"?>
          <item>
              <quantity>23</quantity>
              <name>something</name>
              <size height="24" width="26" />
              <sealwrapped>true</sealwrapped>
          </item>
         */

More Equality checks

More methods were added to EqualsUtil:

        EqualsUtil.equalsAnyIgnoreCase("toMatch", "candidate1", "candidate2");
        EqualsUtil.equalsAllIgnoreCase("toMatch", "candidate1", "candidate2");
        EqualsUtil.equalsNoneIgnoreCase("toMatch", "candidate1", "candidate2");

Discover More Features

A few more features and updates were made to the Norconex Commons Lang library.   For more information, check out the full release notes.

Download your copy now.

 

 

During the development of our latest product, Norconex Content Analytics, we decided to add facets to the search interface. They allow for exploring the indexed content easily. Solr and Elasticsearch both have facet implementations that work on top of Lucene, but Lucene also offers simple facet implementations that can be used out of the box. Because Norconex Content Analytics is based on Lucene, we decided to go with those implementations.

We’ll look at those facet implementations in this blog post, but first, let’s talk about a new feature of Lucene 4 that is used by all of them.

(more…)

Norconex Commons Lang 1.4.0 was just released.

New features:

  • New DataUnit class to perform data unit (KB, MB, GB, etc.) conversions, much like the Java TimeUnit class (a brief hypothetical usage sketch follows this list).
  • New DataUnitFormatter to format any data unit into a human-readable format, taking into account locale and decimals.
  • New percentage formatter.
  • New ContentType class to represent a file media/MIME type and obtain its usual name, content family, and file extension(s).
  • New ContentFamily class to represent a group of files of similar content types. Useful for content categorization.
  • New ObservableMap class.
  • More…
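
As a rough illustration of the DataUnit and DataUnitFormatter classes, here is a minimal sketch. It assumes the API closely mirrors Java’s TimeUnit; the constant names, conversion methods, and constructor arguments used below are assumptions, so refer to the Javadoc for the exact signatures.

// Convert 2 kilobytes to bytes (assumed to print 2048)
System.out.println(DataUnit.KB.toBytes(2));

// Convert 3 megabytes to kilobytes (assumed to print 3072)
System.out.println(DataUnit.MB.toKilobytes(3));

// Format a raw byte count into a human-readable string using a
// locale and two decimals (might print something like "2.5 MB")
DataUnitFormatter formatter = new DataUnitFormatter(Locale.ENGLISH, 2);
System.out.println(formatter.format(2621440, DataUnit.B));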

Download it now.

Web site: /product/commons-lang/


For a client last year, we had to upgrade some old Lucene code to Lucene 4. Lucene 4 was a rather large release, and there are many aspects to be aware of when upgrading non-trivial code. Let’s take a look at some of them.

(more…)

Norconex Commons Lang is a generic Java library providing useful utility classes that extend the base Java API.  Its name is shamelessly borrowed from Apache Commons Lang, so people can quickly assume what it’s about just by its name.  It is by no means an effort to replace Apache Commons Lang; quite the opposite: we try to favor Apache Commons libraries whenever possible.  Norconex uses this Commons Lang library as a catch-all, providing all kinds of generic utilities, some of which have extra dependencies over the base Java API.  While this library is used by Norconex in its enterprise search projects, it is not tied to search and can be used in any context.

The following explores some of the key features it offers as of this writing. (more…)