
This year, the annual KMWorld Conference took place from November 16–19 as a virtual event under the title “KMWorld Connect 2020.” It included the co-hosted conferences Enterprise Search & Discovery, Taxonomy Boot Camp, and Text Analytics Forum. I was privileged to join the conference for the first time.

Organizing one of the largest knowledge management conferences online must have been quite an undertaking. The web conferencing platform, PheedLoop, allowed participants to attend sessions from the four conferences as they happened and let exhibitors chat with attendees and answer their questions at virtual booths. Audience questions appeared in real time during presentations, and presenters answered them at the end of their talks.

Obviously, one of the major shortcomings of online conferences is the lack of live face-to-face communication. Despite the virtual nature of the event, the quality of the content was above my expectations.

I would like to touch on some of the topics covered at the conference.

Taxonomy & Ontology

The knowledge management industry has recently made great advances in taxonomies. As stated in many presentations, applied taxonomies have become commonplace at enterprises and in many cases have progressed into more complex knowledge organization systems such as ontologies. According to Wikipedia, “an ontology encompasses a representation, formal naming, and definition of the categories, properties, and relations between the concepts, data, and entities that substantiate one, many, or all domains.”

Knowledge Graph & Graph DB

Probably every fourth presentation at KMWorld mentioned knowledge graphs or presented a business case built on one. Sometimes these were referred to as enterprise knowledge graphs, or EKGs (not to be confused with the abbreviation for electrocardiogram), which reflects the industry's enthusiasm for knowledge graphs.

In recent years, knowledge graphs have become more accessible to enterprises through advances in technology, specifically in implementing graphs more easily in graph databases, which are now capable of federating different content sources under one roof, whether behind a firewall or in the public domain. It would be appropriate to mention that Norconex has recently made available its new open-source crawlers for Neo4j—one of the larger names in the field of graph databases. Here you can find an example of Norconex’s crawlers being used to import wine varietal data from the web into a Neo4j graph database.

Semantic Search & ML

Ontologies implemented as knowledge graphs are key enabling technologies behind semantic search. Introduced by Google and currently gaining traction at enterprises, semantic search is a search method that infers user intent from context and content to generate and rank search results. A semantic search–capable system provides results that are relevant to the search phrase. The context of the searched words, combined with the content and context of the user's browsing history and profile, helps the search engine decide which results best satisfy the query. I liked an example illustrating semantic search that came up during one of the panel discussions: two seemingly very close terms, “black dress” and “black dress shoes,” produce totally different results when searched on Google. This is not easily achievable with a regular keyword-based search technique.
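The intuition can be sketched with a toy example. If queries and documents are mapped into a shared vector space (the embedding values below are invented for illustration; a real semantic search engine would obtain vectors from a trained model), ranking by vector similarity separates the two intents that raw keyword overlap would conflate:

```python
import math

# Toy embedding vectors: 2 dimensions standing in for "dresses" vs. "footwear".
# These values are made up for illustration only.
EMBEDDINGS = {
    "black dress":       [0.90, 0.10],
    "black dress shoes": [0.20, 0.95],
    "evening gown":      [0.85, 0.15],
    "oxford shoes":      [0.10, 0.90],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(query, docs):
    """Return the document whose embedding is closest to the query's."""
    return max(docs, key=lambda d: cosine(EMBEDDINGS[query], EMBEDDINGS[d]))

docs = ["evening gown", "oxford shoes"]
print(best_match("black dress", docs))        # evening gown
print(best_match("black dress shoes", docs))  # oxford shoes
```

Despite sharing two of their three words, the two queries land on different products, which is exactly the behaviour a plain keyword match struggles to reproduce.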

The recent advances in machine learning have considerably improved the abilities of algorithms to analyze text and other types of unstructured content. Creative use of advanced machine learning techniques has proven effective for supplementing semantic search. There were a few interesting presentations at KMWorld covering this topic to a great extent.


Overall, KMWorld Connect 2020 hosted a great many case studies, interesting discussions, and amazing insights, including the introduction of new resources, the sharing of tools and strategies, learning from colleagues, and much more.

Norconex looks forward to participating in the event next year.

This year I was given the privilege of attending my first KubeCon + CloudNativeCon North America 2020, virtually. The event spanned four days of virtual activities such as visiting vendor booths, learning about CNCF projects, and exploring the advancement of cloud native computing.

The keynote started by paying respects to the late, legendary Dan Kohn. Kohn's influence changed everything from how we shop online to how we research on the internet, and he paved the way for the evolution of The Linux Foundation and the Cloud Native Computing Foundation, supporting the creation of sustainable open source ecosystems for many generations to come.

There were glitches while live streaming from the virtual conference platform, which was to be expected: streaming an event of this size amounts to a heavy real-time load test of a kind no production environment would welcome. Fortunately, on-demand recordings of the presentations are now available.

Slack channels such as #kubecon-mixandmingle could be joined to chat with other attendees about KubeCon-related topics. This provided a great way to connect with the KubeCon audience virtually, even after the event was over.

KubeCon offered many 101 learning and tutorial events about the services CNCF projects provide and how they can help with the three main pillars I am involved with daily: automation, DevOps, and observability. These pillars are usually implemented in parallel; for instance, continuous integration and continuous deployment require automation to build the pipeline, which involves writing code and understanding operations architecture. Once services are deployed, observability is needed to monitor them and ensure smooth delivery to users. Many CNCF projects support this whole development flow, from committing code that gets deployed to cloud services through to monitoring capabilities for secured mesh services.

At Norconex, our upcoming Norconex Collector version 3.0.0 could be used with a combination of containerd, Helm, and Kubernetes, with builds and deployments automated via Jenkins. One way to get started is to package the Norconex Collector and Norconex Committer into a runnable container image using a tool such as Docker, and use that image for development and testing builds. After figuring out how to build the container image, I had to decide where to store it, so that the Kubernetes cluster could pull the image from a registry and run it as a Kubernetes CronJob on a schedule. The CronJob creates a Job, which in turn creates a Pod that runs the crawl using the Norconex Collector and commits the indexed data. Finally, I chose Jenkins as the build tool for this experiment, to help automate updates and deployments.
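As a rough sketch of the packaging idea (the base image, directory layout, and launch script below are assumptions for illustration, not an official distribution layout), the Dockerfile could look like this:

```dockerfile
# Sketch only: base image and paths are assumptions.
FROM openjdk:11-jre-slim

# Copy an unpacked Norconex HTTP Collector distribution (with its
# Committer libraries) and a crawl configuration into the image.
COPY norconex-collector-http/ /app/collector/
COPY crawler-config.xml /app/config/crawler-config.xml

WORKDIR /app/collector

# Launch a crawl with the bundled configuration when the container starts.
ENTRYPOINT ["./collector-http.sh", "-a", "start", "-c", "/app/config/crawler-config.xml"]
```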

Below are steps that provide an overview for my quick demo experiment setup:

  1. Demo use of the default Norconex Collector:
    • Download the Norconex HTTP Collector with the Filesystem Committer. Other Committer choices can be found at Norconex Committers
    • Build container image using Dockerfile
    • Setup a Git repository file structure for container image build
    • Guide to build and test-run using the created Dockerfile
      • Demo set up locally using Docker Desktop to run Kubernetes
        • Tutorials for setting up local Kubernetes
  2. Determine where to push the container image; can be public or private image registry such as Docker Hub
  3. Create a Helm Chart template using the Helm Chart v3
    • Demo will start with default template creation of Helm Chart
    • Demo to use the Kubernetes Node filesystem for persistent storage
      • Other storage options can be used, for instance, in AWS use EBS volume or EFS
    • Helm template and yaml configuration
      • cronjob.yaml to deploy a Kubernetes CronJob that creates a new Kubernetes Job to run on a schedule
      • pvc.yaml to create the Kubernetes persistent volume and persistent volume claim that the Norconex Collector crawl job will reuse on subsequent recrawl runs
  4. Simple build using Jenkins
    • Overview of Jenkins build job pipeline script
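To make the CronJob step more concrete, the cronjob.yaml could look roughly like the following (the names, schedule, image, and mount path are placeholders for this sketch, using the batch/v1beta1 CronJob API current at the time):

```yaml
# Sketch of cronjob.yaml; all names and values are placeholders.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: norconex-crawl
spec:
  schedule: "0 2 * * *"          # run the crawl daily at 02:00
  concurrencyPolicy: Forbid      # skip a run if the previous crawl is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: collector
              image: registry.example.com/norconex-collector:latest
              volumeMounts:
                - name: crawl-store
                  mountPath: /app/workdir   # keeps crawl state between recrawls
          volumes:
            - name: crawl-store
              persistentVolumeClaim:
                claimName: norconex-crawl-pvc
```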

I hope you enjoyed this recap of KubeCon!


More details on the code and tutorials can be found here:



This was my first year joining the open-road Elastic{ON} Tour 2019 event in Toronto on September 18, 2019. My experience at this event was fully charged with excitement from meeting with Elastic architects, operations folks, security pros, and developers alike.

The event was hosted at The Carlu in downtown Toronto. In the morning, the opening keynote was presented by Nick Drost, Senior Director of Elastic, on search solutions such as app search, site search, and enterprise search, security using SIEM, and more. One of the most exciting keynote updates was about using Elastic Cloud on Kubernetes to help simplify processes of deployment, security, scaling, upgrades, snapshots, and high availability.

The next presenter, Michael Basnight, Software Engineer at Elastic, provided an Elastic Stack roadmap with demos of the latest and upcoming features. Kibana has added infrastructure and logs UIs, becoming much more than just the main user interface of the Elastic Stack. He introduced Fleet, which provides centralized configuration deployment, Beats monitoring, and upgrade management. Frozen indices allow for more index storage by keeping indices available without taking up heap memory until they are requested. He also highlighted advanced machine learning analytics for outlier detection, supervised model training for regression and classification, and an ingest prediction processor. Elasticsearch performance has increased by employing Weak AND (also called “WAND”), providing improvements as high as 3,700% for term search and between 28% and 292% for other query types.

Another feature added to the Elastic Stack is advanced relevance scoring to help boost documents at query time, using rank_features and distance_features. The new Geo UI uses map layers.
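For illustration, boosting by a rank_features field is done with the rank_feature query at search time (the field name below is hypothetical):

```json
{
  "query": {
    "rank_feature": {
      "field": "topics.elasticsearch",
      "saturation": {}
    }
  }
}
```

The saturation function maps the stored feature value into a bounded score contribution, so documents with stronger features rank higher without unbounded boosts.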

One of the most interesting new Beats to watch for is Functionbeat, a serverless data shipper that can subscribe to AWS SQS event topics and CloudWatch Logs; it provisions an AWS Lambda function to ship data to Elasticsearch or Elastic Cloud Enterprise.

Elastic's lightweight data shippers, the Beats (Filebeat for log files, Metricbeat for metrics, Packetbeat for network data, Winlogbeat for Windows event logs, Auditbeat for audit data, Heartbeat for uptime monitoring, and the latest, Functionbeat, for serverless shipping), can be complemented with Norconex open-source products. The Norconex HTTP Collector or Norconex Filesystem Collector can crawl metadata from the web or a filesystem, and the open-source Norconex Elasticsearch Committer can then push that data to an Elasticsearch index, whether in Elastic Cloud Enterprise or an on-premises Elastic Stack. Norconex can help collect metadata from enterprise web architectures or filesystems for quick searching and relevant results.

Packed into the morning session, Jason Rhodes, Senior Software Engineer at Elastic, presented on unified observability, combining logs, metrics, and traces.

The afternoon session, Search for All with Elastic Enterprise Search and a Site Search demo and feature walkthrough, was presented by Diane Tetrault, Director of Product Marketing at Elastic. The latest UI gives users the ability to configure the content sources they search and to connect their own data sources. Elastic Common Schema, introduced as an open-source specification, defines a common set of document fields for data ingested into Elasticsearch.

The Security with Elastic Stack session was presented by Neil Desai, Security Specialist at Elastic. He discussed the latest security capabilities to enable analysis automation to defend from cyber threats.

The Kibana and geo update features in Canvas and Elastic Maps were presented by Raya Fratkina, Kibana Team Lead at Elastic. Learning about ways to use these functionalities makes data more actionable.

I also learned tips at Elastic Architecture at Scale, a presentation by Artem Pogossian, Solutions Architect at Elastic. He discussed scaling from local laptops to multi-clusters and cross-clusters using case deployments.

A useful new feature in machine learning and analytics was introduced by Rich Collier, Solutions Architect and ML Specialist at Elastic. He demonstrated a use case using data frames, also called transforms, a feature that allows transformation of an existing index into a secondary, summarized index. Rich showed in a demo a possible use case from a digital retailer, using time series modeling to look for anomalies and forecast a shopper's purchases, integrating a Canvas UI designed in Kibana to build real-time data models. It was amazing to see, in the demo, the ability to detect possible fraudulent purchases without having to be a data science expert.

Finally, after all these informational sessions, thanks go to the Elastic event organizers for adding a closing happy hour, where I grabbed a drink with fellow attendees and Elastic folks. It was a great way to close a very full day of learning. I look forward to next year's Elastic{ON} Tour.

Event pass
Elastic{ON} Tour 2019 in Toronto event pass.
Elastic Team
On the right, Osman Ishaq of Elastic at the Ask Me Anything booth
Raya Fratkina, Kibana Team Lead at Elastic
Happy hour closing
Closing happy hour, drink with Elastic folks and other attendees.

Amazon Web Services (AWS) and the Canadian Public Sector organized another excellent Public Sector Summit on May 15, 2019. AWS hosted the first such summit in Ottawa last year, but this year’s event attracted a much larger crowd. Thousands of attendees filled Shaw Centre’s entire third floor.

In the keynote sessions, it was great to hear Alex Benay (deputy minister at the Treasury Board of Canada) talk about the government’s modern digital initiative. He discussed the approach, successes, and challenges of the government’s Cloud migration journey. Another excellent speaker was Mohamed Frendi (director of IT, innovation, science, and economic development for the government of Canada). He covered Canada’s API Store and how it uses the Cloud to make government data more accessible.

The afternoon session was led by Darin Briskman, an AWS developer evangelist. He talked about Amazon's self-service analytics tool, AWS Lake Formation, which combines data from multiple sources to resolve data-driven challenges in a timely manner. Machine learning and AI help in making informed decisions and solving problems. This service is a great fit for Norconex's open-source crawler products, the HTTP Collector and Filesystem Collector, which fetch data from unstructured data sources to make it easy to consume. Collected content and metadata can be stored natively in various existing repositories (or formats), including AWS-specific ones such as Amazon Elasticsearch Service, Open Distro for Elasticsearch, and Amazon CloudSearch, as well as many others, such as relational databases, Apache Solr, Google Cloud Search, Neo4j, Microsoft Azure Search, Lucidworks, IDOL, and more.


The diagrams below provide further explanation. The one showing the crawling spider is particularly exciting, because Norconex crawlers have much potential to help in this area.  See available Norconex Committers.



AWS Public Sector Summit Event Pass

Selfies with Darin Briskman, Developer Evangelist, AWS and Stevan Beara, Solutions Architect Manager, AWS.



10 Year Anniversary


Letter from the President,


Did you know that Norconex turned 10 this year? That's right: Norconex was founded in 2007, and I could not be prouder to be president of Norconex as we cross this important milestone.

Our company’s numerous achievements would not have been possible without our amazing employees. They are smart, committed, loyal, and all have client satisfaction at heart. Having such a great team is precious beyond words.

I am also taking this occasion to thank every one of you, customers and partners, for having played a vital role in Norconex's success. We can't thank you enough for choosing our services and products and making us the success that we are.

We plan to keep growing our relationship in the years to come and continue to offer you the best.


We are looking forward to the next 10 years!




Pascal Essiembre



In the business age we all operate in today, the overall landscape sees shorter company lifecycles and far more exits, occurring frequently and rapidly. Turning 10 is an enormous accomplishment for any company. Successful organizations know that many factors play a role: hard work, team dynamics, dedication, and perseverance.

In fact, some of the key principles to longevity have helped Norconex navigate throughout the years.

Having gotten its start as a small professional services company 10 years ago, Norconex has since established its footing as a specialist in enterprise search products and services. We've also grown into providing professional support to customers for enterprise search and crawling solutions. As the cloud became more secure and gained in popularity, Norconex began offering SaaS (Search as a Solution) and implemented our first fully hosted application.

Norconex also launched two search/discovery analytics products:

With thousands of users, Norconex made its mark in the open-source space by launching universal filesystem and web crawlers that integrate with virtually any search engine or repository (such as Solr, Elasticsearch, HP IDOL, Azure Search, AWS CloudSearch, etc.).

Allowing us to integrate seamlessly are two elite products from our line known as:

As industries changed and evolved over time, we saw an important shift toward open-source search solutions. With that change, Norconex has helped organizations convert from commercial architectures to open source. When Google announced the discontinuation of its popular Google Search Appliance, our company began consulting with GSA customers to help migrate their search needs to other platforms.

With the successful operation of our company over the past 10 years and the implementation of key products and services, our organization has taken steps to give back to the community in several forms. Since 2015, we've been supporting the women's soccer movement in Canada and have become a proud sponsor of several girls' soccer teams near our headquarters.

The journey has been a fun ride with many lessons, successes, and challenges along the way, but we wouldn't be here without our amazing staff and clients. Thank you, and here's to the next 10 years!

Somewhere between the White House and the Trump International Hotel, between the anti-Trump and anti-pipeline protests, there was another peaceful gathering in Washington, D.C. last week: KMWorld 2016!

This was the 20th anniversary of the event. Norconex attended the Enterprise Search & Discovery stream, and it was obvious that the event has matured over its 20 years, with quality information sessions and strong vendor participation.

Talking search, it was mentioned in several sessions that users want their search to “work like Google.” With Google employing tens of thousands of search-dedicated employees and the average company having fewer than one person dedicated to the same, it is no wonder that end users are sometimes left with a product that doesn't fully meet their expectations.


The White House


Trump International Hotel

In many cases, users abandon their search application altogether and manually look for the content they need. This can cost a company in reduced productivity and, in the case of online retailers, lost revenues. But there's hope! With advancing technologies and dedicated vendors and service providers to work with, any company, no matter the size, can deploy a solution that meets its needs.

Some of the key areas of discussion I’d like to touch on in this article are Open Source, Machine Learning, the Cloud, User Interface, and Analytics.


Open source continues to expand and is ever more widely accepted as a viable option for organizations of every size. This can be to save costs on licensing fees, but also to gain more flexibility in how your search is developed. In some cases, open-source search is built alongside other products that include search functionality (like SharePoint) to enhance the search experience beyond the standard offering.


Machine learning has also come a long way, and a few vendors were on hand to show off their products. I was impressed with one product demonstration in which the search results were displayed in an easily viewable chart format rather than a list. However, it was said at the event that statistics show only 60-70% accuracy for these tools, and that they need very high query volumes to reach the higher end of that range. This means only search applications with thousands or millions of hits get the full advantage of artificial intelligence today. If 60-70% relevancy is not enough, you will likely need some good old-fashioned human intervention to get the results to meet your expectations.

Also, if your organization is indexing all content, you may want to rethink this strategy and determine what actually requires indexing. It was said that 60% of business data is not business data at all, but things like invitations to golf tournaments, pictures from the annual holiday party, duplicate documents, or general user content such as personal emails that likely do not need to be included in your search. A content analytics tool can help you narrow down what content needs to be indexed, improving the relevancy of search results.


Another hot topic was moving your data and search application to the Cloud. The fear with moving to the Cloud had always been whether your data would be secure. Much like with open source, organizations of every size are now embracing a move to the Cloud. Many smaller companies with limited IT resources are realizing that the big Cloud providers have security teams in place that can make their content more secure than hosting it on premises.

The newer challenge around the Cloud is for multinational organizations with data in countries where data privacy laws are in place, such as Europe's Safe Harbour and, more recently, Russia's data protection laws. This legislation can regulate privacy, where data can be stored, and how and whether that data can travel outside the country. Multinationals need a strategy to work within these laws, potentially piecing together various Cloud providers with data centres in the countries in question, or using a hybrid of Cloud and on-premises hosting.


Once you've built out your search infrastructure, what your end users see is the user interface and the results displayed for their queries. Rather than having a “search page,” more and more companies are integrating the search UI into their core user applications so that users don't have to “search for the search.”

If you include a user feedback option, the best participation was recorded when the feedback control was placed near the search UI, but you will often still get limited responses. This is where search analytics comes into play: taking user feedback (if available) along with information about your users' search behaviours to keep a pulse on how search is performing and whether users are finding the content they are looking for. A good search analytics product can help you organize your search data in a dashboard view and provide an overall health check, giving you quick insights into where your search is working and where it needs intervention to keep running at an optimal level.

Regardless of whether you implement Search in-house or hire a team of experts, with all of the advancement in Search technology, you can put together all of the right pieces to provide a great Search tool for your employees and customers.

Since the first FIFA Women’s World Cup in 1991, interest in playing and watching women’s soccer has only increased. Around the world, more girls than ever before are playing the beautiful game that not only provides obvious health benefits but also helps boost girls’ confidence and self-esteem at the time in their lives when they need it most.

Norconex is proud to renew its sponsorship of women’s soccer teams in the Association de Soccer de Hull (Gatineau, Quebec, Canada) for the 2016 season. In addition to renewing its support for five local teams with players between 10 and 16 years of age, Norconex now sponsors two competitive women’s teams (U12 and U15).

At the upcoming women’s soccer tournament in this year’s Summer Olympics, girls will be able to cheer for their soccer idols once again, and Norconex will be cheering along with them.

In collaboration with .

Solr committers present at the event

This year's Lucene/Solr Revolution conference was held in Austin, Texas, on October 15-16, 2015. It gathered around 600 Lucene and Solr enthusiasts from 26 countries, including many of the Solr committers. Pascal Dimassimo and Pascal Essiembre attended the event on behalf of Norconex. While the talks were varied, there were a few recurrent themes, such as search relevance, analytics, and infrastructure scaling. The following relates our experiences with the conference sessions we attended. These talks should become available for viewing on YouTube shortly.


There were at least 10 talks related to the topic of relevancy alone. They offered ideas on how to improve relevancy, including intent detection, using machine learning principles, fuzzy matching, and more.

Of those standing out, Trey Grainger (co-author of Solr in Action) showed us how he created a knowledge graph built on top of Solr to improve results.

Another noteworthy presentation came from Michael Nilsson and Diego Ceccarelli of Bloomberg, who broke their documents into features and used a matrix to decide the ranking of each feature. They reminded us there is nothing wrong with making multiple passes to Solr to better serve search requests.


Whether it was analyzing search logs or user search behaviors, developers are working hard to build powerful analytics capabilities within Solr. Kiran Chitturi of Lucidworks suggested an easy way to capture user events using the Snowplow JavaScript event tracker. He also highlighted the potential benefits of sending those events to their new Lucidworks Fusion product.

Kiran Chitturi discussing events processing in Lucidworks Fusion

Yonik Seeley, co-creator of Solr and now Solr Dude at Cloudera, presented the new Solr JSON Facet API. This new API (already available in Solr 5.3) has been completely rewritten for Solr 5 and allows for first-class analytics support. You can now easily have nested facets, metrics, and statistics, similar to aggregations in Elasticsearch. According to the numbers presented, this new facet module performs much better than the original Solr facet module.
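As a small illustration (the field names below are hypothetical), a nested facet request to the JSON Facet API can compute a metric per bucket in a single call:

```json
{
  "query": "*:*",
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "facet": {
        "avg_price": "avg(price)"
      }
    }
  }
}
```

Sent as a JSON request body (or via the json.facet parameter), this would return the top cat terms, each with an average price, in one request.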

Erick Erickson presented the new Solr Streaming Aggregation API (also available in Solr 5.3). Solr has never been very good at accessing large result sets because of deep-paging issues and memory requirements. This new API builds on the existing exporting capabilities to stream complete sorted data out of SolrCloud, opening new possibilities such as memory-efficient set operations (union, intersection, complement, join, and unique). It also introduces worker collections on the SolrCloud cluster to handle this processing. The goal is to build a general-purpose, distributed computation framework right on top of Solr. This is still a work in progress, and the next speaker, Joel Bernstein, showed us what to expect next: leveraging the Streaming Aggregation API and the JSON Facet API, Solr 6 should offer a very powerful feature, SQL queries over Solr!
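To give a flavour of the API (the collection and field names here are made up), a streaming expression wraps a sorted export in a decorator such as unique:

```
unique(
  search(products,
         q="*:*",
         fl="id,brand",
         sort="brand asc",
         qt="/export"),
  over="brand")
```

Because the inner stream is sorted on brand, unique can drop duplicates as tuples flow past, without buffering the whole result set in memory.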

For those using Spark, Lucidworks' Timothy Potter introduced the tool they've built to use Solr as a Spark SQL DataSource. This allows Solr to be used with an existing Spark analysis pipeline. The tool also permits writing data into Solr from Spark.

Infrastructure Scaling

Shenghua Wan and Rahul Gupta sharing their experiments

Shenghua Wan and Rahul Gupta from WalmartLabs described their experiences using different technologies to perform distributed indexing.  They experimented with MapReduce, Hadoop and others to distribute and enhance their XML data across several Solr shards, merging those shards in the end.

Riak’s developer Fred Dushin showed us Yokozuna, their new implementation of Riak Search. Riak is a distributed key/value store and with Yokozuna, Solr brings search to Riak. But Yokozuna also brings something to Solr. Because of its distributed nature, it makes it possible to use Riak to distribute Solr instead of using SolrCloud.

Mark Miller, Software Engineer at Cloudera, told us that open-source technologies have taken over the search ecosystem, especially Solr and Lucene. In the future, those search engines will be integrated with multiple systems. Cloudera wants to integrate Solr with Hadoop. Miller claims that at the moment, Solr search at scale is still flaky, even with SolrCloud, though he admitted that it is good enough for general usage. According to Miller, Hadoop can help, so his firm created Cloudera Search, which uses Solr and Hadoop together.

Other Topics

The aforementioned topics were not the only ones covered at the conference. There were others of varying technicality. Toke Eskildsen, representing the State and University Library in Denmark, gave a low-level and very interesting talk about facet optimization. He demonstrated the code improvements he made to improve Solr facet performance and achieve impressive benchmark results.

Pascal Essiembre enjoying Austin

David Smiley, who has long been involved in all things related to Solr geospatial research, showed us the latest work on spatial 2-D faceting, also known as heat maps. He also took the time to retrace the history of various geospatial functionalities in Solr and Lucene.

We've only scratched the surface of the Lucene/Solr Revolution 2015 conference proceedings. We also thoroughly enjoyed the hospitality of the city of Austin, a community that offered a warm welcome and many wonderful sights. We hope our experiences stimulate further interest in attending future conferences, and we welcome inquiries about our time in Austin.


In the lead-up to the FIFA Women's World Cup 2015 starting in Canada on June 6th, many young girls are eagerly waiting to see their favorite players compete for their country. Soccer (or “football” outside North America) has been the most practiced sport among Canadian kids aged 5 to 14 for over a decade now. Girls represent over 40% of these young players.

Norconex is proud to help attract and retain even more girls in the world's most beloved sport by sponsoring five girls-only teams, from U10 to U14, that are part of the Association de Soccer de Hull.

We can’t predict who will be the next Christine Sinclair, but we hope an increasing number of girls will enjoy soccer and even dream to be on the national team one day.

If you’re in Canada this June (or early July), make sure you have your World Cup tickets. Matches will be played in 6 cities across the country.

Go girls!


ASH U12 Girl Soccer Team

ASH U12 D1 Girl Soccer Team


GTEC, Canada's Government Technology Event, brought another exciting year. As usual, there were many exciting keynotes, presentations, panel discussions, and informative vendor exhibitors.

GTEC offers a great opportunity for buyers and clients to get out of the office and personally talk to vendors. It’s also an opportunity for vendors to talk to clients about their products and services and gather leads. GTEC is the one event that gets all the key parties under one roof.

Enterprise Search – Still very much a core issue within the GoC

At the Norconex booth, we had the opportunity to talk to many government employees. When we discussed what we do, the general response appeared to support the fact that search is still absolutely imperative for knowledge workers within the Government of Canada.

Unfortunately, though, the reality of public service doesn't offer that same support. Search seems to be low on the list of priorities and doesn't appear to be getting the attention it deserves. The number of public servants who indicated they were dissatisfied with the quality of their internal search was surprising.

They understood the importance of their document management systems and why it was necessary to keep stored information organized. However, they emphasized the need to find information without knowing exactly where it is located. They wanted more attention spent on how to get content “out” in order to leverage that information, enabling them to do their jobs more efficiently.

GTEC 2014 iPad draw

And the winner is…

Norconex is pleased to announce that the winner of the iPad mini is Douglas North from Shared Services Canada. We were lucky enough to have an employee from Canada Revenue Agency pull the winning ballot.