
2013 KM White Paper Released

19 Jul


Post by David Hobbie, Goodwin Procter and Blogmaster, ILTA KM Blog

The 2013 KM White Paper “Knowledge Management: Intelligent Business At Its Best” has been released! Organized by Mary Panetta of ILTA’s KM Peer Group Steering Committee, this annual publication offers in-depth looks at a wide variety of traditional and cutting-edge knowledge management topics.

I was particularly impressed by the scope of topics in this issue. The articles collectively demonstrate KM’s centrality in addressing new legal business issues such as pricing, legal project management, and big data, as well as the vitality of new approaches to traditional knowledge management concerns such as precedents management, document automation, and portals.

In “KM Professionals: A Natural Fit for LPM,” Lisa Gianakos of Pillsbury Winthrop Shaw Pittman LLP shares the results of surveys of legal project management (LPM) initiatives and knowledge management professionals at (mostly large) law firms. She found that a majority of responding firms (75%) had either formal or informal LPM programs, about the same proportion as had KM programs. Where respondents had both LPM and KM programs, KM was involved in LPM 59% of the time. Along with other analysis of LPM in law firms, she shares many (anonymized) comments about how LPM programs got started and how KM interfaces with, supports, and sometimes manages LPM programs.

In “The Pricing Professional’s KM Toolkit,” Chris Emerson and Amy Wu of Bryan Cave LLP argue that professionals responsible for pricing and budgeting must understand a firm’s KM assets in order to excel at their work. (I have been making the same argument from the other side: that KM professionals have a tremendous amount to contribute to pricing and budgeting efforts.) The authors cover key KM resources, most notably matter experience databases, and how they can be leveraged specifically for pricing work. They also reveal another impressive Bryan Cave innovation: custom software built around a trainable, probability-based engine that greatly reduces the time required to analyze historical time entries and map them to a phase-task coding framework, much as predictive-coding software does in eDiscovery.
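The article does not detail how Bryan Cave built this tool, but the general technique it describes (train a probabilistic text classifier on already-coded entries, then suggest codes for new narratives) can be sketched in a few lines. The snippet below is purely illustrative: the sample narratives, the UTBMS-style codes, and the choice of a naive Bayes model are my assumptions, not the firm's implementation.

```python
# Purely illustrative sketch: train a probability-based classifier on
# already-coded time entries, then suggest phase/task codes for new ones.
# This is NOT Bryan Cave's actual software; the narratives and the
# UTBMS-style codes below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_narratives = [
    "telephone conference with client regarding discovery schedule",
    "draft and revise motion to dismiss",
    "review documents produced by opposing party",
    "prepare for and attend deposition of fact witness",
]
training_codes = ["L190", "L250", "L320", "L330"]  # illustrative litigation codes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(training_narratives, training_codes)

# New, uncoded entries get a suggested code plus a probability, so
# low-confidence suggestions can be routed to a human reviewer.
new_entries = ["revise responses to second set of interrogatories"]
suggested = model.predict(new_entries)
confidence = model.predict_proba(new_entries).max(axis=1)
print(list(zip(new_entries, suggested, confidence)))
```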

In “Big Data, Predictive Analytics and Social Consumerization: Big Hype or Big Opportunity,” KM Distinguished Peer Eric Hunter of Bradford & Barthel and Spherical Models argues that the legal industry needs to take a lesson from the social consumer companies’ use of predictive data analytics. He sees opportunities for improvements in data management, staffing, pricing, and client service. For instance, his firm uses data analytics to assess attorney performance for specific personal injury defense clients, taking into account factors such as the level of injury, doctor involved, and the like. They hope to move towards outcome prediction, both in terms of settlement payouts and litigation costs.

In “Another Look at Precedent Management,” Boston-area colleague Marybeth Corbett looks at precedent systems and WilmerHale’s efforts to incorporate cutting-edge document drafting and assembly tools into its practice. I agree with Marybeth that effective precedent systems are nothing to be ashamed of these days! She includes a useful set of key questions to ask about a precedent management blueprint.

In “Document Factories: Building Document Automation Tools,” Anthony Kikuta of Wilson Sonsini Goodrich & Rosati lays out detailed criteria for selecting document assembly and automation tools. He addresses the range of features available in packages such as Contract Express and spells out how to get the most out of your document assembly tools.

In “KM Standards In Practice,” KM PG Steering Committee member Andrew Baker and Dustin Robinson, both of Seyfarth Shaw LLP, relay their experience with the document analytics tool “KM Standards,” summarizing it as a powerful but imperfect tool, the “Swiss Army Knife With A Slippery Handle” of precedent work. (Another firm’s experience with this technology was recently addressed in an ILTA webinar covered here.) They see a very broad range of potential uses but have focused on three: creating clause/precedent collections, client-specific content management, and benchmarking. They share valuable lessons about how to work with this tool to increase client value.

In “Using Design To Improve KM,” KM practitioners Andrea Alliston and April Brousseau of Stikeman Elliott and Tangledom consultant (and noted design expert) Kate Simpson describe a “lawyer-centric design process” that, in their “CLE Manager” case study, significantly improved software development. They argue that the goal of the design process should be to convey to the IT designers a rich understanding of end-user needs and tasks so that “at last we [speak] the same language.” KM practitioners are uniquely situated to convey that understanding. Helpful charts contrast traditional software development with a design-centric approach.

In “Experience Matters at Dechert,” Kelly Breslin and Julie Ketover cover their firm’s development and incipient rollout of “DechertEXP,” an experience management system. They’ve effectively laid out the importance of tying together the many sources of information, collected at different points in the matter life-cycle, for complete experience coverage. A sidebar with “Top 10 Takeaways” has some succinct and doubtless hard-earned lessons.

Lastly, in “Creating an International Client-Facing Knowledge Website,” Jellienke Stamhuis of Ius Laboris and Richard Lister of Lewis Silkin LLP in England cover their successful efforts to launch Ius Laboris’ international client-facing KM portal addressing human resources (HR) issues. I was especially impressed with how far they have advanced from where they started, and with the personalization feature whereby counsel can select an HR issue, choose several countries, and receive a comparison of the laws on that issue across those countries.

Book Review–Martin White’s “Enterprise Search”

20 May

Post by David Hobbie, ILTA KM Blogmaster and Litigation Knowledge Manager, Goodwin Procter LLP

Long-time digital information management guru (a/k/a “information scientist”) Martin White has authored a comprehensive guide to enterprise search that lays out search’s business and technical context and provides dozens of tips, suggestions, and warnings about traps for the unwary, at a surprisingly affordable price (US$19.99 / CAN$20.99). I don’t want him to raise the price, but I would have paid the cover price for the insights in each of the book’s twelve chapters. (“Enterprise Search,” O’Reilly Media, 2013, ISBN 978-1-449-33044-6, 168 pp.) The book is targeted at a general corporate / corporate IT audience and does not specifically address legal industry needs or vendors.

If you read anything in this book, read chapter 12, “Critical Success Factors.” My experience with search implementations and trials suggests that each of the factors is, as they say, “written in blood.”

Mr. White suggests that enterprise search is moving from a “nice-to-have” to a “need-to-have,” not because of the enterprise search industry’s growth, but because of business decision-support needs and the increasing demands of expanding volumes of digital information. [Ed.: The legal industry has a comparatively high focus on documents and knowledge, so enterprise search is correspondingly more valuable and, I suspect, more prevalent here; see, for instance, the 2010 ILTA KM Survey.] He is not Panglossian about the success of existing implementations, noting that internal clients expect speed and reliability akin to Google’s, which enterprise search cannot deliver. Google, after all, has invested tens of billions of dollars in search and hardware and can leverage hyperlinks and user behavior in ways not available to enterprises.

I really value Mr. White’s focus in much of the book on meeting business needs. He dives into the process of developing user requirements in some depth, addressing a broad range of investigatory techniques to uncover specific information-seeking use cases. Mr. White does not hide his opinion about these techniques; he is skeptical of focus groups, and very fond of individual interviews.

There is much here that is thought-provoking and challenges what may be IT or user orthodoxy. For instance, he does not view successful search as a single all-encompassing application, but as a process that identifies and then meets individuals’ business-related information needs. This idea ties in to the concept of “Search-Based Application” suggested by Sue Feldman and also (more tangentially) by Lynda Moulton. Under this approach, search seeks to address the information needs of a specific business process or scenario, one at a time, rather than implementing one search engine or interface that meets all needs.

A corollary of viewing search as a process is that he suggests staffing a “search support team” in advance of assessing and developing enterprise search, to get the most out of existing search applications and to lay the groundwork for additional search efforts.

Mr. White’s book effectively addresses many aspects of implementing and maintaining an effective search environment, a critical aspect of effective enterprise information management.

ILTA Webinar On Enterprise Search For Smaller Firms

20 Mar

Post by ILTA KM Blogmaster David Hobbie

A webinar one week from now will address a cheap and effective approach to enterprise search, the “killer app” of legal knowledge management. ILTA KM surveys and my own experience confirm that this remarkable tool is moving “down-market,” i.e., to smaller firms.

Title:  Kick-Starting KM with Quick-Start Search

Date & Time:  Wednesday, March 27, 2013 at 7:00 p.m. GMT / 2:00 p.m. EDT / 1:00 p.m. CDT / 12:00 p.m. MDT / 11:00 a.m. PDT

REGISTER online here; no cost for ILTA members

Formal Description:
KM is no longer the purview of large firms. KM can be a competitive tool, and smaller firms want in! But how can a smaller firm, without significant KM resources, kick-start a KM program and score a major win with the firm’s attorneys?

Smaller firms looking to take advantage of the benefits of KM can use enterprise search as the centerpiece of their KM strategy. You could be up and running fairly quickly, and see a high return on your investment. Your attorneys will love it, and you’ll establish enough “credit” to push for other KM initiatives.

Advancements in KM technology point to closer collaboration between CIOs and KM professionals. Come hear Lowenstein’s CIO, Christopher Zegers, and KM Director, Kitty Schweyer, discuss their unique approach to KM in a smaller firm and how they kick-started their KM program with the implementation of a popular enterprise search solution.

Speakers:

Shy Alter is the founder of ii3, where he provides vision and direction to both ii3 and ii3’s clients. Shy also regularly works with ii3’s teams to develop practical solutions to complex knowledge and information management challenges. He helps his clients address their highly competitive and continually changing business environment. Shy is a KM pioneer who speaks regularly on the strategic impact of knowledge management and is recognized as a thought leader in this area. Contact him at salter@ii3.com.

Kathlyn Schweyer has 17 years of experience in the information and knowledge management field. Kitty is currently the Director of Knowledge and Research Services at Lowenstein Sandler and is in charge of the firm’s knowledge activities. Previously, she served as the manager of competitive intelligence at White & Case in New York and as the library manager at Goodwin Procter in Boston, where she was heavily involved in their knowledge management initiatives. Contact her at kschweyer@lowenstein.com.

Christopher Zegers has been solving problems for law firms since 1997. From bringing the Internet to desktops and bringing desktops to iPads, he has guided attorneys through the endless technical changes required to keep firms competitive. Chris has built out new offices and data centers, and has introduced knowledge management and practice support departments to law firms that traditionally grouped these services with IT. He is currently the Chief Information Officer for Lowenstein Sandler. Contact him at czegers@lowenstein.com.

Questions?  Contact Kristy Costello at kristina@iltanet.org or (512) 795-4674

Enterprise Search Implementation, Step 5: Pilot Testing

21 Aug

Guest post by John Gillies, Director of Practice Support, Cassels Brock

This is the fifth in a series of posts about the process of choosing and implementing an enterprise search engine in a law firm. The others have addressed, in order, Establishing the Business Requirements, Picking the Right Search Engine, the Proof of Concept, and Database Cleanup.

In this instalment, I’ll look at the pilot testing that you’ll want to engage in once you have passed through the first four stages. (You can do limited testing while your database cleanup is under way, but you can’t do accurate testing with a group beyond you and your immediate team members until the cleanup is done.)

Your testing has two aspects: the mechanics of the testing and the testing itself (in other words, the “how” and the “what”). I’ll look first at the mechanics.

Mechanics of testing. One of the most important things you can do is to ensure that you have a logical and well-documented test process. It should of course reflect the usual quality assurance (QA) testing done in connection with any software adoption. But it is crucial that your QA process here be tailored to reflect your testing of an enterprise search engine. There are three steps here.

  • First, prepare a set of what you expect to be standard search queries.
  • Next, prepare use cases based on those search queries.
  • Finally, establish a formal process to document the results of each use-case test; a minimal sketch of such a record appears after this list. As the discussion below of what you are testing for will make clear, you will need to be able to refer back to the results of previous tests to draw certain conclusions.
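As one way of making that documentation concrete, here is a minimal sketch of a use-case record and a test-run record. The field names and structure are my own assumptions, not tied to any particular QA tool.

```python
# Minimal sketch of a documented search test case and test run; the field
# names are illustrative and not tied to any particular QA tool.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class SearchUseCase:
    query: str                       # the standard search query being exercised
    persona: str                     # who is assumed to be running it
    expected_top_results: List[str]  # document IDs you expect near the top

@dataclass
class TestRun:
    use_case: SearchUseCase
    run_date: date
    actual_results: List[str]        # document IDs actually returned, in order
    response_seconds: float
    notes: str = ""

# Keeping every TestRun on file is what lets you compare Day 1 against
# Days 5 and 10 when you test for consistency below.
test_history: List[TestRun] = []
```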

What you’re testing for. There are then three key performance aspects that you will want to assess as part of your testing process, namely

  • consistency
  • speed
  • relevance

Consistency. This is the most straightforward aspect to test. You’ll want first to confirm that the same search string delivers up the same results for the same user over time. In other words, the fact that a search on Day 1 was satisfactory is insufficient in and of itself. You’ll want to confirm that on Days 5 and 10 your engine delivers the same result, in the same order (not including, of course, new content that has been indexed in the meantime).

Second, you’ll want to ensure that users with the same profile get the same results regardless of where or how they are logged on (in other words, whether they are on different desktops or in different offices). Note also that if you have adjusted the weighting according to the user profile, you have to be very careful to ensure that you compare apples to apples.
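A hedged sketch of both consistency checks follows. It assumes a hypothetical run_search(query, user) helper that returns an ordered list of document IDs; how you actually query your engine will depend on its API.

```python
# Sketch of the two consistency checks. run_search(query, user) is a
# hypothetical helper, assumed to query your engine and return an ordered
# list of document IDs; the real call depends on your engine's API.

def assert_stable_over_time(query, user, baseline_results):
    """Same query, same user: the results (and their order) should match the
    baseline recorded earlier, ignoring documents indexed since then."""
    todays = [doc for doc in run_search(query, user) if doc in set(baseline_results)]
    assert todays == baseline_results, f"Result drift for {query!r}: {todays}"

def assert_stable_across_locations(query, user, sessions):
    """The same user profile, logged in from different desktops or offices,
    should see identical results; sessions is a list of run_search-like callables."""
    result_lists = [session(query, user) for session in sessions]
    assert all(r == result_lists[0] for r in result_lists), \
        f"Results differ by location for {query!r}"
```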

Speed. Testing for speed is more difficult, since most users will assume that Google is the benchmark and that they should be getting results in nanoseconds. Here’s where you’ll need to manage expectations.

I understand that response times for enterprise search generally tend to range somewhere between five and seven seconds (although longer, of course, if the user is logging in remotely). If this is your first enterprise search implementation, however, users will likely be delighted with those sorts of speeds if they are getting good, solid results. Ultimately, though, most of your legal professionals look only to the relevance of the results returned when judging your search engine.

Relevance. Relevance is the most difficult aspect to test properly prior to launch. As noted above, you will want to establish use cases of the different categories of searches that you anticipate your users will be conducting after launch. The difficulty, of course, will be trying to assess in advance what those categories of search will be and what particular queries your typical users will run.

At this point, it may be useful to note the difference between precision and recall in search testing. The best description of the difference between the two that I have read is the following, from an article entitled Testing Search for Relevancy and Precision, by John Ferrara:

Precision is the ability to retrieve only the most relevant results. Higher precision means better relevance, but it may mean fewer results returned. Recall, for a given query, is the ability to retrieve as many documents as possible that match or are related to that query.

Recall may be improved by linguistic processing such as lemmatization, spell-checking, and synonym expansion. In information retrieval, there is a classic tension between recall and precision. If you aim for more recall (trying to find all the relevant items), you often get a lot of junk. If you limit your search to only the most precisely relevant items, you can miss important items because they don’t use quite the same vocabulary.

Getting the balance right between precision and recall is more art than science and is one of the areas where input from consultants who have engaged in other search implementations will prove particularly valuable.
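For testers who want to put numbers on this balance, the standard calculations are simple. The sketch below uses generic document-ID sets; the example figures are invented for illustration.

```python
# Standard precision and recall calculations for a single test query.
# 'retrieved' is what the engine returned; 'relevant' is what your use case
# says it should have found. Both are sets of document IDs; the example
# figures are invented.

def precision(retrieved: set, relevant: set) -> float:
    """Share of returned documents that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Share of relevant documents that were actually returned."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

retrieved = {f"doc{i}" for i in range(1, 11)}                  # 10 results returned
relevant = {"doc1", "doc2", "doc3", "doc4", "doc5", "doc6",
            "doc20", "doc21"}                                  # 8 relevant overall
print(precision(retrieved, relevant))  # 0.6  (6 of the 10 results are relevant)
print(recall(retrieved, relevant))     # 0.75 (6 of the 8 relevant documents found)
```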

In trying to get this balance right and in preparing what we expect to be “typical” queries, we knowledge management professionals tend to over-estimate the sophistication of our users. In my firm, a review of the search strings that users actually ran (which we could of course only examine after launch) showed that most users use only a few words, generally without quotation marks around specific phrases. I suggest that you prepare your test cases accordingly.

The best course is to go to several lawyers who you know are supportive of your project and review your draft use cases with them, to confirm that these are the types of searches that they might reasonably anticipate running.

Next, you need to get your pilot testers to run those use cases with you standing over their shoulder, watching what they actually do and recording their results. This will serve two purposes. First, it will allow you to see how users actually use the search engine in “real life.” You’ll also see the “mistakes” they make, and be able to adjust your training accordingly.

Second, once you launch your engine, you will want to go back and run the same searches from those use cases and confirm that you get the same results as when you were in pilot testing. It’s possible that those results may vary after launch and, if so, you’ll want empirical data to study and take to the vendor, if needs be.

Once you’ve done your testing, tweaked your settings, and made all the other technical, behind-the-scenes changes you need to make, you’re ready for roll-out. That will be the subject of my last posting in this series.

Enterprise Search Implementation, Step 4: Database Cleanup (Hiding What You Shouldn’t Be Able To Find)

30 Nov

Guest Post By John Gillies, Director of Practice Support at Cassels Brock

This is the fourth in a series of posts about the process of choosing and implementing an enterprise search engine in a law firm. The first addressed Establishing the Business Requirements and the second looked at Picking the Right Search Engine. The last one looked at the Proof of Concept stage, which is where you put your selected engine to the test and ensure that it performs as expected in your environment. Assuming that it passed those tests and the decision has been made to proceed, the next hurdle is cleaning up the databases you will be indexing.

As part of your strategic planning, you will have decided which databases those are. The primary advantage of indexing two or more databases is that users are able to see aggregate results brought together that would otherwise have to be searched separately. The main disadvantage of doing so is that you will have to ensure that, in mixing apples and oranges (as it were), the results are displayed in a way that users can understand and use. In the initial roll-out of enterprise search at our firm, for example, we opted to index only the documents in the document management system (DMS). We did this so that we could start with focused content, train users on using the tool for that content, and then slowly build the available content.

Among the databases commonly indexed for enterprise search are the accounting system, the DMS, the library catalogue, the KM/precedent repository, relevant content on the firm intranet, and the legal updates on your firm website. While indexing the last four items on this list should be fairly straightforward, indexing the accounting and document management systems poses challenges of its own.

Accounting

Indexing the accounting system requires you to make policy decisions as to who will be able to see what content. For example, can all users search the financial data? Only certain users? All the accounting data or only certain segments? Furthermore, from a usability perspective, while the search engine offers the ability to deliver all the content that corresponds to the search criteria, you may wish to narrow the financial data indexed so as not to overwhelm the user.

One issue to address is whether to index time entry narratives. Those narratives may provide very relevant information, particularly when identifying internal expertise. The question is whether the firm wishes to expose this information to all users. This is one area where the solution is not all or nothing. You may choose, for example, to index this data and use the results for determining relevance, without displaying the actual content.
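How you express “index it for relevance but don’t display it” depends entirely on your search engine, but the underlying policy decision can be captured in something as simple as a field-by-field table. The sketch below is illustrative only; the field names are assumptions.

```python
# Illustrative field-level policy for indexing accounting data. The exact
# mechanism depends entirely on your search engine; the point is that
# "searchable" and "displayable" are separate decisions for each field.
# All field names here are assumptions.
ACCOUNTING_INDEX_POLICY = {
    "matter_number":        {"searchable": True,  "displayable": True},
    "client_name":          {"searchable": True,  "displayable": True},
    "timekeeper":           {"searchable": True,  "displayable": True},
    "time_entry_narrative": {"searchable": True,  "displayable": False},  # relevance only
    "billing_rate":         {"searchable": False, "displayable": False},  # exclude entirely
    "amount_billed":        {"searchable": False, "displayable": False},
}
```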

Document Management System

You will have several concerns with indexing your DMS content. First and foremost you will need to ensure that your confidentiality screens effectively deal with relevant content. This works both ways. In other words, those behind a screen need to be able to find content that they are entitled to view, and those outside the screen need to be blocked from seeing any of that content.

Dealing With “Sensitive” Documents

It is, however, the problem of “sensitive” documents in the DMS that will prove the most vexing. “Sensitive content” may include, for example, confidential memos from firm committees, memos regarding partner allocations and associate compensation, performance reviews, and so forth.

(You may wish to review the PowerPoint slides from the ILTA 11 presentation entitled Managing Risks Associated with Enterprise Search, a panel composed of Lisa Kellar Gianakos, the Director of Knowledge Management at Pillsbury Winthrop, Rizwan Khan, the Vice President of Customer Service at Autonomy, and me.)

Typically, in the process of implementing enterprise search, firms discover that sensitive content that should not, for whatever reason, be public has in fact been filed in a publicly accessible part of the DMS. Until that point, that content had not really been available because, realistically, users would have been unable to find it (colloquially referred to as “security through obscurity”). With the advent of better search, that approach is no longer possible.

One way to start finding and securing this content is to draw up a list of “dirty words”. You may wish to begin by referring to the terms on List A that formed part of our ILTA presentation (which are also reproduced as an appendix at the end of this article).

This slide from our presentation shows the most frequently recurring “dirty words” as a tag cloud:

Dirty Word Tag Cloud

You will, however, need to exercise discretion when reviewing the results that a search for these terms returns. For example, while it might seem logical to search for curse words, they frequently appear in e-mails and other documents sent to the firm, and in court transcripts, so you should not set up an absolute rule to exclude these terms.

Consider searching for some or all of the following (a scripted sweep along these lines is sketched after this list):

  1. Terms related to the payment of personal income taxes (e.g., where a lawyer has saved to the DMS letters related to the amount and/or payment of personal income taxes).
  2. Wills and related documents such as “last will and testament”, “living will”, and related terms, such as “life support”. Do the same relating to family law matters, like “divorce”, “separation”, “alimony”, “cohabitation”, etc. (The exact terms will depend on the terms used in your jurisdiction.) Note, however, that if your firm has an estates or a family law practice, a number of these terms may legitimately form part of client files. If firm members have used the services of either the estates or family law group, ensure those files are protected.
  3. Names of firm committees such as “executive committee”, “management committee”, etc. Confidential e-mails to and between committee members are not infrequently filed in publicly accessible locations.
  4. Terms like “cottage”, “country house”, or whatever people may call their secondary residence.
  5. Documents filed under the personal matter numbers of firm members (if you have such numbers), although there may be relevant public material there, such as conference papers, articles, and publications.
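As an illustration of how such a sweep might be scripted, the sketch below assumes a hypothetical run_search(term, scope=...) helper against your index and writes the hits to a CSV for manual review; the term list and field names are examples only.

```python
# Hypothetical sweep for sensitive content in publicly accessible locations.
# run_search(term, scope=...) is an assumed helper that queries your index and
# returns hits with an "id" and a "location"; the term list is an example only.
import csv

DIRTY_WORDS = [
    "partner compensation", "associate reviews", "performance review",
    "last will and testament", "executive committee", "termination letter",
]

def sweep(terms, output_path="sensitive_content_hits.csv"):
    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["term", "document_id", "location"])
        for term in terms:
            for hit in run_search(term, scope="public"):  # public DMS content only
                writer.writerow([term, hit["id"], hit["location"]])

# The output is reviewed by hand: curse words or family-law terms, for example,
# can legitimately appear in client documents, so nothing is excluded automatically.
# sweep(DIRTY_WORDS)
```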

Check with your Finance and HR departments to find out what terms they would search for. Also, seek suggestions from your pilot group, since they may well come up with terms that your implementation team will not have thought of.

This is perhaps a good opportunity to determine whether any of your internal policies (for example, on confidentiality screens) or external policies (for example, relating to the protection of personal information) need to be updated or whether more internal training is needed.

Understand, as well, that this process should be iterative. Even after you are confident that you have plugged the leaks in the dike, you should continue to do different searches to ensure that you have stopped as much as you can. Consider setting up a reminder system to test these issues post roll-out.

Particularly in the first few months after launch, you will want to review reports of the search terms that users have been using, in part to get a sense of what user behaviour actually is (as opposed to what you’ve assumed it will be!) but also to determine whether users are using terms you had not thought of that might turn up other sensitive documents.
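If your engine lets you export its query log, even a very small script can surface both of these things: what users actually type, and queries that hint at sensitive material you had not screened for. The file name and watch terms below are assumptions for illustration.

```python
# Sketch of a post-launch review of the query log, assuming the engine can
# export it as plain text, one query per line (file name and watch terms are
# assumptions for illustration).
from collections import Counter

with open("search_query_log.txt", encoding="utf-8") as f:
    queries = [line.strip().lower() for line in f if line.strip()]

# What users actually type (in our experience: a few words, rarely in quotes).
print(Counter(queries).most_common(25))

# Queries that suggest a hunt for sensitive material you had not screened for.
WATCH_TERMS = {"bonus", "compensation", "divorce", "resignation"}
print([q for q in queries if any(t in q for t in WATCH_TERMS)])
```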

When setting expectations for the implementation, you should be aware that your testing for “sensitive” documents may end up being the most time-intensive portion of your project. Depending on your variables (primarily the number and size of the repositories you will be indexing), you will want to devote several months to ensuring that you are satisfied with the results that users will be seeing. You will want to avoid any unnecessary bumps at the outset, since they can tarnish the impression of the search engine you will have spent so much time preparing.

When you are satisfied on this point, you are now ready for pilot testing, which is the topic I will treat in my next article.

Appendix: “Dirty word” list

  • Associate reviews
  • Bonus allocation
  • Bonus decision
  • Bonus structure
  • Charitable contributions
  • Charitable donations
  • Department budget
  • Direct deposit
  • Discretionary bonus program
  • Equity partner
  • Operations committee/Executive committee
  • Partner admission
  • Partner compensation
  • Partner remuneration
  • Partnership admission
  • Partnership issues
  • Partnership meeting
  • Performance review
  • Promote/promotion
  • Resignation
  • Staff bonus
  • Termination letter/letter of termination


Enterprise Search Implementation, Step 3: The Proof of Concept

16 Sep

Guest Post By John Gillies, Director of Practice Support, Cassels Brock

This is the third in a series of postings about the process of choosing and implementing an enterprise search engine in a law firm. The first addressed Establishing the Business Requirements, while the second looked at Picking the Right Search Engine.

In my previous posting, I talked about the process that should have resulted in your choosing the search engine that (ideally) best meets your needs. The next stage, the proof of concept, is where you put it to the test and ensure that it performs as expected in your environment.

The following is the definition of “proof of concept” from Technology.org: “A proof of concept [PoC] is a test of an idea made by building a prototype of the application. It is an innovative, scaled-down version of the system you intend to develop. In order to create a prototype, you require tools, skills, knowledge, and design specifications.” (I don’t know if the two terms are related, but for me “proof of concept” brings to mind the often misquoted phrase, “the proof of the pudding is in the eating.”)

Essentially, in the PoC you and your IT colleagues are looking to see whether the engine not only does what the vendor has promised, but whether it also does what you need it to do, in the way you need it done, and does so properly in your technical environment.

Generally, this will involve loading the search software in a test environment, indexing a small percentage of the documents that it will be searching, and running tests to determine how it responds.

While your due diligence up to this point will have included confirmation that the engine should perform in your IT environment, now is the time that the nitty gritty testing will take place. (Since every firm’s IT infrastructure is unique, it’s important to do this testing before proceeding any further.)

You will have identified, in your business requirements document, the various document repositories that you will be indexing (such as the DMS, your accounting system, the intranet, etc.). Once the initial technical testing is done, you will take an appropriately sized “slice” of each of those repositories to index. Ideally, this will include a representative mix of document types, sizes, and security profiles. It will provide the raw data used for the rest of your PoC testing as well as for pilot testing. You will want to make sure that the resulting data subset fits together to provide meaningful results; you may wish, for example, to index documents published within the last 30 days. Be mindful that, as a result, there may be content that pilot testers expect to find but that is not in fact in the PoC database, so it will be important to manage expectations at that time.

Once your various data repositories have been crawled and indexed, you’ll need to set up the user interface so that it displays properly, and then set up and test the security modules. One of the first questions that lawyers will ask is whether the search engine respects the permission walls erected around sensitive information. This should be straightforward to confirm for DMS documents, but if you are including other repositories, particularly for your accounting information, you will want to pay special attention to this issue. Nothing will sink acceptance of your search engine faster than the discovery that users are suddenly able to access documents that should be hidden from them.

This will be the point where you should determine when you are going to conduct your database cleanup. While I will deal with this issue in more detail in my next posting, it’s important to note that you will find that there are a number of sensitive documents that currently exist but are essentially hidden because your current search tools are inadequate to reveal them. (This is referred to as “security through obscurity”.) Since you will have to conduct this cleanup before launch, your question is whether to do it now, before the pilot, or afterwards.

Now you will finally be at the point where you can start some serious testing. Your colleagues in IT will need to carry out their technical tests while you use your business requirements document to test the four key aspects of the search engine, namely

  • relevance
  • responsiveness (i.e., speed)
  • consistency
  • proper working of the key functions

You should know that responsiveness is difficult to test in the PoC, because it’s not really until a full index of your data sources is performed, and released into production, that you’ll know how fast it is. The other three aspects, however, are what you should be focusing on.

As to the third bullet, the search engine should consistently apply the established rules for weighting, ranking, and security on a user-by-user basis. Different users may therefore see different results, but each user’s experience remains consistent regardless of how he or she connects to the engine.

It will be very helpful if you develop use cases to test these aspects. Develop various types of typical searches that you expect different users to conduct, then carry out those searches and record the results. It’s useful in this context to develop personas (e.g., first year associate who knows nothing, experienced senior associate managing a deal, partner in a litigation matter, assistant doing a search on behalf of his or her lawyer, etc.). With your knowledge of the business requirements, you should also develop test cases that highlight what a specific type of user should “not” find (due to data source, or document security). You will want to keep these use cases for more testing during the pilot phase.
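A minimal sketch of such persona-driven use cases, including the “should not find” checks, might look like the following; the personas, document IDs, and run_search(query, user) helper are all hypothetical.

```python
# Sketch of persona-driven use cases, including what a user should NOT find.
# The personas, document IDs, and run_search(query, user) helper are all
# hypothetical; real IDs and security groups come from your own environment.
PERSONAS = {
    "first_year_associate": "assoc01",
    "senior_associate_on_deal": "assoc15",
    "litigation_partner": "partner07",
    "assistant_for_lawyer": "asst03",
}

USE_CASES = [
    {"persona": "first_year_associate",
     "query": "share purchase agreement precedent",
     "must_find": ["DMS-1001"],
     "must_not_find": ["DMS-9001"]},  # e.g., behind an ethical wall for this user
]

def check_use_cases(use_cases, personas):
    for case in use_cases:
        user = personas[case["persona"]]
        results = set(run_search(case["query"], user))
        assert set(case["must_find"]) <= results, f"Missing expected results: {case}"
        assert not set(case["must_not_find"]) & results, f"Security leak: {case}"
```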

Assuming everything has gone well in your PoC, you will be ready to accept the software, engage in the database cleanup (if you haven’t done so already), and proceed to your pilot testing.


Enterprise Search Implementation, Step 2: Picking The Right Search Engine

6 Jul

Guest Post by John Gillies, Director of Practice Support at Cassels Brock

In my previous posting on implementing a search engine in a law firm, I focused on the first step of the process, namely Establishing the Business Requirements. Getting a detailed list of your business requirements is the essential starting point, because you will use it to compare the features of your “finalists.”

Having got to this point, your next hurdle is figuring out which search engines you’re going to want to test. You may well choose at some point in the process to involve an outside consultant with experience in search engine implementations to help guide you through the process. We found the help of our consultant (Joshua Fireman from ii3) to be invaluable. If you haven’t done so before now, this would be a good time.

In determining what sort of search engine you’re looking for, you face a choice between two routes: restrict your search to engines that have been customized for the legal market, or look at engines designed for the general market (for the Fortune 500 crowd, if you will), knowing that you’ll need to do a fair bit of customizing to address the many unique aspects of a legal environment.

You should only consider going the second route if you have reliable support on the technology front so you’re confident that your business requirements can be realized in your environment. For example, if integration with your DMS is an important requirement, will you actually be able to optimize your “Fortune 500 search engine” to do that effectively? Also, how much ongoing coding work will need to be done so it continues to function properly in your environment as it is upgraded? Many firms choose the first route simply because they do not want to rely so much on variables, many of which are beyond their control.

We opted to limit our selection to engines optimized for the legal environment, namely Autonomy iManage Universal Search (a/k/a “IUS”) and Recommind’s Decisiv Search. It is here that the investment of time in defining your business requirements really pays big dividends. In my previous posting, I noted that we had ranked our requirements by importance (from “Essential” down to “Nice to have.”) You can now use that list to create an Excel spreadsheet that you can use to compare your finalists.

We created five categories of weightings, scored from 5 down to 1. We then added another column that rated each engine on how well it met the particular requirement, also on a five-point scale. We systematically went through each item in the business requirements document, assessed how well the particular feature performed, and assigned a score. Excel then calculates the weighted score for each item (so, for example, an “Essential” item that you give a score of 5 gets a weighted score of 25).

While there is no “ideal” minimum or maximum score that you are hoping to see at the end of this process, it’s possible that the ultimate scores will be so low that you will have to reassess your entire process, though the likelihood of that is minimal. What you will most likely get is a total score for each of the finalists that enables you to make a much more objective comparison than if you had just seen vendor demos of each.

You can also use Excel to compare your finalists just on their scores for the “Essential” items. (You may find, for example, that the overall result is fairly even between them, but one of them scores significantly higher when comparing just the “Essential” items. Once again, this is important information in helping you make your final decision.)
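The arithmetic behind the spreadsheet is straightforward, and reproducing it outside Excel can help you sanity-check the totals. The requirements, weights, and scores below are invented for illustration.

```python
# The same weighted-score arithmetic described above, reproduced outside Excel
# as a sanity check. Requirements, weights (5 = "Essential" ... 1 = "Nice to
# have"), and the per-engine scores are invented for illustration.
REQUIREMENTS = [
    ("DMS integration",           5),
    ("Relevance ranking quality", 5),
    ("People/expertise search",   3),
    ("Mobile interface",          1),
]

SCORES = {
    "Engine A": [5, 4, 3, 2],
    "Engine B": [4, 5, 4, 4],
}

for engine, scores in SCORES.items():
    total = sum(weight * score for (_, weight), score in zip(REQUIREMENTS, scores))
    essential = sum(weight * score for (_, weight), score in zip(REQUIREMENTS, scores)
                    if weight == 5)
    print(f"{engine}: total weighted score = {total}, essential-only = {essential}")
```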

That is not, however, the end of the matter. Whether your finalists essentially get the same score (which is what happened in our case) or whether there is a clear winner, there are other non-quantifiable factors that you need to take into account, all of which can significantly influence your final decision.

The first factor is, of course, price. (You may in fact have taken this factor into consideration at the outset in determining which engines were, or were not, going to be tested.) Then, there are some factors that are likely relevant for any firm, as well as others that may be unique to your environment.

Among the common factors might be items such as:

  • What is your relationship with the vendor? If you use other applications from this vendor, what is its history of responsiveness to issues you’ve raised about those applications?
  • What are the announced upgrades for the next version of their engine, and what is their development roadmap? What process do they follow in determining which features to focus on for the future?

Aspects relating to your unique environment depend on the state of your current IT infrastructure and might include:

  • How well will this engine integrate with your current applications?
  • What repositories do you intend to index and what are the implications for integrating those different repositories?
  • What internal support requirements are there?

At the end of this process, you should have all the necessary elements for making a final decision, picking a “winner,” and then moving to the next stage, namely the proof of concept.
