Case study: All in the mix

By | 26th June 2012

Many of you may have wondered what is actually involved in a project where lawyers decide to use some of the latest technology to help them and their clients make sense of a vast amount of data in a way that is relatively quick, straightforward, reliable and cost-effective.

Further, I suspect that many people will have read at least one of the numerous articles on predictive coding, aka technology/computer assisted review, over the past few months and wondered whether this was really something which was relevant to their everyday practising lives.

Advertising makes it relatively easy to assert that such and such a technology, this particular application or that particular tool will solve the problem at hand. It does not matter whether we are talking about washing powder, window cleaner or air freshener. Indeed, all advertising relies on the same simple technique. Put in front of the viewer a pretty picture and some simple words which illustrate the good points of whatever is being promoted, and sales will follow.

That at least is the theory. So what about the practice?

I make no apology for returning to the subject of technology assisted review because it is clear to me that, in the right circumstances, it is not only a game changer but an essential tool in the armoury of the litigation warrior. If you have kept abreast of the developments both here and in the US, you will know that I am an advocate of the use of the technology where appropriate. Add to that the recent passing into law of LASPO (the Legal Aid, Sentencing and Punishment of Offenders Act 2012) and you have a powerful reason to consider the use of such technology. If you are not already aware of the need to consider the use of technology in the disclosure arena, and you have not yet heard of cost management and budgeting for litigation, a visit to your favourite knowledge management website (such as the Smart e-Discovery Blog) and a swift perusal of what awaits when the changes promised for April 2013 take effect will pay dividends.

Last year we were involved in a major disclosure exercise with Eversheds during the course of which we recommended the use of Equivio Relevance to cull the apparently responsive documents down to a reasonable/manageable number. The subsequent case study may be viewed here – Case Study: Predicting the future of disclosure – and it has also been the subject of a recent iPadio podcast between Dominic Lacey and Jamie Tanner of Eversheds, our own James Moeskops and Chris Dale of the e-Disclosure Information Project.

I was challenged recently by a lawyer who said that she had seen and heard much about computer assisted review but who wondered whether we or anyone else in the technology market were actually delivering real live examples of the exercise in practice. In other words, the marketing hype had, in her view, got ahead of the delivery of the product in practice.

Of course I was able to point her to the Eversheds case study, but the conversation started me thinking that we could and should be prepared to give more practical examples of the many cases where we are delivering this technology on a daily basis to our law firm clients.

I have, therefore, selected just one example of a current project. I chose it because Emma Kettleton, one of Millnet’s most experienced project managers, is enthusiastic about it and because it is an ongoing project. This means, I hope, that there is the prospect of further updates as the case proceeds.

For obvious reasons I must be circumspect about the facts so as to guard against any breach of confidentiality. Necessarily, therefore, some of the detail which follows is sparse, but I hope it will give you an idea of how these things work and why everyone needs to understand what lies behind the technology and how it can help reduce cost and make sense of even the largest collections of data.

The starting point for our purposes is that the parties were faced with in excess of 300,000 documents to review for potential disclosure. These documents were reduced by keyword searching to 153,680 documents which needed to be reviewed. In the context of the case, this was clearly a disproportionate exercise when you factored in the lawyer hours and the sheer amount of time involved in reviewing in excess of 150,000 documents.

Millnet, therefore, suggested the use of Relativity Assisted Review for two main reasons:

Firstly, the documents which were initially responsive to the keyword searches were mainly communications between the two parties, both of whom had equal authority to manage the business on a day-to-day basis. It became clear early on that most of these 153,680 documents contained largely irrelevant material relating to the daily operations of the company and not to the particular issue or agreement in dispute between the parties. As a result, keywords could not be used to eliminate, or sufficiently narrow, the collection of documents to be reviewed.

Secondly, the client wanted to “cull” the false positives (the keyword responsive but irrelevant documents) as quickly and as cost effectively as possible.

We found that out of the 153,680 keyword responsive documents, 128,873 were compatible with Assisted Review (because they contained text and were not spreadsheets, image files, or documents too large or too small to be indexed).

We initiated an overall workflow as follows:

  1. Perform (run) Assisted Review
  2. Based on a stable set of results:
    1. Carve out the non-responsive documents and perform a manual spot check of those documents that were tagged by the computer (not a human) to further validate the results.
    2. Perform a full human review of these responsive documents for the purpose of checking for privilege, confirming relevance and ensuring a full understanding of the documents being provided to the other side.
    3. Manually review any documents not compatible with Assisted Review.
  3. Reduce the number of documents required to be reviewed (whilst delivering the same or a better result than if they had all been reviewed by humans) and therefore the overall review costs.

The case is ongoing. To date, the senior lawyer in charge of the review process has reviewed just over 3,000 documents manually as a part of the Assisted Review project (after an initial sample round and a first round of Quality Control (“QC”)).

The results of the first QC round have been very promising:

Of the 128,873 documents:

  • 116,717 were marked as Non-Responsive with an overturn rate of only 1.3%. This means that if we were to stop assisted review at this point, there would be a risk that 1.3% of those documents (in the region of 1,500) which were in fact responsive might be omitted from the documents to be disclosed.
  • 11,847 were marked as Responsive with an overturn rate of 42% (a substantially unstable figure). Regardless of the overturn rate, if the process were to stop here, the lawyers would be reviewing in the region of 5,000 documents marked as responsive which may ultimately prove not to be.
  • 309 were uncategorised (if these remain uncategorised later, they will require manual review).
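For those who like to see the workings, the arithmetic behind those QC figures can be sketched in a few lines (an illustrative calculation only, using the counts quoted above):

```python
# Illustrative arithmetic behind the first QC round figures quoted above.
non_responsive = 116_717
responsive = 11_847
uncategorised = 309

# The three categories account for the whole Assisted Review-compatible set.
total = non_responsive + responsive + uncategorised
assert total == 128_873

# Overturn rates from the first QC round.
missed_if_stopped = round(non_responsive * 0.013)  # responsive docs that would be missed
false_positives = round(responsive * 0.42)         # marked responsive but likely not

print(missed_if_stopped)  # 1517 -- "in the region of 1,500"
print(false_positives)    # 4976 -- "in the region of 5,000"
```

The point of the exercise is that the non-responsive pile is already stable (1.3% overturn), while the responsive pile (42% overturn) is exactly where further QC rounds earn their keep.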

To sum up at this stage:

  • Had the client not adopted assisted review, they would have had to review 153,680 documents manually.
  • Using Assisted Review it is likely that they will only have to review in the region of 40,000 documents (essentially those documents not compatible with assisted review, a sample spot check of the non-responsive documents and all responsive documents).
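As a rough sanity check on that 40,000 estimate, the back-of-the-envelope sum looks like this (my own illustrative figures: the spot-check sample size is an assumption, not a number from the project):

```python
# Back-of-the-envelope estimate of the remaining review burden.
# The spot_check_sample figure is an assumption for illustration;
# the other counts are those quoted in the text above.
total_keyword_responsive = 153_680
compatible_with_ar = 128_873
marked_responsive = 11_847
spot_check_sample = 3_000  # assumed size of the non-responsive spot check

incompatible = total_keyword_responsive - compatible_with_ar  # manual review required
to_review = incompatible + marked_responsive + spot_check_sample

print(to_review)                               # 39654 -- "in the region of 40,000"
print(total_keyword_responsive - to_review)    # 114026 documents spared manual review
```

On those assumptions, roughly three quarters of the original review burden falls away, which is where the time and cost savings discussed below come from.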

I plan to report further, if appropriate, as the case progresses. In the meantime, it is worth contemplating that even at this stage of the case the potential for savings in both time and cost is huge. To ignore assisted review would be disproportionate and unrealistic, to say the least. You need to understand enough about the underlying power and relevance of the technology to know that you need the help of an expert who knows what he or she is doing and has experience of delivering the relevant workflow. The results may be awesome.

The trick is all in the mix!

This case study has been updated, see: All in the mix revisited