In 2010, I developed a unique method of comparing bar exam essay and MPT answers to help examinees better understand how to write a passing essay/MPT. These MEE/MPT Essay Comparison Banks are essentially databases of graded examinee MEE and MPT answers that can be compared to one another. I find that these Comparison Banks are invaluable in helping examinees quickly and efficiently discover “what works” versus “what doesn’t work” on the written component of the exam. For example, an examinee who recently passed told me: “I think this helped me immensely, because although I had not practiced writing any essays, I still really got a feel for the tone, length, content and structure of passing answers which created a ‘voice’ in my head when writing essays.” These Comparison Banks compare actual graded MEE and MPT answers to one another and to the model answers so you can better understand what a passing answer consists of (currently based on the past 16 administrations of the MPT and 3 administrations of the MEE). You can view poor essays, exactly passing essays, or high scoring essays, and compare any essay to another side by side. Comparing real essays will show you things you may frequently miss and remind you that there is only a finite number of issues passing essays actually cover. Examinees who only look at model/released answers frequently develop unrealistic expectations of what is required for a passing answer, whereas seeing that even higher scoring essays contain mistakes will help you develop realistic expectations of the bare minimum required to pass an essay/MPT.

In order for essay graders to be relatively consistent with one another, they must convene to “calibrate.” In this calibration process, the graders look at a sample of examinee essays and rank-order them until all the graders are in agreement as to which essays can be regarded as templates of superior answers and which essays are lower-ranked templates. Thus, the more you look at essays that the bar graders themselves regarded as superior, the more likely you will emulate those essays and achieve superior essay scores yourself.

I like to think of my materials as gap-fillers that dovetail with the big bar review materials, and this is a perfect example. Nowhere else can you look at a number of exactly passing answers to see how much (or how little) is required for an exactly passing score. Furthermore, subscribers can review examples of the high scoring MEE essays to: (1) see how the high scoring essays are structured (e.g. how they use CIRAC/IRAC, how they address the issues, and how they format their answers with regard to issue statements, conclusions, and bolding/underlining/italicizing); and (2) see how the high scoring essays properly analyze the issues. When you look at the text comparison, you start to see the commonality in language among high scoring answers – in a sense, this trains you to include the same language in similar essays. The PDF comparisons (where you view the actual PDFs of the answers side by side) let you see each essay’s layout, structure, and formatting so you can visually learn how to emulate the high scoring answers (and conversely, avoid the styles of the low scoring answers). For example, one examinee (non-subscriber) told me: “I did much better on my essays this time due in large part to your comparison tool. I found that to be extremely helpful.” For that exam, the examinee’s essay average was ranked 9/196 (this means the examinee had an essay average better than 96% of the examinees that sent me their score information). For the prior exam, the examinee had an essay average of 131/315 (which was better than 58% of the examinees that sent me their scores). I find that many underperforming candidates benefit greatly from this creative approach to examining/comparing graded MEE/MPT answers.

The New York legislature itself has said that “Disclosure of past testing materials and applicant examinations allow prospective attorneys to become aware of testing subject matter and methodology so that otherwise qualified attorneys are not defeated in their attempts to pass the bar examination.” By comparing and contrasting MEE/MPT answers, you are doing what any good attorney should be doing. Put simply, the more you learn about what comprises a good written answer, the better you will do on future answers. Examinees who self-evaluate can write an answer to a question in the MEE or MPT Comparisons and then compare their answer to other graded answers. If this is too much effort, you can simply look at passing and above-passing MEEs and take mental notes. Again, good essays look like other good essays. The best way to understand the benefits of using an MEE or MPT Comparison Bank is to see it for yourself. Following are links to small samples of the February and July 2010 MPT comparisons:



In the above sample MPT Comparison Banks, each MPT is compared to every other MPT. While the above samples consist of about 10 MPTs, the subscription-based MPT Comparison Bank consists of 700+ graded MPTs spanning the past 16 MPT administrations. The Comparison shows the MPTs in score order – the lowest scoring MPT comparisons are at the top and the highest scoring MPT comparisons are at the bottom. Each Comparison examines a range of graded MPTs from an administration (both handwritten and typed) and looks through them for matching phrases (a minimum of two words). The reports contain the document text with the matching phrases underlined (in red or green depending on the context – more on this in the detailed explanation below). The reports also show PDFs of the two MPTs you selected side-by-side. Examinees learn by example – reviewing a collection of graded MPTs helps you better understand the MPT. Bottom line, good MPTs look like other good MPTs.

Please note that the PDFs for these Comparison Banks need to be viewed on a computer. Devices such as phones will not display them (aside from the inefficiency of trying to use this Comparison on a device with a small screen). An explanatory tutorial video of the MEE/MPT Comparison Banks is below:



In a March 2014 article entitled "It Should Be About Feedback and Revision," J. Elizabeth Clark, an English professor at LaGuardia Community College, wrote that "[h]igh-stakes essay writing is about learning to game the system. Good test takers are just that: Students who learned the rules of the game, often through expensive test-prep courses that disadvantage poor and at-risk students. Those with greater access to coaches and materials and practice do better on the exam, but that does not mean they are better writers." This MPT Comparison is an excellent way for examinees to learn "the rules of the game." Put simply, there is no other resource available that enables examinees to compare and contrast hundreds of graded MPTs. Please use it to your advantage, especially if you do not have the time to practice many MPTs.

In order to define the "rules of the game," it is necessary to examine automated grading. Automated grading software essentially sets the "rules of the game" by defining grading parameters and then scoring accordingly. For example, E-rater is an automated grader developed by ETS, the largest educational testing company in the United States (and the writer of the MBE). According to a 2001 paper on E-rater, “… e-rater also examines an essay’s content -- its prompt specific vocabulary -- both argument-by-argument and for the essay as a whole. The expectation is that words used in better essays will bear a greater resemblance to those used in other good essays than to those used in weak ones, and that the vocabulary of weak essays will more closely resemble words in other weak essays. Programmed on these assumptions, e-rater assigns a number to each essay on the basis of the similarity of its vocabulary to that of other previously scored essays.” See Stumping E-Rater: Challenging the Validity of Automated Essay Scoring, Powers (2001). While the MPT component is currently graded by human graders, it stands to reason that the criteria relied on by automated graders are for the most part the same criteria relied on by human graders.
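The vocabulary-resemblance idea quoted above can be made concrete with a toy similarity measure. The sketch below is only a conceptual illustration, not E-rater's actual implementation; the function name and the sample essay fragments are invented for the example.

```python
from collections import Counter
import math

def vocabulary_similarity(essay_a, essay_b):
    """Cosine similarity between the word-frequency vectors of two essays.

    A toy stand-in for the idea that "words used in better essays will bear
    a greater resemblance to those used in other good essays."
    """
    a = Counter(essay_a.lower().split())
    b = Counter(essay_b.lower().split())
    dot = sum(count * b[word] for word, count in a.items())
    norm = (math.sqrt(sum(c * c for c in a.values()))
            * math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

# An ungraded essay scores higher against a strong answer than against a weak
# one, suggesting which pool its vocabulary resembles (all text invented).
strong = "the covenant not to compete is unenforceable because it is overbroad"
weak = "the contract thing is probably bad and the guy should win"
new_essay = "the covenant is unenforceable because the restriction is overbroad"
print(vocabulary_similarity(new_essay, strong) > vocabulary_similarity(new_essay, weak))  # → True
```

Under this kind of measure, writing that shares the issue-specific vocabulary of high scoring answers is rewarded, which is precisely why reviewing those answers pays off.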

Accordingly, examinees taking July exams should look at the MPTs from July exams while examinees taking February exams should look at the MPTs from February exams to get a better understanding of the components of a passing MPT for the upcoming exam. Comparing and contrasting MPTs in the MPT Comparison will give you a much better idea of how the MPTs are graded and how to compose a passing MPT. Examinees should review the high scoring MPTs to pick up on good vocabulary and should examine the low scoring MPTs to identify and avoid bad vocabulary. As the Stumping E-Rater paper explains: "words used in better essays will bear a greater resemblance to those used in other good essays than to those used in weak ones."

However, while good essay terminology and linguistic cues can improve your score, "essay length is the single most important objectively calculated variable in predicting human holistic scores." See Automated Essay Scoring With E-rater V.2.0, Attali (2004). Basically, the length of an essay was the most reliable indicator of an essay's score. I personally believe that one day the grading of bar exam essays will be automated, as there are too many problems with the human holistic method of grading essays and MPTs. However, whether the graders are humans or machines, good MPTs look like other good MPTs. As such, use this MPT Comparison to learn the organization, terminology, linguistics, and writing style of these good essays.

This MPT Comparison consists of 600+ graded MPTs spanning the last 14 exams (from February 2010 to July 2016). I feel this analysis is invaluable for examinees to discover "what works" versus "what doesn't work." For example, you can use the Text comparison to compare high scoring MPTs to the released above average answers to see what phrases both essays shared. By comparing your MPT to MPTs that out-scored you, you can hopefully pick up ideas on style/headings/analysis/etc. You want to mimic these higher scoring answers. I am in the process of examining the essays to see if I can determine (with reasonable confidence) whether anything other than an essay's content has a bearing on the essay's score. For example, do penmanship, neatness, headings, or misspellings have any effect on an examinee's final score?

Some of the benefits of this MPT comparison are:

• You can compare an MPT to any of the other MPTs or to the question and above average answers. For example, you can compare the best MPT to the worst MPT, or compare MPTs with similar scores (and sometimes exactly the same score).
• You will see exactly what the bar graders consider to be a passing essay. You can then compare your essay to passing essays (for example, above a 50.44 scaled score on the February 2010 exam or above a 46.81 scaled score on the July 2010 exam).
• You can learn how much (or how little) is written for high scoring MPTs.
• In the Text comparison, you can see exactly what words an MPT used that the comparison MPT also used.
• In the PDF comparison, you can compare the style, layout, penmanship, and neatness of MPTs.
• You can compare handwritten MPTs to typed MPTs.
• You can compare your MPT to the best MPTs from other states (starting with the July 2014 comparison).

To maintain the anonymity of the examinees, all identifying information has been redacted and each examinee is assigned a random 3-digit ID. In Internet Explorer or Firefox, you can type Ctrl+F and search for an ID to highlight each instance (in Firefox, you must press the "Highlight All" button). In both browsers, you can also choose to have new windows open in tabs (under Internet Options in Internet Explorer or Options in Firefox), which prevents new windows from being created and keeps all the comparisons in organized tabs. Checking all the answers from a single examinee is useful for seeing how that examinee approached the exam as a whole rather than judging from a single answer.

In the MPT Comparison, each and every essay is compared to every other essay. There are three columns on the report - a "Matching Words" column, a "Text Comparison" column, and a "PDF Comparison" column. The "Matching Words" column reports the number of perfectly matching words (a minimum of two words) that have been marked in the pair of documents. It also counts phrases that would otherwise be too short but qualify as matching because they bridge over a non-matching word (e.g. a match would occur if one document said "New York constitution" and the other document said "New York State constitution"). In such cases, the bridging word will appear in green. Each "Matching Words" row item has 3 subparts: (a) the number of matching words; (b) what percentage of the first document is accounted for by these matching words; and (c) what percentage of the second document is accounted for by these matching words. For example, consider the following row:

Matching Words | MPT Text comparison | MPT PDF comparison
123 [16%,34%] | Feb2014-Essay-MPT-Score 65.62-Wrote-ID 003 vs. Feb2014-Essay-MPT-Score 25.19-Typed-ID 023 | Feb2014-Essay-MPT-Score 65.62-Wrote-ID 003 vs. Feb2014-Essay-MPT-Score 25.19-Typed-ID 023
This means that there are 123 word matches (minimum of two words) between the two documents being compared on that row. The first document is an answer to the MPT from the Feb 2014 exam that was handwritten by Examinee ID #003 and received a score of 65.62. The second document is an answer to the MPT from the Feb 2014 exam that was typed by Examinee ID #023 and received a score of 25.19. The 16% means that 16% of the first document (the 65.62 MPT) consists of words that match the second document (the 25.19 MPT). The 34% means that 34% of the second document (the 25.19 MPT) consists of words that match the first document (the 65.62 MPT). Finding a high level of commonality in the matching words of passing MPTs can help examinees develop a better vocabulary for those issues. As I often say, good MPTs look like other good MPTs.

The "Text Comparison" column shows the text matches between the two MPTs you select. In the reports, perfect matches are indicated by red-underlined words, while bridging but non-matching words are indicated by green-italicized-underlined words. The matching phrases are links: if you click on a matching phrase, you will be taken to the equivalent phrase in the other document of the pair.
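To make the mechanics concrete, the two-word matching rule and the two percentages can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual comparison tool: the function names are mine, the sketch omits the bridging refinement described above, and the document lengths in the second example are hypothetical values chosen to be consistent with the 16%/34% figures.

```python
def matching_word_count(doc_a, doc_b):
    """Count words in doc_a covered by an exact two-word phrase that also
    appears in doc_b (the "minimum of two words" rule; bridging omitted)."""
    a, b = doc_a.lower().split(), doc_b.lower().split()
    b_bigrams = {(b[i], b[i + 1]) for i in range(len(b) - 1)}
    covered = [False] * len(a)
    for i in range(len(a) - 1):
        if (a[i], a[i + 1]) in b_bigrams:
            covered[i] = covered[i + 1] = True
    return sum(covered)

def match_percentages(matching_words, words_in_first, words_in_second):
    """Express a match count as a percentage of each document's length,
    mirroring the "[16%,34%]" notation in the Matching Words column."""
    return (round(100 * matching_words / words_in_first),
            round(100 * matching_words / words_in_second))

# "the new york" is a shared phrase of two or more words, so three words
# of the first document count as matching.
print(matching_word_count("the new york constitution applies here",
                          "under the new york state law"))  # → 3

# Working backwards from the sample row: 123 matching words that make up 16%
# of the first MPT and 34% of the second imply hypothetical lengths of
# roughly 769 and 362 words.
print(match_percentages(123, 769, 362))  # → (16, 34)
```

A high match count between two passing MPTs points to shared issue vocabulary, which is exactly the pattern worth emulating.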

The "PDF Comparison" column shows the PDFs of the two essays you select side-by-side.

In the tables, the hyperlink naming convention operates as follows: Exam-Essay-Score-Typed/Written-ID

For example, the naming convention "Jul2012-MPT1-Score 33.92-Typed-ID 052" means that this is an MPT from the July 2012 exam, the scaled score of the essay was 33.92, the examinee typed the essay, and the randomly generated ID of the candidate was 052. You can use the ID to differentiate examinees in instances where multiple examinees have the same score on an MPT.
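The naming convention is regular enough to parse mechanically. Here is a sketch (the regular expression and function name are mine, not part of the Comparison Bank itself):

```python
import re

# Parses the Exam-Essay-Score-Typed/Written-ID hyperlink naming convention.
NAME_PATTERN = re.compile(
    r"(?P<exam>[A-Z][a-z]{2}\d{4})"          # exam, e.g. Jul2012
    r"-(?P<essay>.+?)"                       # essay/MPT label, e.g. MPT1
    r"-Score (?P<score>\d+\.\d+)"            # scaled score, e.g. 33.92
    r"-(?P<mode>Typed Edited|Typed|Wrote)"   # how the answer was produced
    r"-ID (?P<id>\d{3})"                     # random 3-digit examinee ID
)

def parse_name(name):
    match = NAME_PATTERN.fullmatch(name)
    return match.groupdict() if match else None

print(parse_name("Jul2012-MPT1-Score 33.92-Typed-ID 052"))
# → {'exam': 'Jul2012', 'essay': 'MPT1', 'score': '33.92', 'mode': 'Typed', 'id': '052'}
```

Note that "Typed Edited" must be tried before "Typed" in the alternation, since the shorter label is a prefix of the longer one.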

In a few instances, the Typed/Written status of an essay is "Typed Edited." This means that the MPT is a typed MPT, but it is not in its original format because the examinee edited it. For these MPTs, you must keep in mind that what you are seeing may not be exactly what the bar grader saw with regard to layout or format. For example, in the July 2010 Analysis, IDs 048 and 051 were "Typed Edited."

If the MPT is an above average answer released by NY BOLE, one of the following appears in place of an ID:
NYBOLE-Released Answer 1
NYBOLE-Released Answer 2

Please note that these released above average answers do not have scores; these essays likely received scores between 80 and 85. A recent enhancement to this MPT comparison is that I now include MPTs from other jurisdictions. For example, if another state uses the same MPT for its exam and releases above average exemplars, I include these MPTs in the MPT comparison (e.g. Arkansas is AR1, Georgia is GA1-GA3, Indiana is IN1, Maryland is MD1-MD2, Minnesota is MN1, Ohio is OH1, and Texas is TX1-TX4). I estimate the scores for these MPTs based on the top NY MPT scores and then adjust based on the size of the pool of examinees (e.g. the pool of TX MPTs is 27% the size of NY's pool of MPTs, while the other states are even smaller - MD 13%, GA 12%, OH 11%, MN 5%, and AR 2%). Accordingly, the best MPT from a smaller jurisdiction would probably earn a lower score than the best MPT from a larger pool such as New York.

The reason essays/MPTs are released by NY BOLE is so examinees can identify deficiencies in their essays/MPTs. A 1995 bill to amend the Judiciary Law stated that it is in New York State's "best interest to insure that all bar applicants are given an equal opportunity to pass the NYS Bar Examination. Disclosure of past testing materials and applicant examinations allow prospective attorneys to become aware of testing subject matter and methodology so that otherwise qualified attorneys are not defeated in their attempts to pass the bar examination." A copy of the bill is here. By comparing and contrasting MPTs, you are doing what any good attorney should be doing. The more you learn about what comprises a good MPT, the better you will do on future MPTs.

I recommend that examinees use Firefox to browse the MPT Comparison. Using Firefox, examinees can install the add-on Snap Links Plus. This add-on enables examinees to select a number of links with a rectangle and open them all in new tabs. Accordingly, examinees can select all the links of a high scoring essay and then open each comparison in a separate tab in Firefox. The examinee can then press CTRL+TAB to go from tab to tab. This ends up being a very efficient way to review the essays. When an examinee is done reviewing, the examinee should go to the main tab, right click, choose "Close Other Tabs" and then start again with another high scoring essay.

A clever person learns from his own mistakes, but a wise person learns from the mistakes of others. With these Comparison Banks, you can not only learn from the mistakes of others, but also from their achievements (and avoid making those mistakes yourself).