Guest Post: Analyzing the results of the Patent Agent examination

Rajiv Kr. Choudhry, an Indian IP attorney, has come forth with a brilliant statistical analysis of the results of the patent agent examination. Rajiv holds an LL.M. in IP law from the George Washington University Law School, an LL.B. from the University of Delhi, Law Center-II, an MBA from the FORE School of Management, and a B.E. in Electronics from Nagpur University. Before moving to law, he worked at ST Microelectronics as a project co-ordinator.

A chart containing his analysis can be accessed here. A copy of the same is available on the SpicyIP.com website.


by Rajiv K. Choudhry

This post was prompted by the comments received on the post for results of the patent agent examination 2010.

A few words about the data analysis in the accompanying chart:
The patent office stated that 1019 people appeared for the exam. However, the fully collated data shows 1018 test takers (1274 registered, 256 absent). The patent office's figure is nonetheless assumed to be correct.

The pass percentages have been provided for each city with and without the absentees included. An average of marks, a standard deviation measure, and maximum and minimum marks in each part of the exam have also been included.

A measure of standard deviation (stddev) was also computed in order to see how much the scores vary from the average. A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values. As such, a low standard deviation would indicate uniformity, and a large standard deviation would indicate disparity.
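As a quick illustration of this point (with made-up marks, not the actual exam scores), two sets of marks with nearly identical averages can have very different standard deviations:

```python
from statistics import mean, stdev

# Made-up marks for two hypothetical centres; both average near 51,
# but the second set is far more spread out.
uniform_marks = [48, 50, 51, 52, 53]
spread_marks = [20, 35, 51, 67, 82]

print(mean(uniform_marks), round(stdev(uniform_marks), 1))
print(mean(spread_marks), round(stdev(spread_marks), 1))
```

A stddev near 2 says almost everyone scored close to the average; a stddev near 25 says the scores ranged widely even though the average is about the same.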

The average score for paper 1 and paper 2 in all 4 centers is remarkably similar: around 51 and 45 respectively. This may reflect the objectivity of the question papers and a similar understanding of the questions amongst test takers. This is also evidenced by the standard deviation: the stddev for scores in Paper-1 and Paper-2 in Delhi, Chennai and Mumbai is very similar, around 14.5.

The standard deviation for the viva is an entirely different matter for Mumbai, Chennai and Delhi. One would expect the standard deviation for the viva to be similar to that of paper-1 and paper-2. Statistically, this is an anomaly and might reflect the subjectivity of the viva portion of the exam. Kolkata is the exception that highlights this point: there, the stddev for the viva is similar to that for paper-1 and paper-2.
The stddev is even more remarkable when considered for individual cases: candidates have failed overall (low marks in paper-1 and paper-2) even when their personal viva score was the highest.

Another reason the stddev is important is to see which is a more accurate predictor of the final result: the paper-1/paper-2 marks or the viva marks. The average marks in Delhi for passing candidates in paper 1, paper 2 and the viva are 64, 62 and 70 respectively. For candidates who failed in Delhi, the averages are 47, 44 and 54 respectively. There is a strong correlation between paper-1/paper-2 marks and the pass/fail result, but only a weak to very weak correlation between the viva marks and the overall result.
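The individual mark sheets are not reproduced here, but the kind of correlation check described above can be sketched as follows. The candidate numbers below are invented for illustration only; they merely mimic the pattern the post reports (paper marks tracking the result closely, viva marks much less so):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented candidates (not the real data): paper-1 marks, viva marks,
# and the overall result coded as 1 = pass, 0 = fail.
paper1 = [64, 62, 66, 47, 44, 45, 63, 46]
viva   = [70, 55, 68, 54, 69, 53, 71, 66]
result = [1, 1, 1, 0, 0, 0, 1, 0]

print(round(pearson(paper1, result), 2))  # close to 1: strong
print(round(pearson(viva, result), 2))    # much smaller: weak
```

A coefficient near 1 means the marks almost fully determine the result; a coefficient near 0 means they tell us little about who passed.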

A complete gender-based analysis was not performed, but a small sample of the passing candidates in Delhi indicates that for every 3 males who passed, 4 females qualified. To reiterate, this is a result from a small sample and not the overall data. It may also be biased because the analyst inferred gender from candidates' names when reading the results. This result is nothing new: females outperform males in most competitive examinations in the country. :-)

The gender of the Examiner is immaterial because the end result (pass/fail) depends on the paper-1 and paper-2 marks. An Examiner may boost a candidate's final result by a few marks, but that is the nature of an interview. Overall, the viva is not the deciding factor.

It is my personal opinion that, because a major part of the communication between a patent agent and the patent office is written, the exam should also be a written exam in which patent office procedures are tested. This would be a truly ‘objective’ exam.

Hopefully this post will generate a debate about the need for a viva in the patent office exam.

6 thoughts on “Guest Post: Analyzing the results of the Patent Agent examination”

  1. A really encouraging statistical analysis indeed of the entire data population that is the subject matter of these debates. I guess it is quite late to comment on this topic, as I did on the earlier one, but better late than never, so I have copied the same here for better limelight. The viva issue has been widely debated, as I saw on the comment thread of the earlier topic, but I doubt anyone has prominently pointed out that the majority of failures are due to flunking Paper II. Since the PA exam is supposedly not a competitive exam with elimination as its primary concern, I personally feel that Paper II has some serious technical problems that need to be brought to notice:

    1. The paper is very subjective and low-scoring, sounding the death knell for candidates who have otherwise scored excellently.

    2. It is awfully lengthy, as most would agree; even the IPO seems to agree, since they extended the exam time this year.

    3. A major focus of the paper is drafting, in which one good draft is enough to consume your whole exam time, although the IPO differs, saying they are satisfied with just the structure.

    4. This time they effectively gave two drafts (most would differ, but I presume even writing a good abstract of an invention takes quite some time).

    5. It includes case studies, adding to the pain of time management in such a paper, as individual cases require elaborate explanation to make up a better answer.

    6. The paper is very open-ended: everybody may have different ways of answering, which cannot be evaluated as one being right and the other completely wrong, yet the IPO finally sets the law of the land, ousting all other views.

    Well, this is all I could remember and jot down. Further, are there any set rules of evaluation, or do they change every year? Last year everybody who scored 45+ but less than 50 was given grace marks, which pulled many through, but this was not given this year. Lastly, to conclude, this comment should not be taken as criticizing the IPO but as a request to become more structured and transparent in order to avoid all such debates.

  2. Debabrata,

    Any backup/data/link/source for the statement that last year, people at 45 were moved to 50???

  3. Apart from 3 scorers of 46, there are many passing candidates who had the marginal score of 50. The grace-marks point was my presumption, made on analysing the data.

  4. Well, that presumption would also imply that last year Kolkata was the centre which gave out grace marks, while this year the Kolkata centre probably has the fewest candidates passing the exam.

    @debabrata I don’t think it is fair to pass such judgements based on gut feel, although you might be right. It would not be a fair comment. It might even give rise to libel.

    Anon.

  5. @ Anon .. true, presumptions can’t be relied upon .. I take the statement back .. further, I disclaimed any malicious intent to commit libel in my concluding remark 🙂

    But I still wanted to check whether anyone had a similar presumption, or could cite an authority to support it based on the previous year’s results, or perhaps a statistical analysis of the previous year’s results like the one in the topic above.
