DNB CET Scoring & Results: 3 Tips and Insights for the DNB July 2016 Exam

The DNB CET is a 300-MCQ exam, and multiple question papers are used for the different sessions and days between 1st and 4th July 2016. A process of linking, equating and scaling is used to arrive at the final results.

Linking: When DNB CET tests are purposefully built to be different (e.g. 8 different exam papers over 4 days), there will be major differences in their difficulty levels and content. A linking process is conducted to establish a relationship between the scores on the different tests. When DNB CET tests contain very different content, however, linking alone will not be adequate for all purposes of scoring.

Equating: Equating the various DNB CET papers is a specific form of linking. It is the process used to make the different DNB CET tests conducted over 4 days interchangeable: it adjusts for the difficulty level of each paper, so that after equating the score forms can be used interchangeably.
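
To give a rough sense of what equating does (the NBE's actual statistical procedure is not published, so this is only an illustrative sketch), the Python snippet below applies a simple linear "mean-sigma" equating: it lifts the raw scores of a harder paper onto the scale of an easier one. The function name mean_sigma_equate and all score values are hypothetical.

```python
import numpy as np

def mean_sigma_equate(scores_x, scores_y):
    """Map raw scores from paper X onto the scale of paper Y so that the
    equated scores share paper Y's mean and standard deviation
    (linear 'mean-sigma' equating)."""
    mx, sx = np.mean(scores_x), np.std(scores_x)
    my, sy = np.mean(scores_y), np.std(scores_y)
    slope = sy / sx
    intercept = my - slope * mx
    return slope * np.asarray(scores_x) + intercept

# Hypothetical raw scores: paper X was a little harder, so its scores run lower.
paper_x = [180, 195, 205, 220, 240]
paper_y = [190, 205, 215, 230, 250]
print(mean_sigma_equate(paper_x, paper_y))  # paper X scores lifted onto paper Y's scale
```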

Scaling: Scaling is the process of converting raw scores onto a different scale. If a candidate receives a raw score of 205 in DNB CET, it is impossible to know how well he or she did without knowing how that score compares with those of other candidates. Item Response Theory (IRT) is used to scale the results. Rather than relating scores on two DNB exams to each other, scaling transforms scores on one test to a different metric that is more easily interpreted and understood. At this stage there is only one score scale for everyone.
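
As a purely illustrative sketch of what scaling means in practice, the snippet below maps an IRT ability estimate onto an assumed reporting scale. The mean of 150 and spread of 50 are invented for illustration and are not the actual DNB CET reporting scale.

```python
def scale_score(theta, mean=150, sd=50):
    """Convert an IRT ability estimate (roughly -3 to +3) into a
    reported score on a hypothetical reporting scale."""
    return round(mean + sd * theta)

print(scale_score(0.0))   # average candidate  -> 150
print(scale_score(1.2))   # stronger candidate -> 210
print(scale_score(-0.8))  # weaker candidate   -> 110
```
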
Why is IRT used for DNB CET?

GateToMedicine interviewed experts in educational measurement to simplify the process for candidates' better understanding. An IRT model describes a probabilistic relationship between a candidate's response to a test item and a latent trait such as ability, while also accounting for response behaviours such as guessing.

What do you need to know?

There are four parameters in the IRT scaling process that are important to know: the item parameters a, b and c, and the candidate ability parameter θ (theta).

Item Difficulty (parameter b): This is the point on the ability scale at which a candidate has a 50% probability of answering the item correctly. The candidate's ability parameter θ sits on the same scale: a candidate whose ability equals the item's difficulty has a 50% chance of getting it correct, and candidates with higher ability have a higher chance (see the sketch below). A b parameter of 0 therefore means average difficulty, and b values range from about -3 to +3, the scale on which both ability and difficulty are assessed. This process is similar to the one used on G2M genius.
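
The sketch below assumes the standard two-parameter logistic (2PL) response function (not necessarily the exact model the NBE uses) and shows the 50% property numerically.

```python
import math

def p_correct(theta, b, a=1.0):
    """Two-parameter logistic (2PL) probability that a candidate of
    ability theta answers an item of difficulty b (discrimination a)
    correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Ability equal to difficulty -> exactly a 50% chance.
print(p_correct(theta=0.5, b=0.5))   # 0.5
# Higher ability on the same item -> higher chance.
print(p_correct(theta=1.5, b=0.5))   # ~0.73
```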

Item discrimination (parameter a): In simple terms, this determines how much a question (item) contributes to estimating a candidate's ability. For example, a newly introduced question might be rated close to 3 while a frequently repeated question might be rated close to 1. The higher an item's discrimination rating, the more a correct response on it raises the candidate's ability estimate (see the sketch below). Hence a continuous supply of new questions from important topics matters for achieving high scores.
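
As an illustrative sketch, again assuming a standard 2PL model rather than the NBE's exact implementation, the item information function below shows why a sharply discriminating item contributes more to a candidate's ability estimate.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta:
    how much the item contributes to pinning down that candidate's ability."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

# Near its difficulty, a sharply discriminating item (a = 3) is far more
# informative about ability than a weakly discriminating one (a = 1).
print(item_information(theta=0.0, a=3.0, b=0.0))  # 2.25
print(item_information(theta=0.0, a=1.0, b=0.0))  # 0.25
```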

Pseudo-guessing parameter (parameter c): This is the probability that a candidate with very low ability gives a correct response to the MCQ. It indicates how much guessing may be involved in a response, and questions (items) with a high c value are given less weight in the ability assessment.
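
Putting the three item parameters together, the sketch below uses the standard three-parameter logistic (3PL) model, which is the usual way a, b, c and θ are combined in IRT; whether the NBE uses exactly this form is an assumption here.

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic (3PL) probability of a correct response:
    difficulty b, discrimination a, and pseudo-guessing floor c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Even a very weak candidate keeps roughly the guessing floor of a
# four-option MCQ (c = 0.25).
print(p_correct_3pl(theta=-3.0, a=1.0, b=0.0, c=0.25))  # ~0.29
print(p_correct_3pl(theta=0.0,  a=1.0, b=0.0, c=0.25))  # 0.625
```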