Kappa statistics are often used to test interrater reliability. The importance of interrater reliability lies in the fact that it represents the extent to which the data collected in a study are correct representations of the variables measured. Measurement of the extent to which data collectors assign the same score to the same variable is called interrater reliability. Although many methods exist for measuring interrater reliability, it was traditionally measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen criticized the use of percent agreement because of its inability to account for chance agreement. He introduced Cohen's kappa, which was designed to account for the possibility that raters, owing to uncertainty, actually guess on at least some variables. Like most correlation statistics, kappa can range from −1 to +1. While kappa is one of the most commonly used statistics for testing interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned.
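
To make the distinction concrete, the following minimal sketch (with hypothetical ratings, not data from any study discussed here) computes both percent agreement and Cohen's kappa for two raters scoring the same items; the gap between the two values is the chance agreement that kappa removes.

```python
# Sketch: percent agreement vs. Cohen's kappa for two hypothetical raters.
from collections import Counter

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
n = len(rater_a)

# Percent agreement: number of matching ratings divided by total ratings.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Agreement expected by chance, from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2

# Cohen's kappa corrects the observed agreement for chance agreement.
kappa = (observed - expected) / (1 - expected)
print(f"percent agreement = {observed:.2f}, kappa = {kappa:.2f}")
```

For these ratings the two raters agree on 8 of 10 items (80 percent agreement), yet kappa is only about 0.58, because roughly half of that agreement would be expected by chance alone.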

Cohen's suggested interpretation may be too lenient for health-related studies, because it implies that a value as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in health studies are suggested. When comparing two methods of measurement, it is of interest not only to estimate both the bias and the limits of agreement between the two methods (inter-method agreement), but also to assess these characteristics for each method within itself. It is quite possible that agreement between the two methods is poor simply because one method has wide limits of agreement while the other's are narrow. In that case, the method with narrow limits of agreement would be superior from a statistical point of view, while practical or other considerations might change that appraisal. In any case, what constitutes narrow or wide limits of agreement, or large or small bias, is a matter of practical judgment. Kappa is an index that considers observed agreement with respect to a baseline agreement. However, investigators must consider carefully whether kappa's baseline agreement is relevant to the particular research question. Kappa's baseline is frequently described as agreement due to chance, which is only partially correct. Kappa's baseline agreement is the agreement that would be expected because of random allocation, given the quantities specified by the marginal totals of the square contingency table.
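
As a rough illustration of that baseline, the sketch below (using a hypothetical 2x2 contingency table, not data from the text) computes the agreement expected from random allocation given the marginal totals, and the kappa that results from comparing it with the observed agreement on the diagonal.

```python
# Sketch: kappa's baseline agreement derived from the marginal totals of a
# square contingency table. Rows are rater A's categories, columns rater B's.
import numpy as np

table = np.array([[45, 15],
                  [25, 15]])          # hypothetical counts per category pair
n = table.sum()

row_totals = table.sum(axis=1)        # rater A's marginal totals
col_totals = table.sum(axis=0)        # rater B's marginal totals

# Observed agreement: proportion of items on the diagonal (both raters agree).
p_o = np.trace(table) / n

# Baseline agreement expected from random allocation given the marginals.
p_e = np.sum(row_totals * col_totals) / n**2

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed = {p_o:.2f}, baseline = {p_e:.2f}, kappa = {kappa:.2f}")
```

In this example the observed agreement is 0.60 and the baseline computed from the marginals is 0.54, so kappa is only about 0.13; whether that baseline is the relevant reference point is exactly the judgment investigators must make.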