Introduction
After decades of development, clinical laboratories have achieved a low error rate in the analytical process, outpacing the other parts of the total testing process (TTP), by focusing on analytical quality with standardized procedures and internal and external quality assessment. International accreditation bodies require laboratories to control all testing processes, focusing not only on the analytical phase but also on the preanalytical and postanalytical phases, where most errors occur. Improving the extra-analytical phase of the TTP is an important responsibility for laboratory medicine (1).
The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) “Laboratory Errors and Patient Safety” Working Group (WG-LEPS) launched a project in 2008 to define the Model of Quality Indicators (QIs). The overall goal of the project is to collect standardized data and to create a common reporting system for clinical laboratories based on these data. In the first phase of the project, QIs were defined for the preanalytical, analytical, and postanalytical phases, which are the main components of the TTP. The QI results collected from participating laboratories between February 2008 and December 2009, together with the preliminary quality specifications derived from them, were published in 2011 (2). In 2015, Plebani et al. reported the very high priority preanalytical QIs (3). In 2016, quality specifications for the postanalytical QIs, calculated from the 2012, 2013, and 2014 data, were published (4). Finally, the preanalytical and postanalytical QI specifications and estimated sigma values, determined from the data collected in 2014, 2015, and the first half of 2016, were reported (5).
Quality indicators are one of the main tools used to increase the quality of laboratory services and to ensure patient safety by reducing error rates. They are recognized as part of a laboratory improvement strategy and have proven to be suitable tools for improving and monitoring processes (4).
Six Sigma is a data-driven quality strategy that provides information about process performance and is used to improve processes. The quality assessment made with this method consists of the “define”, “measure”, “analyse”, “improve”, and “control” steps. In the “measure” step, the number of errors is converted to the number of defects per million opportunities (DPMO), and the process sigma level is calculated. The “Six” in Six Sigma refers to the ideal goal in which six standard deviations fit within the defined tolerance limits of a process; anything beyond these tolerance specifications is considered a defect. Evaluating laboratory processes with the Six Sigma method not only reduces errors that may affect patient health but also contributes positively to the healthcare institution’s budget by preventing unnecessary costs. In addition, calculating laboratory performance with harmonised criteria makes it possible to compare performance with other clinical laboratories worldwide (6).
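As a minimal sketch of the “measure” step only, the following Python lines convert an observed number of errors and the corresponding number of opportunities into DPMO; the function name and the counts in the example are ours and purely illustrative.

def dpmo(errors: int, opportunities: int) -> float:
    """Defects per million opportunities for a single process step."""
    return errors / opportunities * 1_000_000

# Hypothetical example: 40 rejected samples out of 150,000 received
print(round(dpmo(40, 150_000), 1))  # 266.7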
The aim of the study was to determine the current state of our laboratory’s extra-analytical phase performance by calculating the preanalytical and postanalytical phase QIs and sigma values and to compare the obtained data according to the quality specifications and sigma values reported by the IFCC WG-LEPS.
Materials and methods
This retrospective observational study was conducted in the Hospital Central Laboratory in 2019. The data of the samples rejected in our laboratory are recorded through the “laboratory error classification system” software integrated into the laboratory information system (LIS). This software standardizes the registration of rejected samples. The total number of samples accepted by the laboratory, the number of rejected samples, the reasons for rejection, the total number of samples checked for haemolysis, and the total number of samples with anticoagulant checked for clots were obtained from the LIS. Haemolysis was detected with the haemolysis index of the Advia 1800 (Siemens Corp., New York, USA) autoanalyser. Clotted samples were detected by visual inspection of the specimen.
Rejected sample frequencies were calculated both monthly and for 2019 as a whole. The target value for the “total preanalytical phase rejection frequency” was set as the average preanalytical rejection frequency of the previous year in our laboratory (0.3%).
The samples rejected in the preanalytical phase were grouped according to the reasons for rejection. According to the data obtained, the preanalytical phase was evaluated with the percentage of: “Number of samples not received / Total number of samples” (Pre-NotRec), “Number of samples collected in wrong container / Total number of samples” (Pre-WroCo), “Number of samples rejected due to haemolysis / Total number of checked samples for haemolysis” (Pre-HemR), “Number of samples clotted / Total number of samples with an anticoagulant checked for clots” (Pre-Clot), “Number of samples with insufficient sample volume / Total number of samples” (Pre-InsV), “Number of samples with inappropriate sample-anticoagulant volume ratio / Total number of samples with anticoagulant” (Pre-SaAnt), and “Number of samples with excessive transportation time / Total number of samples” (Pre-ExcTim) (7).
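For example, the haemolysis indicator “Pre-HemR” defined above is simply the number of haemolysis rejections divided by the number of samples checked for haemolysis, expressed as a percentage. A minimal Python sketch of this ratio, with purely hypothetical counts, is:

def qi_percent(errors: int, total: int) -> float:
    """Ratio-based quality indicator expressed as a percentage."""
    return errors / total * 100

# Hypothetical counts: 60 haemolysed samples out of 120,000 checked for haemolysis
print(round(qi_percent(60, 120_000), 3))  # 0.05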
The postanalytical phase was evaluated with the percentage of: “Number of reports delivered outside the specified time / Total number of reports” (Post-OutTime), “Number of critical values of inpatients notified after a consensually agreed time (from result validation to result communication to the clinician) / Total number of critical values of inpatients to communicate” (Post-InpCV), “Number of critical values of outpatients notified after a consensually agreed time (from result validation to result communication to the clinician) / Total number of critical values of outpatients to communicate” (Post-OutCV); with the turnaround time (minutes) of: “Potassium (K) at 90th percentile (STAT)” (Post-PotTAT), “International Normalized Ratio (INR) value at 90th percentile (STAT)” (Post-INRTAT), “Troponin I (TnI) or Troponin T (TnT) at 90th percentile (STAT)” (Post-TnTAT); and with the “time (from result validation to result communication to the clinician) to communicate critical values of inpatients (minutes)” (Post-InpCVT) and “outpatients (minutes)” (Post-OutCVT) (5).
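The STAT turnaround-time indicators listed above are summarized as the 90th percentile in minutes. A minimal sketch of that summary, assuming the per-sample TAT values have already been exported from the LIS and using the nearest-rank percentile definition (our assumption), is:

import math

def tat_90th_percentile(tat_minutes: list) -> float:
    """Nearest-rank 90th percentile of STAT turnaround times (minutes)."""
    ordered = sorted(tat_minutes)
    rank = math.ceil(0.9 * len(ordered))
    return ordered[rank - 1]

# Hypothetical STAT potassium TATs (minutes)
print(tat_90th_percentile([35, 40, 42, 47, 50, 52, 55, 58, 61, 70]))  # 61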
Extra-analytical phase errors are assumed not to be normally distributed. Therefore, to avoid overestimating the deviation in extra-analytical phase performance, it is recommended not to include the 1.5 standard deviation (SD) shift in the sigma calculation and to determine the DPMO values according to the short-term sigma table (6,8). In this study, DPMO values were calculated and converted to sigma values using the short-term sigma table (5,9). The preanalytical and postanalytical phase QI values were calculated using the formulas recommended by the IFCC WG-LEPS (Table 1) (5,8). The calculated QI and sigma values were evaluated, both monthly and for 2019, against the desired (50th percentile) specifications reported by the IFCC WG-LEPS (Table 1) (5,7). The sigma values were calculated from the number of errors, accepting the “desired quality specification” as the target value.
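As a hedged illustration only, the sketch below converts a QI expressed as a percentage into DPMO and then into a short-term sigma value, assuming the short-term sigma table corresponds to the standard normal quantile of the non-defective proportion plus the 1.5 embedded in that table (the convention under which 3.4 DPMO equals six sigma); the exact table in references (5,9) remains authoritative. Under this assumption, a QI of 0.0036% (36 DPMO) yields a sigma of about 5.47, in line with the value reported for “Pre-ExcTim” in the Results.

from statistics import NormalDist

def qi_to_dpmo(qi_percent: float) -> float:
    """Convert a QI expressed as a percentage of errors to DPMO."""
    return qi_percent / 100 * 1_000_000

def dpmo_to_short_term_sigma(dpmo: float) -> float:
    """Short-term sigma under the assumed convention in which
    3.4 DPMO corresponds to six sigma (normal quantile + 1.5)."""
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Illustrative check: QI = 0.0036% -> 36 DPMO -> sigma of about 5.47
print(round(dpmo_to_short_term_sigma(qi_to_dpmo(0.0036)), 2))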
The quality specifications for “Post-InpCVT” and “Post-OutCVT” were not reported by the IFCC WG-LEPS because of insufficient results (5). Our target value for both “Post-InpCVT” and “Post-OutCVT” was 30 minutes.
Results
In 2019, the laboratory received 191,831 samples, of which 643 were rejected.
Preanalytical phase: The total number of preanalytical phase errors in our laboratory was 432, the total number of samples checked for haemolysis was 130,188, and the total number of samples with an anticoagulant checked for clots was 53,504 in 2019. The total preanalytical phase rejection frequency was 0.22%; in December, it was 0.33%. According to the reasons for rejection in 2019, the “Pre-ExcTim” QI and sigma values were 0.0036 and 5.47, and the “Pre-WroCo” QI and sigma values were 0.02 and 5.11, respectively (Table 1). In December, the “Pre-ExcTim” QI and sigma values were 0.01 and 5.34, and the “Pre-WroCo” QI and sigma values were 0.03 and 4.98, respectively (Table 2).
Postanalytical phase: The “Post-OutTime” QI and sigma values were 0.34 and 4.21, and the “Post-PotTAT” QI (minutes) and sigma value were 56 and 3.84, respectively. The “Post-INRTAT”, “Post-TnTAT”, “Post-InpCVT” and “Post-OutCVT” QI values (minutes) were 36, 52, 8 and 10, respectively (Table 1). The “Post-TnTAT” QI (minutes) in January, February, March, June and November was above the desired annual target value (Table 3). There were no errors in “Post-InpCV” and “Post-OutCV”, either for 2019 overall or in any month (Table 4).
Discussion
According to the data of our study, the “total preanalytical phase errors” sigma value was 4.34 in 2019. In 2019, the “Pre-ExcTim” and “Pre-WroCo” QI and sigma values were unacceptable according to the desired specifications reported by the IFCC WG-LEPS. When QI and sigma values were calculated from the monthly data, “Pre-ExcTim” was unacceptable in February, March, August, and December, and “Pre-WroCo” was unacceptable in all months except January, June and August, according to the annual target value.
The ISO 15189:2012 standard recommends monitoring all critical aspects of the TTP and comparing the results with data entered by different laboratories, taking into account all events that caused a particular error (10).
There is no monthly or annual target value for the “total preanalytical phase rejection frequency” in the IFCC WG-LEPS reports (7). In our laboratory, we begin the preanalytical phase evaluation by comparing the monthly “total preanalytical phase rejection frequency” with the average total preanalytical phase rejection frequency of the previous year. If the monthly frequency is higher than the previous year’s average, we group the rejections according to their reasons and then evaluate the QI and sigma values against the annual desired target values reported by the IFCC WG-LEPS. This approach, in which we evaluate our preanalytical phase data monthly before making the annual evaluation, gives us an opportunity to intervene early at the error sources and prevents errors from accumulating. In December, the “total preanalytical rejection frequency” was higher than the average preanalytical rejection frequency of the previous year in our laboratory. When we evaluated December’s data according to the reasons for rejection, “Pre-ExcTim” and “Pre-WroCo” were unacceptable according to the annual target value.
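A schematic of this monthly screening step, with the function name and threshold chosen by us for illustration (the 0.3% default corresponds to our previous year’s average), might look as follows:

def needs_drill_down(monthly_rejected: int, monthly_total: int,
                     previous_year_avg_pct: float = 0.3) -> bool:
    """Flag a month for reason-by-reason QI and sigma review when its total
    preanalytical rejection frequency exceeds the previous year's average."""
    monthly_pct = monthly_rejected / monthly_total * 100
    return monthly_pct > previous_year_avg_pct

# Hypothetical example: 60 rejections out of 17,000 samples (about 0.35%)
print(needs_drill_down(60, 17_000))  # True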
In the postanalytical process evaluation, the “Post-OutTime” QI and sigma values were unacceptable. Based on the monthly data, the “Post-OutTime” QI value was unacceptable in all months, and the “Post-OutTime” sigma values were unacceptable in March, April, June, August, September and December according to the annual desired target value. When we examined our data in detail in order to implement corrective and preventive actions, we saw that the reports delivered outside the specified time were clustered on certain days. The number of reports delivered outside the specified time increased because of device failure in March and April and the need for extra maintenance in June, August, September, and December.
Shewhart divides the sources of variability in processes into two groups: general causes (chance or common causes) and special causes (assignable causes). While general causes are always present and predictable, special causes occur infrequently and have significant effects on their own (11). Device breakdown and the need for extra device maintenance are special sources of variation (11,12). In laboratories with more than one autoanalyser, reporting times do not change significantly during a device failure or device maintenance, because the tests can be analysed on another, functioning autoanalyser. However, for laboratories that have only one autoanalyser, device malfunctions and unforeseen maintenance requirements are important time-related error sources. For this reason, it may be beneficial, in studies aimed at the harmonization of quality specifications, to present the data according to subgroups that take into account the capacities of the laboratories or the number of analysers.
Our “Post-PotTAT” QI value was unacceptable. When the data were evaluated monthly, the “Post-PotTAT” QI was unacceptable in all months relative to the annual target value. The “Post-TnTAT” QI value was acceptable relative to the target value; however, when evaluated monthly, “Post-TnTAT” was unacceptable in January, February, March, June, and November compared with the annual target. The sigma values of “Post-PotTAT”, “Post-INRTAT” and “Post-TnTAT” were not determined in the IFCC WG-LEPS report because they cannot be expressed as a percentage (5).
The maximum time target for critical value notification is 30 minutes in our laboratory. With the software we added to our LIS, the system sends an information message to the users’ mobile phones when there is a critical value in the report. This software has prevented errors in the “Post-InpCV” and “Post-OutCV” indicators.
In conclusion, in this study, in which we evaluated the extra-analytical phase of our laboratory, the “Pre-ExcTim”, “Pre-WroCo” and “Post-PotTAT” QIs were unacceptable. Laboratory medicine will become a safer diagnostic discipline through error reduction strategies that are a routine part of the quality management programs implemented in clinical laboratories. In this respect, we think that the approach followed in our study, in which the extra-analytical phase is evaluated against the latest quality specifications and sigma values published by the IFCC WG-LEPS, will contribute to improving the quality of laboratory medicine.