In metrology, machine vision systems are often utilized for non-contact inspection. The most important phase in ensuring measurement accuracy is camera calibration and the estimation of pixel measurement errors, which establish the correspondence between image coordinates and object coordinates. Multiple calibration techniques improve the effectiveness of machine vision systems; however, a number of factors introduce variation into the camera calibration procedure, which must therefore be optimized. This study explains a novel 'Cyclic-Lead-Follower' statistical methodology proposed for camera calibration and measurement to estimate pixel measurement errors, employing four slip gauges for measurement. Several multi-criteria decision-making techniques, including WSM, WPM, WASPAS, and TOPSIS, were used to optimize the results of the proposed Cyclic-Lead-Follower method. Compared with the traditional calibration method, the proposed Cyclic-Lead-Follower method, which employs the exponential moving average statistical method, improves the accuracy of the camera calibration and measurement system. The proposed calibration method produces lower exponential moving average values than the traditional calibration method, with an average reduction in the exponential moving average percentage error of approximately 46%. In a validation experiment of the Cyclic-Lead-Follower method, the use of the exponential moving average decreased the measurement percentage error, which was estimated to be 0.57%. The proposed method can be used in machine vision systems due to its robustness, accuracy, and cost-effectiveness.
1. Introduction
The first step in any machine vision system is to capture an image of the object using a camera. From there, the image goes through a series of processing steps to get the final result. To get dimensional information from a captured image, it is essential to establish a relationship between an object and the obtained image. Establishing the relationship between an object and its image is known as camera calibration. The metrology field relies on machine vision systems for non-contact inspection. Camera calibration is thus the most important step in determining the accuracy of measurements. For camera calibration, every object to be measured is defined in a 3D representation in the 'World Coordinate System (WCS)', whereas the obtained image is defined in a 2D representation in the 'Image Coordinate System (ICS)'. The relation between the ICS and the WCS is established through mathematical approaches.
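As background (a standard formulation included here for context, not an equation from this article), the ICS–WCS relationship is commonly expressed through the pinhole camera model, $s\,[u\ v\ 1]^{T} = K\,[R \mid t]\,[X_w\ Y_w\ Z_w\ 1]^{T}$, where $(u, v)$ are image coordinates, $(X_w, Y_w, Z_w)$ are world coordinates, $s$ is a scale factor, $K$ contains the intrinsic parameters, and $[R \mid t]$ contains the extrinsic parameters.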
There have been several studies of different approaches to camera calibration, including 'self-camera calibration', 'camera calibration by active vision', 'linear and non-linear methods', 'traditional camera calibration using a calibration piece', and so on [1–3]. Extensive research and development into camera calibration have led to its adoption in domains as diverse as 'machine vision', 'biomedicine', 'visual surveillance', and 'mobile robot navigation' [3]. As previously mentioned, the object is represented in 3D while the image is represented in 2D; calibration, however, is affected by different sets of parameters: intrinsic parameters relate to the 2D image representation, whereas extrinsic parameters relate to the 3D representation of the object. Machine vision systems and camera calibration remain notable research areas, and a few representative studies are discussed below. For color calibration, Menesatti et al [4] employed the 'Thin-Plate Spline Interpolation' method and evaluated it in comparison to a commercial calibration system and partial least squares analysis. The results showed the Thin-Plate Spline method's high efficiency, which has the potential to reshape the area of color quantification in the food sciences. Deng et al [5] described a camera calibration model that incorporates geometric parameters and lens distortion effects; the suggested algorithm improves performance in visual identification tasks by correctly optimizing camera settings using 'particle swarm optimization' and 'differential evolution'. Lee et al [6] proposed an efficient camera-based color calibration technique for tiled display systems that is position-dependent and applied independently to each sub-display. The proposed method can reduce both the spatial non-uniformity within each sub-display and the differences in color and luminance across sub-displays. To streamline extrinsic calibration efforts and increase measurement precision for shape data, Chen et al [7] proposed an improved calibration technique for dual camera-one projector pairs that was verified through experimental findings. To inspect 'Printed Circuit Boards (PCB)', Heinemann et al [8] used 'ultra-close range normal case photogrammetry', in which the picture is taken with a close-up camera; 'spacer rings' and 'extension tubes' were used to shorten the distance and increase the magnification during calibration, and the suggested method was found to be effective for up-close inspection of the items. Korkalo et al [9] demonstrated an auto-calibration technique for a network of embedded depth cameras used for tracking people. Observations were used to determine the topology of the sensor network and the initial transformations; the technique then refines the parameters using global optimization and flexible transformations, increasing accuracy and precision. The accuracy of the method was examined using real-world data sets, and the results were satisfactory. For multi-spectral cameras, Wang et al [10] proposed a new color calibration technique that takes into consideration the impact of the light source on the output values of the camera. In comparison to conventional methods, the method greatly lowers color calibration errors by normalizing the 'Red, Green, and Blue (RGB)' values and creating a calibration model in chromaticity space.
Schweikert et al [11] described a calibration technique that reduces sample motion error and the associated measurement uncertainty for pixel-wise in situ calibration in high-accuracy thermography of moving targets with locally varying emissivity. To enhance camera calibration for high-precision measurement, Lü et al [12] discussed several techniques, which include 'adaptive gamma correction', 'sub-pixel corner extraction', 'adaptive weight', and 'mutation particle swarm optimization'. They observed more precise camera parameters with an 'average calibration error' of 0.038 pixels.
The literature review mentioned above provides an overview of the various camera calibration techniques and studies. Various techniques have been employed in recent years to optimize the outcomes, but 'Multi Criteria Decision Making (MCDM)' techniques have proven to be effective and are widely used; a few of them are discussed here. Zavadskas et al [13] reviewed several MCDM techniques developed in recent years and highlighted the need for a thorough investigation of the hybrid methodologies used in various MCDM methods. To test the robustness and reliability of MCDM methods, Maliene et al [14] performed sensitivity analysis on five MCDM techniques, including the 'Weighted Product Model (WPM)', 'Weighted Sum Model (WSM)', 'Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)', and 'Complex Proportional Assessment (COPRAS)', as well as the 'revised Analytic Hierarchy Process (rAHP)'. Their results show that variation in how the alternatives' relative importance is rated has a direct impact on ranking uncertainty. For web service selection, Serrai et al [15] used 'Simple Additive Weighting (SAW)', 'Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR)', TOPSIS, and WPM, where the 'Analytic Hierarchy Process (AHP)' method was used to compute the weights. The authors created a new normalization technique, called the 'Optimized Method of Reference Ideal (OMRI)', to normalize data by value constraints, and verified the findings through numerous simulations on a real dataset. Rana and Patel [16] used the WPM and TOPSIS methods to identify the best location for a small hydropower project. The AHP technique was used to assess the consistency of the decision matrix, the WPM and TOPSIS methods were used for the ranking, and the experiment's outcomes were satisfactory. Yildiz [17] experimentally studied the effect of electrical parameters on electrode wear ratio and material removal rate in mesoscale electrical discharge drilling. The best machining outcome was determined using VIKOR, TOPSIS, and 'Grey Relation Analysis (GRA)' techniques, and inconsistencies were eliminated by ranking the performance scores using the weighted sum and weighted product methods. Kabassi [18] applied four MCDM methods (SAW, WPM, TOPSIS, and 'Preference Ranking Organization Method for Enrichment Evaluation II (PROMETHEE II)') to evaluate environmental education programs and compare their performance. According to the sensitivity analysis, the TOPSIS method was more sensitive, and the SAW method was more robust. To choose the finest indoor environment among six apartments, Zavadskas et al [19] used the 'Weighted Aggregated Sum Product Assessment (WASPAS)' method in conjunction with a method of 'Multi-Attribute Decision Making' using an optimal alternative. The experimental results demonstrated that the WASPAS method can quickly determine the best options, which enhances decision-making. The experimental findings of Abiola and Oke [20] demonstrated that the Taguchi technique and MCDM methods can be combined; they used the Taguchi method together with WSM, WPM, and WASPAS to minimize downtime in a manufacturing process and, based on the results, recommended the WASPAS technique. To optimize the 'friction stir spot welding' parameters of Al 6061-T6 incorporated with silicon carbide, Chaudhary et al [21] used a hybrid WASPAS-Taguchi technique, with increased strength and hardness of the welded joint as the desired outcomes. The hybrid WASPAS-Taguchi technique was effective, according to the experimental results.
A review of the literature demonstrates how crucial the camera calibration procedure is for machine vision-based measurement systems, since it has a direct impact on measurement accuracy. Various researchers are developing models for camera calibration. However, these models fall short in addressing camera calibration errors, estimating measurement uncertainty, and handling camera lens distortion errors in the X and Y axes. Furthermore, most of the proposed methods address camera calibration alone. As a result, a method that handles errors in calibration, camera lens distortion, and object measurement still needs to be developed, and these errors still require optimization. While MCDM techniques offer many benefits for optimizing systems, camera calibration and measurement in machine vision systems still need to be optimized. Instead of developing new mathematical formulas, this work presents a reliable statistical methodology for camera calibration and measurement called 'Cyclic-Lead-Follower (CLF)'. The CLF method's results are further optimized through the use of different MCDM techniques, including TOPSIS, WASPAS, WSM, and WPM. The best measuring conditions are further confirmed by applying Taguchi analysis.
2. Experimental setup
In the field of metrology, the calibration accuracy of a machine vision system has an immediate effect on the measurement outcomes. The measurement results are also affected by errors that occur during the calibration process. Hence, to further enhance measurement accuracy, it is essential to strengthen the calibration process against errors rather than devise an entirely new calibration methodology. In a traditional camera calibration system, the camera is calibrated using a calibration piece, and the calibration value is then used to take measurements of the specified object. In this investigation, a calibration piece is used to calibrate the smartphone camera because of its simplicity and its ability to demonstrate accuracy. In metrology, slip gauges are primary gauges and are used to calibrate and compare the readings of measuring instruments. The expanding range of smartphone applications is due to progress in camera and image processing systems, which has enabled the optimization of intrinsic parameters. The 'number of measurements (M)', 'camera distance (D)', and 'light source color (C)' were utilized as input variables in this investigation. These parameters were determined through a screening experiment.
Figure 1 depicts the conceptual experimental setup in which the 'distance between the smartphone camera lens and slip gauge (a)', the 'distance between the light source and slip gauge (b)', the 'distance between the smartphone camera lens and light source (c)', the 'distance between the smartphone camera lens and floor (d)', and the 'distance between slip gauge and floor (e)' were determined during the initial screening experiment. Once determined, each of these distances remained fixed for the course of the experiment.
For calibration purposes, four slip gauges (SA, SB, SC, SD) were utilized, with standard sizes of 20 mm, 30 mm, 40 mm, and 50 mm, respectively. The slip gauge images were taken using a smartphone camera with a resolution of 48 megapixels. The intrinsic camera parameters were kept constant for the duration of the experiments. Since slip gauges have standard values, variations in the measurements are easily found by subtracting the measured value from the standard value. The experimental configuration is shown in figure 2.
For each measurement, three distinct light sources, 'Red, Green, and Blue (RGB)', each with a power rating of 3 watts, were used. In image processing, a captured image is well described in the RGB color space. Furthermore, it was noted during the screening experiment that the captured image was affected by noise generated by the light source. Consequently, RGB light sources were employed to generate the desired background while minimizing image noise, owing to their monochromatic characteristics. As a result, the edges of the slip gauges are readily visible. The images captured by the smartphone camera under the RGB light sources are displayed in figure 3.
Slip gauges are rectangular in shape and have sharp edges. In general, the implementation of RGB light sources together with slip gauges reduces the number of image processing steps needed. For each slip gauge, pixel values were determined using the Coslab software. Preliminary arrangements were undertaken for the smartphone camera setup, including aligning the smartphone camera in parallel with the slip gauges, ensuring that the camera base and slip gauges were both parallel to the floor, positioning the slip gauges at an identical distance from the lens of the smartphone camera, and maintaining RGB light sources in a stationary position. Since three input variables typically influenced the magnitude of the calibration errors, four slip gauges were employed during the calibration and measurement process.
The Taguchi 'Design of Experiments (DOE)' method has been used by many researchers in their studies. It is a well-known DOE approach that conducts experiments using orthogonal arrays, which are combinations of the distinct input variables. One advantage of the Taguchi DOE is that it minimizes the number of experimental runs required to complete the experiment. For this investigation, the Taguchi DOE was chosen [22–24]. Three levels of each input factor were used in the development of the L27 orthogonal array. The Taguchi L27 DOE with the measured pixel values of the four slip gauges is displayed in table 1. The Taguchi L27 DOE was executed in three replicates, and the mean of the three output values was utilized for the analysis (refer to table 1). The pixel measurements of the slip gauges were performed using the Coslab software, as illustrated in figure 4.
Table 1. Taguchi L27 DOE with output measurements (slip gauge values SA–SD measured in pixels).
Exp. run | D in mm | C | M | SA | SB | SC | SD
---|---|---|---|---|---|---|---
1 | 500 | 1 | 3 | 121.7333 | 186.8000 | 247.0667 | 307.2340 |
2 | 500 | 1 | 5 | 122.1905 | 186.5714 | 247.0952 | 306.9962 |
3 | 500 | 1 | 7 | 120.7844 | 186.0122 | 246.6700 | 306.0933 |
4 | 500 | 2 | 3 | 121.3810 | 186.0476 | 247.3810 | 307.1138 |
5 | 500 | 2 | 5 | 121.0327 | 185.4907 | 246.4147 | 306.1067 |
6 | 500 | 2 | 7 | 121.1744 | 185.4578 | 247.1222 | 306.2722 |
7 | 500 | 3 | 3 | 121.0000 | 187.4444 | 246.2222 | 305.9578 |
8 | 500 | 3 | 5 | 121.3540 | 186.2753 | 247.2060 | 306.8960 |
9 | 500 | 3 | 7 | 121.4776 | 185.7361 | 247.0233 | 306.9600 |
10 | 600 | 1 | 3 | 98.4444 | 145.1111 | 196.1111 | 244.8889 |
11 | 600 | 1 | 5 | 99.2381 | 146.7619 | 197.2857 | 245.6190 |
12 | 600 | 1 | 7 | 99.2667 | 146.9333 | 196.5333 | 245.2667 |
13 | 600 | 2 | 3 | 99.3333 | 145.8889 | 197.0000 | 245.3711 |
14 | 600 | 2 | 5 | 98.6667 | 145.9048 | 197.3333 | 244.5467 |
15 | 600 | 2 | 7 | 97.5556 | 145.4444 | 197.1111 | 245.1111 |
16 | 600 | 3 | 3 | 99.6667 | 145.8667 | 197.2667 | 245.3387 |
17 | 600 | 3 | 5 | 98.3810 | 146.4762 | 196.9524 | 245.1429 |
18 | 600 | 3 | 7 | 99.0667 | 145.6000 | 197.2667 | 244.9333 |
19 | 700 | 1 | 3 | 87.2857 | 130.4762 | 173.2857 | 216.2857 |
20 | 700 | 1 | 5 | 87.3333 | 131.3333 | 172.4000 | 215.3333 |
21 | 700 | 1 | 7 | 87.4667 | 129.6667 | 172.2667 | 214.4667 |
22 | 700 | 2 | 3 | 87.4444 | 128.5556 | 172.7778 | 214.8889 |
23 | 700 | 2 | 5 | 86.8667 | 129.3333 | 172.9333 | 215.1333 |
24 | 700 | 2 | 7 | 88.0000 | 129.4444 | 173.1111 | 215.4444 |
25 | 700 | 3 | 3 | 88.0000 | 130.3333 | 172.5556 | 216.1111 |
26 | 700 | 3 | 5 | 87.2222 | 129.8571 | 173.1905 | 215.8095 |
27 | 700 | 3 | 7 | 87.6190 | 131.9048 | 173.4286 | 215.6667 |
3. Proposed cyclic-lead-follower (CLF) method
The traditional approach to camera calibration entails using a calibration piece, whereby the camera is simply calibrated to correspond with the captured image of the calibration piece. The calibration value of one pixel is determined by dividing the standard millimeter (mm) value of the calibration piece by its measured pixel value. The actual measurement of the desired object in millimeters is then obtained by multiplying the measured pixel value of the object by the obtained calibration value. Despite its simplicity and robustness, this method fails to adequately mitigate the impact of calibration errors. As illustrated in figure 5, the concept of CLF was utilized to overcome this issue. In the beginning, slip gauge SA was employed for calibration. Equation (1) was utilized to determine the calibration value of one pixel in mm.
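As described above, equation (1) amounts to the ratio of the lead slip gauge's standard length to its measured pixel value (a reconstruction from the text, consistent with the calibrated value of 0.2038 mm per pixel reported for the validation experiment):

$\text{calibration value (mm/pixel)} = \dfrac{\text{standard value of the lead slip gauge (mm)}}{\text{measured pixel value of the lead slip gauge}}$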
To obtain the measured value in millimeters, the calibrated value of a single pixel is multiplied by the measured pixel values of the remaining three slip gauges: SB, SC, and SD. The remaining slip gauges were therefore classified as follower slip gauges, while slip gauge SA was classified as the lead slip gauge. The other slip gauges were then used as the lead in the same manner as slip gauge SA, and the process was repeated. In the end, the average of the three values obtained for each slip gauge was computed. Given that slip gauges are regarded as primary gauges with standard values, the percentage errors for each slip gauge were readily computed using equation (2).
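In the notation used here, equation (2) compares the measured and standard slip gauge lengths (a reconstruction from the description; the sign convention is an assumption, and the absolute value is taken later):

$\text{percentage error (\%)} = \dfrac{\text{measured value (mm)} - \text{standard value (mm)}}{\text{standard value (mm)}} \times 100$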
The standard (reference) value of slip gauge SA was taken as 20 mm; similarly, the other slip gauges have reference values of 30 mm, 40 mm, and 50 mm, respectively. Each experimental run of the Taguchi DOE therefore has four percentage errors, one per slip gauge, and a single representative error per run was then estimated from these using the 'Exponential Moving Average (EMA)' method. The EMA method is sometimes also called the 'Exponentially Weighted Moving Average (EWMA)' method, in which the estimated value for each experimental run was calculated by using equation (3) [25–28].
In equation (3), 'Fnext ' is the forecasted next value; 'Pactual ' is the previous actual value; and 'Pforecasted ' is the previous forecasted value. The 'α' value varies from 0 to 1, but here, for simplicity, it was taken as 0.5. One of the objectives of this study was to minimize the percentage errors in camera calibration and measurement and to fulfill this, the EMA method was applied to the percentage errors.
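With these definitions, equation (3) is the standard exponential smoothing update (weights follow the usual EWMA convention):

$F_{\text{next}} = \alpha\, P_{\text{actual}} + (1-\alpha)\, P_{\text{forecasted}}$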
The EMA method was used because it is a popular statistical forecasting method and, for this minimization problem, gives lower estimated values than the 'Simple Average (SiA)' and 'Simple Moving Average (SMA)' methods (see figures 8 and 9). The EMA value for slip gauge SB was calculated from the percentage error of slip gauge SA, and this process was continued up to slip gauge SD, from which the final EMA value was estimated. During the calculation of the EMA for slip gauge SB, the previous actual value and the previous forecasted value were both taken to be the percentage error of slip gauge SA. Table 2 shows the measured values of the slip gauges with their respective percentage errors and the calculated EMA values. Because the percentage error can be positive or negative, all percentage errors were converted to positive (absolute) values, from which the EMA values were calculated. The objective of this study was to optimize the calibration errors; therefore, various MCDM techniques were applied to the percentage errors, as explained in the results and discussion section.
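The following short Python sketch (an illustration of the procedure described above, not the authors' code) applies the equation (3) chain with α = 0.5 to the four absolute percentage errors of a run; for experimental run #1 it reproduces the EMA value of 0.6255 listed in table 2.

```python
def clf_ema(errors, alpha=0.5):
    """Chain equation (3) over the slip gauge errors SA -> SB -> SC -> SD."""
    forecast = errors[0]  # for SB, previous actual = previous forecast = SA error
    for actual in errors[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

# Experimental run #1 of table 2 (absolute percentage errors of SA, SB, SC, SD)
print(clf_ema([1.5499, 1.4813, 0.3987, 0.2938]))  # ~0.62548, reported as 0.6255
```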
Table 2. Measured values, absolute percentage errors, and EMA values of slip gauges.
Exp. run | SA (mm) | SB (mm) | SC (mm) | SD (mm) | SA error (%) | SB error (%) | SC error (%) | SD error (%) | EMA (%)
---|---|---|---|---|---|---|---|---|---
1 | 19.6900 | 30.4444 | 40.1595 | 49.8531 | 1.5499 | 1.4813 | 0.3987 | 0.2938 | 0.6255 |
2 | 19.7764 | 30.3756 | 40.1401 | 49.7698 | 1.1182 | 1.2520 | 0.3503 | 0.4604 | 0.6140 |
3 | 19.5988 | 30.4498 | 40.3069 | 49.8957 | 2.0059 | 1.4992 | 0.7673 | 0.2087 | 0.7343 |
4 | 19.6536 | 30.3425 | 40.3087 | 49.9278 | 1.7322 | 1.1418 | 0.7718 | 0.1443 | 0.6243 |
5 | 19.6639 | 30.3533 | 40.2740 | 49.9266 | 1.6805 | 1.1778 | 0.6850 | 0.1468 | 0.6019 |
6 | 19.6657 | 30.3018 | 40.3688 | 49.8893 | 1.6713 | 1.0061 | 0.9220 | 0.2213 | 0.6758 |
7 | 19.5989 | 30.6887 | 40.1144 | 49.7479 | 2.0054 | 2.2958 | 0.2859 | 0.5043 | 0.8613 |
8 | 19.6505 | 30.3962 | 40.2765 | 49.8877 | 1.7474 | 1.3208 | 0.6913 | 0.2246 | 0.6686 |
9 | 19.6929 | 30.3032 | 40.2686 | 49.9410 | 1.5353 | 1.0105 | 0.6716 | 0.1179 | 0.5451 |
10 | 20.1771 | 29.5688 | 40.1422 | 50.1095 | 0.8856 | 1.4372 | 0.3554 | 0.2190 | 0.4887 |
11 | 20.2026 | 29.7366 | 40.0829 | 49.8361 | 1.0131 | 0.8779 | 0.2073 | 0.3278 | 0.4521 |
12 | 20.2359 | 29.8209 | 39.9298 | 49.8038 | 1.1795 | 0.5971 | 0.1756 | 0.3924 | 0.4622 |
13 | 20.2791 | 29.5746 | 40.1060 | 49.8941 | 1.3953 | 1.4179 | 0.2650 | 0.2118 | 0.5238 |
14 | 20.1535 | 29.6608 | 40.3071 | 49.8076 | 0.7676 | 1.1308 | 0.7676 | 0.3849 | 0.6217 |
15 | 19.9399 | 29.6674 | 40.4252 | 50.1830 | 0.3007 | 1.1088 | 1.0629 | 0.3660 | 0.6249 |
16 | 20.3399 | 29.5254 | 40.1198 | 49.8125 | 1.6996 | 1.5821 | 0.2996 | 0.3749 | 0.6726 |
17 | 20.0654 | 29.8005 | 40.1826 | 49.9436 | 0.3271 | 0.6649 | 0.4565 | 0.1128 | 0.2945 |
18 | 20.2410 | 29.5467 | 40.2467 | 49.8602 | 1.2052 | 1.5109 | 0.6167 | 0.2796 | 0.6335 |
19 | 20.1320 | 30.0591 | 39.8693 | 49.7379 | 0.6601 | 0.1971 | 0.3267 | 0.5241 | 0.4509 |
20 | 20.1636 | 30.3478 | 39.6309 | 49.4874 | 0.8181 | 1.1594 | 0.9228 | 1.0253 | 0.9905 |
21 | 20.3126 | 29.9959 | 39.8026 | 49.4860 | 1.5630 | 0.0136 | 0.4934 | 1.0280 | 0.8344 |
22 | 20.3323 | 29.6923 | 40.0129 | 49.6816 | 1.6617 | 1.0256 | 0.0322 | 0.6368 | 0.6624 |
23 | 20.1437 | 29.9172 | 40.0404 | 49.7316 | 0.7184 | 0.2761 | 0.1011 | 0.5367 | 0.4180 |
24 | 20.3838 | 29.7902 | 39.8797 | 49.5592 | 1.9192 | 0.6993 | 0.3009 | 0.8815 | 0.8433 |
25 | 20.3383 | 29.9960 | 39.6196 | 49.6523 | 1.6915 | 0.0134 | 0.9511 | 0.6953 | 0.7985 |
26 | 20.1678 | 29.9513 | 39.9497 | 49.7284 | 0.8388 | 0.1623 | 0.1256 | 0.5432 | 0.4281 |
27 | 20.1500 | 30.3707 | 39.7462 | 49.3402 | 0.7500 | 1.2358 | 0.6345 | 1.3195 | 1.0666 |
4. Percentage error optimization using MCDM techniques
MCDM methods are simple and are effectively used in multi-objective optimization problems. To use any MCDM method, it is necessary to define the weights for each output response; one popular method for determining these weights is AHP. In this study, the percentage errors of the four slip gauges and their calculated EMA were chosen as the parameters to be optimized, and the weight for each was therefore set to 0.20. WSM is the simplest MCDM method, in which the performance score is calculated by row-wise summation of the weighted normalized matrix, while in WPM the performance score is calculated by row-wise multiplication of the weighted normalized matrix. WASPAS is a combined method that uses both the WSM and the WPM; its performance score is calculated using equation (4), where 'Q1' is the performance score of WSM and 'Q2' is the performance score of WPM [19]. While calculating the performance scores, all the percentage error values and the calculated EMA of the slip gauges were treated as non-beneficial criteria.
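As described, equation (4) combines the two scores linearly:

$Q = \lambda\, Q_1 + (1-\lambda)\, Q_2$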
In equation (4), the value of 'λ' varies from 0 to 1; in this study it was taken as 0.5. Finally, ranks are assigned to the performance scores from the highest to the lowest value. The steps for calculating the performance scores of the WSM, WPM, and WASPAS methods are explained in [19, 20], and these steps were used to calculate the corresponding performance scores. The TOPSIS method is a popular method for optimizing the output responses in multi-objective optimization; it provides practical solutions with a sequence of the best alternatives and is also suitable for output responses having different characteristics. The steps used in TOPSIS to assign ranks to the experimental runs are simple and are explained in [29–31]; using these steps, the closeness coefficient values and their associated ranks were calculated. Table 3 shows the results of the four MCDM methods used for the analysis of the percentage errors and the calculated EMA of the slip gauges. It is observed that all four MCDM methods assigned rank #1 to experimental run #17. Thus, experimental run #17 was identified as giving the initial measuring conditions.
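The sketch below (illustrative Python, not the authors' code) computes the WSM, WPM, and WASPAS scores for non-beneficial criteria with equal weights of 0.20; it assumes the common column-minimum/value normalization, which reproduces the run #17 scores in table 3 (≈0.602, ≈0.265, ≈0.433).

```python
import numpy as np

def mcdm_scores(matrix, weights, lam=0.5):
    """WSM (Q1), WPM (Q2) and WASPAS scores for lower-is-better criteria."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    norm = X.min(axis=0) / X                    # non-beneficial normalization
    q1 = (norm * w).sum(axis=1)                 # weighted sum model
    q2 = (norm ** w).prod(axis=1)               # weighted product model
    return q1, q2, lam * q1 + (1 - lam) * q2    # equation (4)

# Run #17 errors/EMA from table 2 evaluated against the table 2 column minima
run17   = [0.3271, 0.6649, 0.4565, 0.1128, 0.2945]
col_min = [0.3007, 0.0134, 0.0322, 0.1128, 0.2945]
q1, q2, q = mcdm_scores([run17, col_min], [0.2] * 5)
print(q1[0], q2[0], q[0])   # approx. 0.602, 0.265, 0.433 (cf. table 3)
```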
Table 3. Ranking of the MCDM methods.
Exp. run | WSM | Rank | WPM | Rank | WASPAS | Rank | TOPSIS | Rank |
---|---|---|---|---|---|---|---|---|
1 | 0.227744 | 21 | 0.120681 | 18 | 0.174213 | 18 | 0.574846 | 13 |
2 | 0.219258 | 22 | 0.125449 | 16 | 0.172354 | 21 | 0.603406 | 10 |
3 | 0.228526 | 20 | 0.104019 | 24 | 0.166273 | 22 | 0.496342 | 23 |
4 | 0.296157 | 12 | 0.125645 | 15 | 0.210901 | 15 | 0.560668 | 15 |
5 | 0.299051 | 10 | 0.129158 | 14 | 0.214105 | 14 | 0.57685 | 12 |
6 | 0.234767 | 18 | 0.11318 | 21 | 0.173974 | 19 | 0.532582 | 20 |
7 | 0.166818 | 24 | 0.094493 | 25 | 0.130656 | 24 | 0.434877 | 25 |
8 | 0.234369 | 19 | 0.112449 | 22 | 0.173409 | 20 | 0.53928 | 18 |
9 | 0.350909 | 4 | 0.145098 | 12 | 0.248004 | 9 | 0.61269 | 9 |
10 | 0.311448 | 9 | 0.154813 | 10 | 0.233131 | 10 | 0.651366 | 7 |
11 | 0.29261 | 13 | 0.173593 | 9 | 0.233102 | 11 | 0.73048 | 6 |
12 | 0.277104 | 15 | 0.180585 | 7 | 0.228845 | 12 | 0.740547 | 5 |
13 | 0.288278 | 14 | 0.14924 | 11 | 0.218759 | 13 | 0.635628 | 8 |
14 | 0.242493 | 16 | 0.121983 | 17 | 0.182238 | 16 | 0.568724 | 14 |
15 | 0.364407 | 2 | 0.139658 | 13 | 0.252033 | 8 | 0.537787 | 19 |
16 | 0.206334 | 23 | 0.116213 | 20 | 0.161274 | 23 | 0.55029 | 17 |
17 | 0.602001 | 1 | 0.264928 | 1 | 0.433465 | 1 | 0.780787 | 1 |
18 | 0.235825 | 17 | 0.116706 | 19 | 0.176266 | 17 | 0.555071 | 16 |
19 | 0.298095 | 11 | 0.21203 | 5 | 0.255063 | 7 | 0.751433 | 4 |
20 | 0.164287 | 26 | 0.086508 | 27 | 0.125398 | 27 | 0.3475 | 26 |
21 | 0.341315 | 6 | 0.216835 | 4 | 0.279075 | 4 | 0.511775 | 21 |
22 | 0.363184 | 3 | 0.179478 | 8 | 0.271331 | 6 | 0.593284 | 11 |
23 | 0.340037 | 7 | 0.248995 | 3 | 0.294516 | 2 | 0.77748 | 2 |
24 | 0.15201 | 27 | 0.107484 | 23 | 0.129747 | 25 | 0.494177 | 24 |
25 | 0.348549 | 5 | 0.204762 | 6 | 0.276656 | 5 | 0.506822 | 22 |
26 | 0.318561 | 8 | 0.255224 | 2 | 0.286893 | 3 | 0.771008 | 3 |
27 | 0.164828 | 25 | 0.087748 | 26 | 0.126288 | 26 | 0.338098 | 27 |
5. Results and discussion
As already pointed out, the traditional method of calibrating a camera makes use of a calibration piece for which the camera is calibrated, and measurements are thereafter performed based on that calibrated value. Calibration errors, however, are not effectively handled by this method. The proposed CLF method is used to address this issue. To determine the efficacy of the CLF method, its results were compared with those of the traditional approach. For the traditional method, equation (1) was used to calibrate slip gauge SA to determine the calibrated value of one pixel, and this calibrated value was then used to measure the other slip gauges. The percentage errors calculated for the traditional method using equation (2) were further analyzed using two statistical forecasting methods, SiA and EMA. SiA values were calculated by simply taking the mean of the three percentage errors for each experimental run, while EMA values were calculated using equation (3). Table 4 shows the comparative results of the 'traditional method' and the 'CLF method'.
Table 4. Comparative results of the traditional method with the CLF method.
Exp. run | Traditional SB (%) | Traditional SC (%) | Traditional SD (%) | Traditional SiA (%) | Traditional EMA (%) | CLF SiA (%) | CLF EMA (%) | Reduction in SiA (%) | Reduction in EMA (%)
---|---|---|---|---|---|---|---|---|---
1 | 2.300110 | 1.478642 | 0.953122 | 1.577291 | 1.421249 | 0.930928 | 0.6254863 | −40.98 | −55.99 |
2 | 1.792673 | 1.110678 | 0.497584 | 1.133645 | 0.974630 | 0.795197 | 0.6140121 | −29.85 | −37.00 |
3 | 2.668973 | 2.111659 | 1.368462 | 2.049698 | 1.879389 | 1.120279 | 0.7343148 | −45.34 | −60.93 |
4 | 2.183863 | 1.902707 | 1.206591 | 1.764387 | 1.624938 | 0.947521 | 0.6243489 | −46.30 | −61.58 |
5 | 2.171131 | 1.796760 | 1.164975 | 1.710955 | 1.574460 | 0.922529 | 0.601938 | −46.08 | −61.77 |
6 | 2.033493 | 1.969612 | 1.101259 | 1.701455 | 1.551406 | 0.955158 | 0.6758217 | −43.86 | −56.44 |
7 | 3.275176 | 1.744720 | 1.143067 | 2.054321 | 1.826507 | 1.272840 | 0.8612506 | −38.04 | −52.85 |
8 | 2.331654 | 1.853256 | 1.157275 | 1.780728 | 1.624865 | 0.995986 | 0.6686078 | −44.07 | −58.85 |
9 | 1.931595 | 1.674422 | 1.075409 | 1.560475 | 1.439208 | 0.833823 | 0.5450724 | −46.57 | −62.13 |
10 | 1.730625 | 0.395034 | 0.496614 | 0.874091 | 0.779722 | 0.724316 | 0.4887246 | −17.13 | −37.32 |
11 | 1.407550 | 0.599808 | 0.998081 | 1.001813 | 1.000880 | 0.606514 | 0.4520937 | −39.46 | −54.83 |
12 | 1.320797 | 1.007388 | 1.168570 | 1.165585 | 1.166331 | 0.586146 | 0.4621593 | −49.71 | −60.37 |
13 | 2.087994 | 0.838926 | 1.192841 | 1.373254 | 1.328151 | 0.822487 | 0.5238058 | −40.11 | −60.56 |
14 | 1.415701 | 0.000000 | 0.859459 | 0.758387 | 0.783655 | 0.762753 | 0.6216687 | 0.58 | −20.67 |
15 | 0.607441 | 1.025057 | 0.501139 | 0.711212 | 0.658694 | 0.709608 | 0.6249089 | −0.23 | −5.13 |
16 | 2.430323 | 1.036789 | 1.536321 | 1.667811 | 1.634939 | 0.989067 | 0.6725848 | −40.70 | −58.86 |
17 | 0.742175 | 0.096805 | 0.329138 | 0.389373 | 0.374314 | 0.390328 | 0.294541 | 0.25 | −21.31 |
18 | 2.018843 | 0.437416 | 1.103634 | 1.186631 | 1.165882 | 0.903111 | 0.6334881 | −23.89 | −45.66 |
19 | 0.345517 | 0.736498 | 0.883797 | 0.655271 | 0.712402 | 0.427030 | 0.4508992 | −34.83 | −36.71 |
20 | 0.254453 | 1.297710 | 1.374046 | 0.975403 | 1.075064 | 0.981387 | 0.9905181 | 0.61 | −7.86 |
21 | 1.168699 | 1.524390 | 1.920732 | 1.537940 | 1.633638 | 0.774493 | 0.8344375 | −49.64 | −48.92 |
22 | 1.990682 | 1.207116 | 1.702668 | 1.633489 | 1.650784 | 0.839073 | 0.6623592 | −48.63 | −59.88 |
23 | 0.741878 | 0.460476 | 0.936301 | 0.712885 | 0.768739 | 0.408088 | 0.417959 | −42.76 | −45.63 |
24 | 1.936027 | 1.641414 | 2.070707 | 1.882716 | 1.929714 | 0.950224 | 0.8432841 | −49.53 | −56.30 |
25 | 1.262626 | 1.957071 | 1.767677 | 1.662458 | 1.688763 | 0.837815 | 0.7985423 | −49.60 | −52.71 |
26 | 0.746133 | 0.718835 | 1.030027 | 0.831665 | 0.881256 | 0.417476 | 0.4281298 | −49.80 | −51.42 |
27 | 0.362319 | 1.032609 | 1.543478 | 0.979469 | 1.120471 | 0.984978 | 1.0666311 | 0.56 | −4.81 |
In table 4, the reductions in the CLF values relative to the values of the traditional method were determined using equation (5).
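Consistent with this description and with the tabulated values (for example, run #1: (0.6255 − 1.4212)/1.4212 × 100 ≈ −55.99%), equation (5) takes the form:

$\text{percentage error reduction (\%)} = \dfrac{\text{CLF value} - \text{traditional value}}{\text{traditional value}} \times 100$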
Reducing the percentage errors was the goal of the study. Table 4 shows that the SiA and EMA values for the CLF method are significantly lower than those for the traditional method, as also shown in figures 6 and 7. Thus, the comparative analysis demonstrates that the proposed 'CLF method' outperforms the conventional method.
As can be seen in figures 8 and 9, the EMA method estimates the percentage errors in a well-minimized manner, which is why it can be used in the CLF method instead of the SiA and SMA methods. The SMA values were calculated as the average of the previous two values: starting from the four errors of a run, the first iteration yields three SMA values, the second yields two, and the third yields a single value, which was used as the final SMA value for the analysis.
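A minimal Python sketch of this repeated two-point averaging is given below (an illustration of the description, not the authors' code; the function name and sample values are placeholders).

```python
def clf_sma(errors, window=2):
    """Repeatedly apply a two-point moving average until one value remains."""
    values = list(errors)
    while len(values) > 1:
        values = [sum(values[i:i + window]) / window
                  for i in range(len(values) - window + 1)]
    return values[0]

# Four absolute percentage errors of one run: 4 values -> 3 -> 2 -> 1 value
print(clf_sma([1.5499, 1.4813, 0.3987, 0.2938]))
```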
From figure 9, it was also observed that the EMA method gives a well-distributed, dome-shaped curve with a narrower distribution of the percentage errors compared with the SiA and SMA methods. The EMA method has an overall mean percentage error of 0.6377%, followed by 0.7844% for the SMA method and 0.8107% for the SiA method. 'Analysis of Variance (ANOVA)' is a powerful tool that explains the relationship between the input factors and the respective output responses; it also quantifies how much each input factor contributes, as a percentage, to the variation in the output response. ANOVA was applied to the millimeter values of the four slip gauges (see table 2), and the results are shown in table 5.
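The percentage contribution (PC) reported in table 5 is commonly computed as a term's sum of squares divided by the total sum of squares; a minimal sketch of this calculation is shown below (the sums of squares are hypothetical placeholders, not values from this study).

```python
def percentage_contribution(sum_of_squares):
    """PC of each ANOVA term = 100 * SS_term / SS_total (error term included)."""
    total = sum(sum_of_squares.values())
    return {term: 100.0 * ss / total for term, ss in sum_of_squares.items()}

# Hypothetical sums of squares, for illustration only
ss = {'D': 890.0, 'C': 53.0, 'M': 11.0, 'D*C': 15.0, 'D*M': 23.0, 'C*M': 16.0, 'Error': 42.0}
print(percentage_contribution(ss))
```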
Table 5. Results of the ANOVA method applied to the millimeter values of the four slip gauges in the CLF method.
Input factors | SA P-value | SA PC | SB P-value | SB PC | SC P-value | SC PC | SD P-value | SD PC | Average PC
---|---|---|---|---|---|---|---|---|---
D | 0.000 | 88.97 | 0.000 | 80.39 | 0.000 | 67.33 | 0.001 | 60.05 | 74.18 |
C | 0.987 | 0.01 | 0.137 | 4.61 | 0.030 | 12.97 | 0.351 | 3.65 | 5.31 |
M | 0.518 | 0.86 | 0.675 | 0.74 | 0.575 | 1.38 | 0.653 | 1.37 | 1.09 |
D×C | 0.668 | 1.46 | 0.514 | 3.18 | 0.784 | 1.99 | 0.989 | 0.43 | 1.77 |
D×M | 0.477 | 2.31 | 0.503 | 3.26 | 0.810 | 1.81 | 0.217 | 11.13 | 4.63 |
C×M | 0.631 | 1.60 | 0.944 | 0.63 | 0.407 | 5.25 | 0.216 | 11.17 | 4.66 |
Bold values indicate the influencing input factors. PC = percentage contribution (%).
From table 5, it is seen that the smartphone camera distance is the most influential input factor, with an average percentage contribution of 74.18% to the slip gauge measurement. This is because, as the distance between the smartphone camera and the slip gauge increases, the scale of the slip gauge image decreases, and the pixel value of each slip gauge decreases accordingly. The light source (RGB) was found to be the least significant input factor, with a percentage contribution of 5.31%. The slip gauges have sharp edges that are much less influenced by the RGB light sources, and due to this edge sharpness, very few or no variations were observed with the number of measurements. It was also deduced that the camera distance suppressed the effects of the RGB light source and the number of measurements by dominating the variation in pixel values. However, the combination of the monochromatic RGB light sources and the sharp-edged slip gauges helped to reduce the number of image processing steps, which shows the usefulness of the proposed experimental setup.
From table 3, it is evident that all four methods indicated experimental run #17 as the best alternative and assigned it rank #1. Experimental run #17 corresponds to the initial measuring conditions: a smartphone camera distance of 600 mm, a blue light source, and five measurements. The agreement of all four methods on experimental run #17 provides strong validation of the initial measuring conditions. From equation (4), it is clear that the WASPAS method represents the combined effect of the WSM and WPM. Therefore, the Taguchi analysis was performed in Minitab software for the WASPAS and TOPSIS methods only, to determine the optimal measuring conditions.
Rank #1 is assigned to the highest values of the WASPAS performance scores and of the TOPSIS closeness coefficients; accordingly, the Taguchi analysis of the WASPAS and TOPSIS results was performed with the 'larger-is-better' design philosophy. The main effects plots for the WASPAS and TOPSIS methods are shown in figures 10 and 11.
From figure 10, the optimal measuring conditions determined by the Taguchi analysis of the WASPAS method were a smartphone camera distance of 600 mm, a blue light source, and five measurements, corresponding to experimental run #17; that is, the Taguchi analysis of the WASPAS method gave the same optimal conditions as the initial measuring conditions. From figure 11, the optimal measuring conditions for the TOPSIS method were a smartphone camera distance of 600 mm, a red light source, and five measurements, corresponding to experimental run #11. To confirm the optimal measuring conditions, the experimental results of the initial measuring conditions (experimental run #17) were compared with those of experimental run #11. From table 2 and figure 8, it was observed that experimental run #17 gives better results than experimental run #11. Thus, the initial measuring conditions at experimental run #17 were confirmed as the final optimal measuring conditions.
To validate the final optimal measuring conditions, a validation experiment was performed under the conditions of experimental run #17: a smartphone camera distance of 600 mm from the slip gauge, a blue light source, and five measurements. The percentage errors were then calculated. Table 6 shows the measurement results of the slip gauges in the validation experiment. In the validation experiment, each slip gauge was calibrated using equation (1), and the final calibrated value of one pixel, determined using the EMA, was 0.2038 mm. Table 7 shows the percentage errors and the estimated EMA value for the validation experiment.
Table 6. Measurement of four slip gauges in the validation experiment.
Sr No. | SA (pixels) | SB (pixels) | SC (pixels) | SD (pixels) | SA (mm) | SB (mm) | SC (mm) | SD (mm)
---|---|---|---|---|---|---|---|---
1 | 101 | 146 | 197 | 245 | 20.5838 | 29.7548 | 40.1486 | 49.9310 |
2 | 100 | 147 | 197 | 245 | 20.3800 | 29.9586 | 40.1486 | 49.9310 |
3 | 98 | 145 | 198 | 247 | 19.9724 | 29.5510 | 40.3524 | 50.3386 |
4 | 99 | 144 | 196 | 247 | 20.1762 | 29.3472 | 39.9448 | 50.3386 |
5 | 99 | 147 | 196 | 246 | 20.1762 | 29.9586 | 39.9448 | 50.1348 |
Average | 99.4 | 145.8 | 196.8 | 246 | 20.2577 | 29.7140 | 40.1078 | 50.1348 |
Table 7. Estimated percentage errors in the validation experiment.
Sr No. | SA (%) | SB (%) | SC (%) | SD (%) | Estimated EMA (%)
---|---|---|---|---|---
1 | 1.3438 | 0.9532 | 0.3800 | 0.3800 | 0.5721 |
The average values from table 6 were used to calculate the percentage errors using equation (2). The EMA method was then applied to the average percentage errors of the validation experiment to estimate the final percentage error. From table 7, the final estimated percentage error is 0.5721%, which is less than 1%. Thus, the validation experiment shows that the proposed 'CLF method' is effective for camera calibration and measurement.
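For completeness, applying the equation (3) chain with α = 0.5 to the table 7 errors reproduces the estimated EMA: $F_{SB} = 1.3438$, $F_{SC} = 0.5(0.9532) + 0.5(1.3438) = 1.1485$, $F_{SD} = 0.5(0.3800) + 0.5(1.1485) = 0.7643$, and finally $0.5(0.3800) + 0.5(0.7643) \approx 0.5721\%$.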
6. Applicability of the proposed method
6.1. Estimation of the camera lens distortion errors
During image processing, any captured image is represented in two-dimensional image coordinates. When a camera captures images, it generates distortions along the 'x' and 'y' axes; this is known as camera lens distortion error and arises because the camera lens affects image acquisition. The proposed study estimated the slip gauge measurement errors in the vertical direction; however, the proposed approach is equally applicable for estimating camera lens distortion errors in the horizontal direction.
6.2. Camera calibration
This study implemented a slip gauge-based camera calibration system. The proposed method can also be applied in combination with alternative calibration artifacts, such as checkerboards, calibration pieces, calibration grids, grids of circles, retroreflective targets, and so on. Owing to its statistical nature, the proposed method can accurately determine the errors in camera calibration. Both on-machine and dedicated machine vision systems can benefit from this.
6.3. Distance and length measurement of measuring object
The slip gauges were used to estimate the pixel error using the proposed CLF method. In this case, the distance between the camera lens and the object being measured remained constant. A set of slip gauges can be positioned at a different distance from the camera lens to measure the distance between the camera lens and the measuring object. The values of the measured pixels vary depending on the distance. It is possible to calculate the distance between the camera lens and the measuring object using the inverse proportional relationship. To better comprehend this, consider the following two cases from figure 12, which illustrate the concept of distance and length measurement in machine vision systems.
In figure 12, 'a1' and 'a2' represent the slip gauge's pixel lengths, and 'b1' and 'b2' represent the separations between the camera lens and the slip gauge. When the slip gauge's position changes, the values of 'a1' and 'b1' also change to 'a2' and 'b2', respectively. It is seen that the pixel length of the slip gauge decreases as the distance between the camera lens and the slip gauge increases, indicating the inverse relationship that is expressed by equation (6).
In equation (6), 'a' stands for the slip gauge's pixel length, 'k' for the proportionality constant, and 'b' for the separation between the camera lens and the slip gauge. Therefore, using figure 12 and equation (6), it is possible to define the final relationship between the lengths of the slip gauge pixels and the separations between the camera lens and slip gauge, which is represented by equation (7).
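Based on these descriptions, equations (6) and (7) can be written as follows (a reconstruction from the text, consistent with the stated inverse relationship):

$a = \dfrac{k}{b}, \qquad a_1 b_1 = a_2 b_2 = k$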
It is simple to measure the slip gauge length or the camera lens distance by using equation (7). In figure 12, the measurement of the object can be carried out quickly and simply by combining the measuring object and the slip gauge. As previously covered in detail, the proposed CLF method performs better than the traditional calibration method, which increases measurement accuracy by decreasing pixel measurement errors. As a result, measuring an object's distance and length can be performed well by employing the proposed CLF method.
6.4. On-machine measurement system
The machine vision-based on-machine measurement system is one of the available measurement methods, in which the machined component is inspected on the machining center itself. Cutting down the time required for quality inspections has the added benefit of increasing the productivity of the manufacturing system. However, in on-machine measurement it becomes challenging to maintain the distance between the camera lens and the machined component because of the complexity of the camera calibration process. Additionally, the noise produced during image acquisition by various uncertainty factors, such as temperature, vibration, light intensity, etc, reduces the accuracy of the on-machine measurement system. The proposed CLF method, which was found to be reliable, accurate, and effective for estimating errors in image-based measurement, uses RGB light sources with slip gauges. These benefits are important for putting the proposed method into practice in an on-machine measurement system. The proposed method is also regarded as cost-effective because of its simplicity.
7. Conclusions
The objective of this research was to develop a robust statistical technique for camera calibration and measurement in which the errors are minimized through the use of diverse MCDM methods. Slip gauges are utilized as primary gauges in the field of metrology and were therefore employed to conduct the measurements in this study. Four MCDM methods were employed to optimize the camera calibration errors: WSM, WPM, WASPAS, and TOPSIS. Following the experimental investigation, the following conclusions are drawn:
- It was determined that the proposed CLF method is more efficient than the traditional camera calibration method in addressing calibration errors. One drawback of the CLF method is that it uses four slip gauges; however, this adds to the robustness and uniqueness of the calibration system by enabling each slip gauge to serve as both a lead and a follower for calibration and measurement. The experimental outcomes demonstrate that the EMA method incorporated into the CLF method yields a significantly reduced percentage error in comparison to traditional camera calibration techniques.
- The outcomes of the WASPAS methods are supported by the WSM and WPM methods, indicating that the three methods are relevant to one another. This provides substantial evidence in favor of the selection of the optimal conditions that are desired. In comparison to TOPSIS, the WASPAS method exhibits superior performance when applied to output responses that contain unit dimensions.
- The primary determinant of the variation in the percentage errors of the slip gauge measurements is the distance between the smartphone camera and the slip gauge, which accounts for an average percentage contribution of 74.18%. Future work that fixes the distance between the smartphone camera and the slip gauge system will increase its robustness and accuracy. The effectiveness of the slip gauges as a calibration piece is demonstrated by the fact that their sharp edges suppress the effect of the light sources on the image. The integration of the slip gauges and the RGB light sources significantly reduces the number of image processing stages, reflecting the simplicity of the camera calibration system.
- As shown in table 4, the implementation of the CLF method led to an average percentage error reduction of around 34% for SiA values and 46% for EMA values, in comparison to the traditional calibration method. As depicted in figure 9, the histogram analysis reveals that the CLF-EMA method exhibited the least amount of percentage error in comparison to the CLF-SiA and CLF-SMA methods. It was determined that the mean percentage errors for CLF-SiA, CLF-SMA, and CLF-EMA were 0.8107%, 0.7844%, and 0.6377% respectively.
- In a validation experiment employing the EMA method, the estimated percentage error of camera calibration was 0.5721%. This value indicates that the proposed CLF method is more efficient in handling camera lens distortion errors. As a result, the CLF method must be investigated under a variety of machine vision system circumstances in future research.
- Future studies may incorporate the proposed method for vision-based on-machine measurement systems.
Acknowledgments
The authors express their gratitude to 'Dr. Babasaheb Ambedkar Technological University, Lonere, Maharashtra, India' for providing funding support through the 'TEQIP III' fellowship to conduct this research. This experiment was conducted at the 'Centre for Advanced Machining Technology (CAMT)', a facility of the 'Mechanical Engineering Department' at 'Dr. Babasaheb Ambedkar Technological University'.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files).
Conflict of interest
The authors declare that there is no conflict of interest.