Exploring machine vision measurements through innovative cyclic-lead-follower statistical technique: an experimental study (2024)

In metrology, machine vision systems are often utilized for non-contact inspection. The most important phase in ensuring measurement accuracy is camera calibration and the estimation of pixel measurement errors, which establish the correspondence between image coordinates and object coordinates. Multiple calibration techniques improve the effectiveness of machine vision systems; however, a number of factors introduce variations into the camera calibration procedure, which must be optimized. This study explains a novel 'Cyclic-Lead-Follower' statistical methodology proposed for camera calibration and measurement to estimate the errors in pixel measurement, employing four slip gauges for measurement. Several multi-criteria decision-making techniques, including WSM, WPM, WASPAS, and TOPSIS, were used to optimize the results of the proposed Cyclic-Lead-Follower method. The proposed Cyclic-Lead-Follower method, which employs the exponential moving average statistical method, improves the accuracy of the camera calibration and measurement system when compared to the traditional calibration method. The proposed calibration method produces lower exponential moving average values than the traditional calibration method, with an average reduction of approximately 46% in the exponential moving average of the percentage errors. In a validation experiment of the Cyclic-Lead-Follower method using the exponential moving average, the measurement percentage error was estimated to be 0.57%. The proposed method can be used in machine vision systems due to its robustness, accuracy, and cost-effectiveness.

1. Introduction

The first step in any machine vision system is to capture an image of the object using a camera. From there, the image goes through a series of processing steps to get the final result. To get dimensional information from a captured image, it is essential to establish a relationship between an object and the obtained image. Establishing the relationship between an object and its image is known as camera calibration. The metrology field relies on machine vision systems for non-contact inspection. Camera calibration is thus the most important step in determining the accuracy of measurements. For camera calibration, every object to be measured is defined in a 3D representation in the 'World Coordinate System (WCS)', whereas the obtained image is defined in a 2D representation in the 'Image Coordinate System (ICS)'. The relation between the ICS and the WCS is established through mathematical approaches.

There have been several studies of different approaches to camera calibration, including 'self-camera calibration', 'camera calibration by active vision', 'linear and non-linear methods', 'traditional camera calibration using a calibration piece', and so on [1-3]. Extensive research and development into camera calibration have led to its adoption in domains as diverse as 'machine vision', 'biomedicine', 'visual surveillance', and 'mobile robot navigation' [3]. As previously mentioned, the object is represented in 3D while the image is represented in 2D; calibration is affected by different sets of parameters, with intrinsic parameters affecting the 2D image representation and extrinsic parameters affecting the 3D representation of the object. Machine vision systems and camera calibration are notable research areas, and a few relevant studies are discussed below. For color calibration, Menesatti et al [4] employed the 'Thin-Plate Spline Interpolation' method. They evaluated the thin-plate spline interpolation algorithm for color calibration in comparison to a commercial calibration system and partial least squares analysis. The results showed the Thin-Plate Spline method's high efficiency, which has the potential to reshape the area of color quantification in the food sciences. Deng et al [5] described a camera calibration model that incorporates geometric parameters and lens distortion effects. The suggested algorithm improves performance in visual identification tasks by correctly optimizing camera settings using 'particle swarm optimization' and 'differential evolution'. Lee et al [6] proposed an efficient camera-based color calibration technique for tiled display systems that is position-dependent and applied independently to each sub-display. The proposed method can reduce both the spatial non-uniformity within each sub-display and the differences in color and luminance across sub-displays. To streamline extrinsic calibration efforts and increase measurement precision for shape data, Chen et al [7] proposed an improved calibration technique for dual camera-one projector pairs that was verified through experimental findings. To inspect 'Printed Circuit Boards (PCB)', Heinemann et al [8] used 'ultra-close range normal case photogrammetry', which involved taking the picture with a close-up camera. 'Spacer rings' and 'extension tubes' were used to shorten the distance and increase the magnification during calibration. The suggested method was found to be effective for up-close inspection of the items. Korkalo et al [9] demonstrated an auto-calibration technique for a network of embedded depth cameras used for tracking people. Observations were used to determine the topology of the sensor network and the initial transformations. The technique then refines the parameters using global optimization and flexible transformations, increasing accuracy and precision. The accuracy of the method was examined using real-world data sets, and the results were satisfactory. For multi-spectral cameras, Wang et al [10] proposed a new color calibration technique that takes into consideration the impact of a light source on the output values of the camera. In comparison to conventional methods, the method greatly lowers color calibration errors by normalizing the 'Red, Green, and Blue (RGB)' values and creating a calibration model in chromaticity space.
Schweikert et al [11] described a pixel-wise in situ calibration technique that reduces sample-motion error and the associated measurement uncertainty for high-accuracy thermography of moving targets with locally varying emissivity. To enhance camera calibration for high-precision measurement, Lü et al [12] discussed several techniques, which include 'adaptive gamma correction', 'sub-pixel corner extraction', 'adaptive weight', and 'mutation particle swarm optimization'. They observed more precise camera parameters with an 'average calibration error' of 0.038 pixels.

The literature review mentioned above provides an overview of the various camera calibration techniques and studies. Various techniques have been employed in recent years to optimize the outcomes, but 'Multi Criteria Decision Making (MCDM)' techniques have proven to be effective and are widely used. Here, a few of them are discussed. Zavadskas et al [13] studied several MCDM techniques developed in recent years; additionally, a thorough investigation of the hybrid methodologies used in various MCDM methods is still required. To test the robustness and reliability of MCDM methods, Maliene et al [14] performed sensitivity analysis on five MCDM techniques, including the 'Weighted Product Model (WPM)', 'Weighted Sum Model (WSM)', 'Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)', and 'Complex Proportional Assessment (COPRAS)', as well as the 'revised Analytic Hierarchy Process (rAHP)'. Results show that the variation in how the alternatives' relative importance is rated has a direct impact on ranking uncertainty. For web service selection, Serrai et al [15] used 'Simple Additive Weighting (SAW)', 'Vise Kriterijumska Optimizacija I Kompromisno Resenje (VIKOR)', TOPSIS, and WPM, where the 'Analytic Hierarchy Process (AHP)' method was used to compute the weights. The authors created a new normalization technique called the 'Optimized Method of Reference Ideal (OMRI)' to normalize data by value constraints. To verify the findings, the authors conducted numerous simulations on a real dataset. Rana and Patel [16] used the WPM and TOPSIS methods to identify the best location for a small hydropower project. The AHP technique was used to assess the consistency of the decision matrix. The WPM and TOPSIS methods were used for the ranking, and the experiment's outcomes were satisfactory. Yildiz [17] experimentally studied the effect of electrical parameters on electrode wear ratio and material removal rate in mesoscale electrical discharge drilling. The best machining outcome was determined using VIKOR, TOPSIS, and 'Grey Relation Analysis (GRA)' techniques. Inconsistencies were eliminated by ranking the performance scores using the weighted sum and weighted product methods. Kabassi [18] applied four MCDM methods (SAW, WPM, TOPSIS, and 'Preference Ranking Organization Method for Enrichment Evaluation II (PROMETHEE II)') to evaluate environmental education programs and compare their performance. According to the sensitivity analysis, the TOPSIS method was more sensitive, and the SAW method was more robust. To choose the finest indoor environment among six apartments, Zavadskas et al [19] used the 'Weighted Aggregated Sum Product Assessment (WASPAS)' method in conjunction with a multi-attribute decision-making method based on an optimal alternative. The experimental results demonstrated that the WASPAS method can be quickly used to determine the best options, which enhances decision-making. The experimental findings of Abiola and Oke [20] demonstrated that the Taguchi technique and MCDM methods can be combined. They used the Taguchi method together with WSM, WPM, and WASPAS to minimize downtime in a manufacturing process. Based on the results of the experiment, they suggested using the WASPAS technique. To optimize the 'friction stir spot welding' parameters of Al 6061-T6 incorporated with silicon carbide, Chaudhary et al [21] used a hybrid WASPAS-Taguchi technique. Increased strength and hardness of the welded joint were the desired outcomes.
The hybrid WASPAS-Taguchi technique was effective, according to the experimental results.

A review of the literature demonstrates how crucial the camera calibration procedure is for machine vision-based measurement systems, since it has a direct impact on measurement accuracy. Models for camera calibration are being developed by various researchers. However, they fall short in addressing camera calibration errors, estimating measurement uncertainty, and handling camera lens distortion errors in the X and Y axes. Furthermore, the methods proposed by various researchers address only camera calibration. As a result, a method for handling errors in calibration, camera lens distortion, and object measurement still needs to be developed, and these errors still require optimization. While MCDM techniques offer many benefits for optimizing systems, camera calibration and measurement in machine vision systems still need to be optimized. Instead of developing new mathematical formulas, this work presents a reliable statistical methodology for camera calibration and measurement called 'Cyclic-Lead-Follower (CLF)'. The CLF method's results are further optimized through the use of different MCDM techniques, including TOPSIS, WASPAS, WSM, and WPM. The best measuring conditions are further confirmed by applying Taguchi analysis.

2. Experimental setup

In the field of metrology, the calibration accuracy of the machine vision system has an immediate effect on the measurement outcomes. The measurement results are also impacted by errors that occur during the calibration process. Hence, to further enhance the accuracy of measurements, it is essential to strengthen the calibration process against errors as opposed to devising an entirely new calibration methodology. The camera is calibrated by the calibration piece in a traditional camera calibration system, and the calibration value is then used to take measurements of the specified object. A calibration piece is used in this investigation to calibrate the smartphone camera due to its simplicity and ability to demonstrate accuracy. In metrology, slip gauges are the primary gauges and are used to calibrate and compare the readings of measuring instruments. The expanding number of smartphone applications is due to the progress made in camera and image processing systems, which have enabled the optimization of intrinsic parameters. As input variables, the 'number of measurements (M)', 'camera distance (D)', and 'light source color (C)' were utilized in this investigation. These parameters were determined through a screening experiment.

Figure 1 depicts the conceptual experimental setup in which the 'distance between the smartphone camera lens and slip gauge (a)', the 'distance between the light source and slip gauge (b)', the 'distance between the smartphone camera lens and light source (c)', the 'distance between the smartphone camera lens and floor (d)', and the 'distance between slip gauge and floor (e)' were determined during the initial screening experiment. Once determined, each of these distances remained fixed for the course of the experiment.

[Figure 1. Conceptual experimental setup, showing the fixed distances (a)-(e) between the smartphone camera lens, the light source, the slip gauge, and the floor.]

For purposes of calibration, four slip gauges (SA, SB, SC, SD) were utilized, with standard sizes of 20 mm, 30 mm, 40 mm, and 50 mm, respectively. The slip gauge images were taken using a smartphone camera featuring a resolution of 48 megapixels. The intrinsic camera parameters were kept constant for the duration of the experiments. Slip gauges have a standard value, so it is simple to find variations in measurements by subtracting the measured value from the standard value. The experimental configuration is shown in figure 2.

[Figure 2. Experimental configuration used for camera calibration and measurement.]

For each of the measurements, three distinct light sources, 'Red, Green, and Blue (RGB)', with a power rating of 3 watts, were used. In image processing, a captured image is well described in the RGB color space. Furthermore, it was noted during the screening experiment that the captured image was impacted by noise generated by the light source. As a result, RGB light sources were employed to generate the desired background while minimizing image noise because of their monochromatic characteristics. As a consequence, the edges of the slip gauges are readily visible. The images captured by the smartphone camera under the RGB light sources are displayed in figure 3.

[Figure 3. Slip gauge images captured by the smartphone camera under the red, green, and blue light sources.]

Slip gauges are rectangular in shape and have sharp edges. In general, the implementation of RGB light sources together with slip gauges reduces the number of image processing steps needed. For each slip gauge, pixel values were determined using the Coslab software. Preliminary arrangements were undertaken for the smartphone camera setup, including aligning the smartphone camera in parallel with the slip gauges, ensuring that the camera base and slip gauges were both parallel to the floor, positioning the slip gauges at an identical distance from the lens of the smartphone camera, and maintaining RGB light sources in a stationary position. Since three input variables typically influenced the magnitude of the calibration errors, four slip gauges were employed during the calibration and measurement process.

The Taguchi 'Design of Experiments (DOE)' method has been used by many researchers in their studies. Taguchi DOE is a well-known DOE approach that conducts experiments using orthogonal arrays, which are combinations of distinct input variables. One advantage of the Taguchi DOE is that it minimizes the number of experimental runs required to complete the experiment. For this investigation, the Taguchi DOE was chosen [22-24]. Three levels of each input factor were used in the development of the L27 orthogonal array. The Taguchi L27 DOE with the measured pixel values of the four slip gauges is displayed in table 1. The Taguchi L27 DOE was executed in three replicas, and the mean of the three output values obtained was utilized for the analysis (refer to table 1). The pixel measurements of the slip gauges were performed using the Coslab software, as illustrated in figure 4.
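Because three factors at three levels fully occupy the 27 runs, the factor settings in table 1 form a complete 3 x 3 x 3 arrangement and can be rebuilt programmatically. The following Python sketch (not the authors' code; the function name build_l27_design and the use of pandas are illustrative assumptions) reproduces the run order of table 1.

```python
# Sketch: rebuild the factor settings of the Taguchi L27 design in table 1.
# With three factors at three levels, the 27 runs cover every combination of
# D (camera distance, mm), C (light source colour level) and M (number of
# measurements); the run order below matches table 1.
from itertools import product

import pandas as pd

D_LEVELS = [500, 600, 700]  # smartphone camera distance (mm)
C_LEVELS = [1, 2, 3]        # light source colour level (coded)
M_LEVELS = [3, 5, 7]        # number of measurements

def build_l27_design() -> pd.DataFrame:
    """Return the 27-run design matrix in the same run order as table 1."""
    rows = [
        {"Exp_run": i + 1, "D": d, "C": c, "M": m}
        for i, (d, c, m) in enumerate(product(D_LEVELS, C_LEVELS, M_LEVELS))
    ]
    return pd.DataFrame(rows)

if __name__ == "__main__":
    print(build_l27_design().head(9))  # the nine runs of the D = 500 mm block
```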

Table 1. Taguchi L27 DOE with output measurements (SA-SD: measured slip gauge values in pixels).

Exp. run  D in mm  C  M  SA        SB        SC        SD
1         500      1  3  121.7333  186.8000  247.0667  307.2340
2         500      1  5  122.1905  186.5714  247.0952  306.9962
3         500      1  7  120.7844  186.0122  246.6700  306.0933
4         500      2  3  121.3810  186.0476  247.3810  307.1138
5         500      2  5  121.0327  185.4907  246.4147  306.1067
6         500      2  7  121.1744  185.4578  247.1222  306.2722
7         500      3  3  121.0000  187.4444  246.2222  305.9578
8         500      3  5  121.3540  186.2753  247.2060  306.8960
9         500      3  7  121.4776  185.7361  247.0233  306.9600
10        600      1  3  98.4444   145.1111  196.1111  244.8889
11        600      1  5  99.2381   146.7619  197.2857  245.6190
12        600      1  7  99.2667   146.9333  196.5333  245.2667
13        600      2  3  99.3333   145.8889  197.0000  245.3711
14        600      2  5  98.6667   145.9048  197.3333  244.5467
15        600      2  7  97.5556   145.4444  197.1111  245.1111
16        600      3  3  99.6667   145.8667  197.2667  245.3387
17        600      3  5  98.3810   146.4762  196.9524  245.1429
18        600      3  7  99.0667   145.6000  197.2667  244.9333
19        700      1  3  87.2857   130.4762  173.2857  216.2857
20        700      1  5  87.3333   131.3333  172.4000  215.3333
21        700      1  7  87.4667   129.6667  172.2667  214.4667
22        700      2  3  87.4444   128.5556  172.7778  214.8889
23        700      2  5  86.8667   129.3333  172.9333  215.1333
24        700      2  7  88.0000   129.4444  173.1111  215.4444
25        700      3  3  88.0000   130.3333  172.5556  216.1111
26        700      3  5  87.2222   129.8571  173.1905  215.8095
27        700      3  7  87.6190   131.9048  173.4286  215.6667

3. Proposed cyclic-lead-follower (CLF) method

The traditional approach to camera calibration entails using a calibration piece, whereby the camera is simply calibrated to correspond with the captured image of the calibration piece. The value of one pixel is determined by dividing the standard millimeter (mm) value of the calibration piece by its measured pixel value. The actual measurement of the desired object, in millimeters, is then obtained by multiplying its measured pixel value by this calibration value. Despite its simplicity and robustness, this method fails to adequately mitigate the impact of calibration errors. As illustrated in figure 5, the concept of CLF was utilized to overcome this issue. In the beginning, slip gauge SA was employed for calibration. Equation (1) was utilized to determine the calibration value for one pixel in mm.

Calibration value for one pixel (mm per pixel) = Standard value of the lead slip gauge (mm) / Measured pixel value of the lead slip gauge      (1)

To obtain the measured value in millimeters of length, the calibrated value for a single pixel is multiplied by the measured pixel values of the remaining three slip gauges: SB, SC, and SD. Subsequently, the remaining slip gauges were classified as follower slip gauges, while a slip gauge SA was classified as a lead slip gauge. In the same manner as the SA slip gauge, other slip gauges were utilized as lead, and the process was repeated. In the end, the average of three values obtained from each slip gauge was computed. Given that slip gauges were regarded as primary gauges with standard values, the percentage errors for each slip gauge were readily computed using equation (2).

Percentage error (%) = [(Measured value − Standard value) / Standard value] × 100      (2)

The slip gauge SA has a standard (reference) value of 20 mm; similarly, the other slip gauges have reference values of 30 mm, 40 mm, and 50 mm, respectively. Each experimental run of the Taguchi DOE therefore has four different percentage errors, calculated from the four slip gauges, and a single representative value of these percentage errors was estimated by using the 'Exponential Moving Average (EMA)' method. The EMA method is sometimes also called the 'Exponentially Weighted Moving Average (EWMA)' method, in which the estimated value for each experimental run was calculated by using equation (3) [25-28].

Fnext = α · Pactual + (1 − α) · Pforecasted      (3)

In equation (3), 'Fnext ' is the forecasted next value; 'Pactual ' is the previous actual value; and 'Pforecasted ' is the previous forecasted value. The 'α' value varies from 0 to 1, but here, for simplicity, it was taken as 0.5. One of the objectives of this study was to minimize the percentage errors in camera calibration and measurement and to fulfill this, the EMA method was applied to the percentage errors.

[Figure 5. Concept of the proposed Cyclic-Lead-Follower (CLF) calibration and measurement method.]

The reason to use the EMA method is that it is one of the popular statistical methods used in forecasting and gives better-forecasted values (for minimization) as compared with the 'Simple Average (SiA)' and 'Simple Moving Average (SMA)' (see figures 8 and 9). The EMA value for slip gauge SB was calculated based on the percentage error value of slip gauge SA, and this process was continued up to the slip gauge SD, from which the final EMA value was estimated. During the calculation of EMA for slip gauge SB, the previous actual value and the previous forecasted value were both considered to be the same percentage error value for slip gauge SA. Table 2 shows the measured values of slip gauges with their respective percentage errors and the calculated EMA values. The percentage error can be positive or negative. Therefore, all values of the percentage errors were considered positive (absolute) values based on which EMA values were calculated. The objective of this study was to optimize the calibration errors. Therefore, various MCDM techniques were applied, from which optimization of the percentage errors was performed, which is explained in the results and discussion section.
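To make the calculation concrete, the following Python sketch (not the authors' code; function names and data structures are illustrative assumptions) walks through one experimental run of the CLF procedure: each slip gauge acts in turn as the lead gauge via equation (1), the remaining gauges are measured as followers, the three follower readings of each gauge are averaged, absolute percentage errors follow from equation (2), and the run-level EMA chains equation (3) with α = 0.5, seeded with the SA error.

```python
# Sketch of the Cyclic-Lead-Follower (CLF) calculation described above.
STANDARDS_MM = {"SA": 20.0, "SB": 30.0, "SC": 40.0, "SD": 50.0}

def clf_measure(pixels: dict[str, float]) -> dict[str, float]:
    """Average CLF measurement (mm) of every gauge over the three cycles
    in which it acts as a follower."""
    sums = {name: 0.0 for name in pixels}
    for lead, lead_px in pixels.items():
        mm_per_pixel = STANDARDS_MM[lead] / lead_px           # equation (1)
        for follower, follower_px in pixels.items():
            if follower != lead:
                sums[follower] += follower_px * mm_per_pixel
    return {name: total / (len(pixels) - 1) for name, total in sums.items()}

def abs_percentage_errors(measured_mm: dict[str, float]) -> dict[str, float]:
    """Absolute percentage error of each gauge, equation (2)."""
    return {
        name: abs(value - STANDARDS_MM[name]) / STANDARDS_MM[name] * 100.0
        for name, value in measured_mm.items()
    }

def chained_ema(errors: dict[str, float], alpha: float = 0.5) -> float:
    """EMA of the four errors in the order SA -> SD, seeded with SA (equation (3))."""
    ordered = [errors[name] for name in ("SA", "SB", "SC", "SD")]
    forecast = ordered[0]
    for actual in ordered[1:]:
        forecast = alpha * actual + (1.0 - alpha) * forecast
    return forecast

if __name__ == "__main__":
    # Pixel values of experimental run 1 from table 1.
    run1 = {"SA": 121.7333, "SB": 186.8000, "SC": 247.0667, "SD": 307.2340}
    mm = clf_measure(run1)
    errors = abs_percentage_errors(mm)
    print(mm, errors, chained_ema(errors))
```

Running this on the pixel values of experimental run 1 reproduces, to rounding, the measured values and the EMA of about 0.6255 reported in table 2.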

Table 2. Measured values, absolute percentage errors, and EMA values of slip gauges.

Exp. run  SA (mm)  SB (mm)  SC (mm)  SD (mm)  SA (%)  SB (%)  SC (%)  SD (%)  EMA (%)
1         19.6900  30.4444  40.1595  49.8531  1.5499  1.4813  0.3987  0.2938  0.6255
2         19.7764  30.3756  40.1401  49.7698  1.1182  1.2520  0.3503  0.4604  0.6140
3         19.5988  30.4498  40.3069  49.8957  2.0059  1.4992  0.7673  0.2087  0.7343
4         19.6536  30.3425  40.3087  49.9278  1.7322  1.1418  0.7718  0.1443  0.6243
5         19.6639  30.3533  40.2740  49.9266  1.6805  1.1778  0.6850  0.1468  0.6019
6         19.6657  30.3018  40.3688  49.8893  1.6713  1.0061  0.9220  0.2213  0.6758
7         19.5989  30.6887  40.1144  49.7479  2.0054  2.2958  0.2859  0.5043  0.8613
8         19.6505  30.3962  40.2765  49.8877  1.7474  1.3208  0.6913  0.2246  0.6686
9         19.6929  30.3032  40.2686  49.9410  1.5353  1.0105  0.6716  0.1179  0.5451
10        20.1771  29.5688  40.1422  50.1095  0.8856  1.4372  0.3554  0.2190  0.4887
11        20.2026  29.7366  40.0829  49.8361  1.0131  0.8779  0.2073  0.3278  0.4521
12        20.2359  29.8209  39.9298  49.8038  1.1795  0.5971  0.1756  0.3924  0.4622
13        20.2791  29.5746  40.1060  49.8941  1.3953  1.4179  0.2650  0.2118  0.5238
14        20.1535  29.6608  40.3071  49.8076  0.7676  1.1308  0.7676  0.3849  0.6217
15        19.9399  29.6674  40.4252  50.1830  0.3007  1.1088  1.0629  0.3660  0.6249
16        20.3399  29.5254  40.1198  49.8125  1.6996  1.5821  0.2996  0.3749  0.6726
17        20.0654  29.8005  40.1826  49.9436  0.3271  0.6649  0.4565  0.1128  0.2945
18        20.2410  29.5467  40.2467  49.8602  1.2052  1.5109  0.6167  0.2796  0.6335
19        20.1320  30.0591  39.8693  49.7379  0.6601  0.1971  0.3267  0.5241  0.4509
20        20.1636  30.3478  39.6309  49.4874  0.8181  1.1594  0.9228  1.0253  0.9905
21        20.3126  29.9959  39.8026  49.4860  1.5630  0.0136  0.4934  1.0280  0.8344
22        20.3323  29.6923  40.0129  49.6816  1.6617  1.0256  0.0322  0.6368  0.6624
23        20.1437  29.9172  40.0404  49.7316  0.7184  0.2761  0.1011  0.5367  0.4180
24        20.3838  29.7902  39.8797  49.5592  1.9192  0.6993  0.3009  0.8815  0.8433
25        20.3383  29.9960  39.6196  49.6523  1.6915  0.0134  0.9511  0.6953  0.7985
26        20.1678  29.9513  39.9497  49.7284  0.8388  0.1623  0.1256  0.5432  0.4281
27        20.1500  30.3707  39.7462  49.3402  0.7500  1.2358  0.6345  1.3195  1.0666

4. Percentage error optimization using MCDM techniques

MCDM methods are simple and are effectively used in 'multi-objective optimization' problems. To use any MCDM method, it is necessary to define the weights for each output response. One popular method for determining the weights of each output response is AHP. In this study, the percentage errors of the four slip gauges and their calculated EMA were chosen as the parameters to be optimized. Hence, the weight for each parameter was set to 0.20. WSM is the simplest MCDM method, where the performance score is calculated by row-wise summation of the weighted normalized matrix, while in WPM, the performance score is calculated by row-wise multiplication of the weighted normalized matrix. The WASPAS is a modified method that uses a combination of the WSM and WPM. The performance score of the WASPAS method is calculated by using equation (4), where 'Q1' is the performance score of WSM and 'Q2' is the performance score of WPM [19]. While calculating the performance score, all the percentage error values and the calculated EMA of the slip gauges were considered as non-beneficial criteria.

Q = λ · Q1 + (1 − λ) · Q2      (4)

In equation (4), the value of 'λ' varies from 0 to 1. In this study, the value of 'λ' was considered to be 0.5. Finally, ranks are assigned to the performance scores from the highest to the lowest. Steps to calculate the performance scores for the WSM, WPM, and WASPAS methods are explained in [19, 20]. By using these steps, the performance scores of WSM, WPM, and WASPAS were calculated. The TOPSIS method is a popular method used to optimize the output responses in 'multi-objective optimization'. The TOPSIS method provides practical solutions with a sequence of the best alternatives. The TOPSIS method was also found to be suitable for output responses having different characteristics. The steps used in TOPSIS to assign the ranks to the experimental runs are simple and are explained in [29-31]. By using these steps, the closeness coefficient values and their associated ranks were calculated. Table 3 shows the results of the four MCDM methods used for the analysis of the percentage errors and calculated EMA of the slip gauges. It is observed that all four MCDM methods assigned rank #1 to experimental run #17. Thus, experimental run #17 was identified as providing the initial measuring conditions.
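For reference, the following Python sketch (not the authors' implementation) computes the four scores for a matrix of non-beneficial criteria with equal weights of 0.20. The linear (minimum/value) normalization for WSM/WPM and the vector normalization for TOPSIS follow common textbook formulations; the exact variants used in the paper may differ, so the scores are indicative rather than a reproduction of table 3.

```python
# Sketch of the WSM, WPM, WASPAS, and TOPSIS scores for non-beneficial
# (lower-is-better) criteria. Higher score = better alternative.
import numpy as np

def mcdm_scores(matrix: np.ndarray, weights: np.ndarray, lam: float = 0.5):
    """matrix: alternatives x criteria (all non-beneficial)."""
    # WSM / WPM / WASPAS: linear normalisation min(column) / value.
    norm = matrix.min(axis=0) / matrix
    q1 = (norm * weights).sum(axis=1)              # WSM performance score
    q2 = np.prod(norm ** weights, axis=1)          # WPM performance score
    waspas = lam * q1 + (1.0 - lam) * q2           # equation (4)

    # TOPSIS: vector normalisation, then distances to the ideal solutions.
    v = matrix / np.sqrt((matrix ** 2).sum(axis=0)) * weights
    ideal_best, ideal_worst = v.min(axis=0), v.max(axis=0)   # non-beneficial
    d_best = np.linalg.norm(v - ideal_best, axis=1)
    d_worst = np.linalg.norm(v - ideal_worst, axis=1)
    topsis = d_worst / (d_best + d_worst)          # closeness coefficient
    return q1, q2, waspas, topsis

if __name__ == "__main__":
    # Criteria: absolute percentage errors of SA-SD and the EMA (table 2 rows).
    errors = np.array([
        [1.5499, 1.4813, 0.3987, 0.2938, 0.6255],   # experimental run 1
        [0.3271, 0.6649, 0.4565, 0.1128, 0.2945],   # experimental run 17
    ])
    weights = np.full(5, 0.20)
    for name, score in zip(("WSM", "WPM", "WASPAS", "TOPSIS"),
                           mcdm_scores(errors, weights)):
        print(name, np.round(score, 4))
```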

Table 3. Ranking of the MCDM methods.

Exp. run  WSM       Rank  WPM       Rank  WASPAS    Rank  TOPSIS    Rank
1         0.227744  21    0.120681  18    0.174213  18    0.574846  13
2         0.219258  22    0.125449  16    0.172354  21    0.603406  10
3         0.228526  20    0.104019  24    0.166273  22    0.496342  23
4         0.296157  12    0.125645  15    0.210901  15    0.560668  15
5         0.299051  10    0.129158  14    0.214105  14    0.57685   12
6         0.234767  18    0.11318   21    0.173974  19    0.532582  20
7         0.166818  24    0.094493  25    0.130656  24    0.434877  25
8         0.234369  19    0.112449  22    0.173409  20    0.53928   18
9         0.350909  4     0.145098  12    0.248004  9     0.61269   9
10        0.311448  9     0.154813  10    0.233131  10    0.651366  7
11        0.29261   13    0.173593  9     0.233102  11    0.73048   6
12        0.277104  15    0.180585  7     0.228845  12    0.740547  5
13        0.288278  14    0.14924   11    0.218759  13    0.635628  8
14        0.242493  16    0.121983  17    0.182238  16    0.568724  14
15        0.364407  2     0.139658  13    0.252033  8     0.537787  19
16        0.206334  23    0.116213  20    0.161274  23    0.55029   17
17        0.602001  1     0.264928  1     0.433465  1     0.780787  1
18        0.235825  17    0.116706  19    0.176266  17    0.555071  16
19        0.298095  11    0.21203   5     0.255063  7     0.751433  4
20        0.164287  26    0.086508  27    0.125398  27    0.3475    26
21        0.341315  6     0.216835  4     0.279075  4     0.511775  21
22        0.363184  3     0.179478  8     0.271331  6     0.593284  11
23        0.340037  7     0.248995  3     0.294516  2     0.77748   2
24        0.15201   27    0.107484  23    0.129747  25    0.494177  24
25        0.348549  5     0.204762  6     0.276656  5     0.506822  22
26        0.318561  8     0.255224  2     0.286893  3     0.771008  3
27        0.164828  25    0.087748  26    0.126288  26    0.338098  27

5. Results and discussion

As already pointed out, the traditional method of calibrating a camera makes use of a calibration piece against which the camera is calibrated, and measurements are thereafter performed based on that calibrated value. Calibration errors are not effectively handled by this method, though. The innovative CLF method is used to solve this issue. The results of the CLF method were compared with those of the traditional approach in order to determine the efficacy of the CLF method. For the traditional method, equation (1) was used to calibrate slip gauge SA to determine the calibrated value of one pixel, and the calibrated value of slip gauge SA was then used to measure the other slip gauges. The percentage errors calculated in the traditional method using equation (2) were further analyzed using two statistical forecasting methods, namely SiA and EMA. SiA values were calculated by simply taking the mean of the three percentage errors for each experimental run, while EMA values were calculated using equation (3). Table 4 shows the comparative results of the 'traditional method' and the 'CLF method'.

Table 4. Comparative results of the traditional method with the CLF method. SB, SC, SD, SiA, and EMA are the percentage errors of the traditional method (%); SiA (CLF) and EMA (CLF) are the corresponding CLF values (%); the last two columns give the percentage error reduction of the CLF method relative to the traditional method (%).

Exp. run  SB        SC        SD        SiA       EMA       SiA (CLF)  EMA (CLF)  SiA red.  EMA red.
1         2.300110  1.478642  0.953122  1.577291  1.421249  0.930928   0.6254863  −40.98    −55.99
2         1.792673  1.110678  0.497584  1.133645  0.974630  0.795197   0.6140121  −29.85    −37.00
3         2.668973  2.111659  1.368462  2.049698  1.879389  1.120279   0.7343148  −45.34    −60.93
4         2.183863  1.902707  1.206591  1.764387  1.624938  0.947521   0.6243489  −46.30    −61.58
5         2.171131  1.796760  1.164975  1.710955  1.574460  0.922529   0.601938   −46.08    −61.77
6         2.033493  1.969612  1.101259  1.701455  1.551406  0.955158   0.6758217  −43.86    −56.44
7         3.275176  1.744720  1.143067  2.054321  1.826507  1.272840   0.8612506  −38.04    −52.85
8         2.331654  1.853256  1.157275  1.780728  1.624865  0.995986   0.6686078  −44.07    −58.85
9         1.931595  1.674422  1.075409  1.560475  1.439208  0.833823   0.5450724  −46.57    −62.13
10        1.730625  0.395034  0.496614  0.874091  0.779722  0.724316   0.4887246  −17.13    −37.32
11        1.407550  0.599808  0.998081  1.001813  1.000880  0.606514   0.4520937  −39.46    −54.83
12        1.320797  1.007388  1.168570  1.165585  1.166331  0.586146   0.4621593  −49.71    −60.37
13        2.087994  0.838926  1.192841  1.373254  1.328151  0.822487   0.5238058  −40.11    −60.56
14        1.415701  0.000000  0.859459  0.758387  0.783655  0.762753   0.6216687  0.58      −20.67
15        0.607441  1.025057  0.501139  0.711212  0.658694  0.709608   0.6249089  −0.23     −5.13
16        2.430323  1.036789  1.536321  1.667811  1.634939  0.989067   0.6725848  −40.70    −58.86
17        0.742175  0.096805  0.329138  0.389373  0.374314  0.390328   0.294541   0.25      −21.31
18        2.018843  0.437416  1.103634  1.186631  1.165882  0.903111   0.6334881  −23.89    −45.66
19        0.345517  0.736498  0.883797  0.655271  0.712402  0.427030   0.4508992  −34.83    −36.71
20        0.254453  1.297710  1.374046  0.975403  1.075064  0.981387   0.9905181  0.61      −7.86
21        1.168699  1.524390  1.920732  1.537940  1.633638  0.774493   0.8344375  −49.64    −48.92
22        1.990682  1.207116  1.702668  1.633489  1.650784  0.839073   0.6623592  −48.63    −59.88
23        0.741878  0.460476  0.936301  0.712885  0.768739  0.408088   0.417959   −42.76    −45.63
24        1.936027  1.641414  2.070707  1.882716  1.929714  0.950224   0.8432841  −49.53    −56.30
25        1.262626  1.957071  1.767677  1.662458  1.688763  0.837815   0.7985423  −49.60    −52.71
26        0.746133  0.718835  1.030027  0.831665  0.881256  0.417476   0.4281298  −49.80    −51.42
27        0.362319  1.032609  1.543478  0.979469  1.120471  0.984978   1.0666311  0.56      −4.81

In table 4, the reductions in the CLF values relative to the corresponding values of the traditional method were determined by using equation (5).

Percentage error reduction (%) = [(CLF method value − Traditional method value) / Traditional method value] × 100      (5)

Reducing the percentage errors was the goal of the study. Table 4 shows that the SiA and EMA values for the CLF method are significantly lower than those for the traditional method, which is also shown in figures 6 and 7. Thus, the comparative analysis demonstrates that the proposed 'CLF method' outperforms the conventional method.

[Figure 6. SiA values of the percentage errors for the traditional and CLF methods.]

[Figure 7. EMA values of the percentage errors for the traditional and CLF methods.]

As can be seen in figures 8 and 9, the EMA method yields the smallest estimated percentage errors, which is why it was preferred over the SiA and SMA methods within the CLF method. The SMA values were calculated as the average of the previous two values; with four percentage errors per run, this yields three SMA values in the first iteration and two in the second, and a single SMA value remains after the third iteration, which was used as the final SMA value for the analysis.
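A small Python sketch (not the authors' code; function names are illustrative) of the three estimators applied to the four per-gauge percentage errors of a single experimental run clarifies the procedure.

```python
# Sketch: the simple average (SiA), the iterated two-point simple moving average
# (SMA) described above, and the chained EMA of equation (3) with alpha = 0.5,
# each applied to the four percentage errors of one experimental run.
def sia(errors: list[float]) -> float:
    return sum(errors) / len(errors)

def iterated_sma(errors: list[float]) -> float:
    """Repeatedly average adjacent pairs (4 -> 3 -> 2 -> 1 values)."""
    values = list(errors)
    while len(values) > 1:
        values = [(a + b) / 2.0 for a, b in zip(values, values[1:])]
    return values[0]

def ema(errors: list[float], alpha: float = 0.5) -> float:
    forecast = errors[0]
    for actual in errors[1:]:
        forecast = alpha * actual + (1.0 - alpha) * forecast
    return forecast

if __name__ == "__main__":
    run1 = [1.5499, 1.4813, 0.3987, 0.2938]   # table 2, experimental run 1
    print(sia(run1), iterated_sma(run1), ema(run1))
```

For the errors of experimental run 1 this yields a SiA of about 0.9309 and an EMA of about 0.6255, matching the corresponding entries in tables 4 and 2.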

[Figure 8. Percentage errors estimated by the SiA, SMA, and EMA methods for each experimental run of the CLF method.]

[Figure 9. Distribution (histogram) of the percentage errors estimated by the SiA, SMA, and EMA methods.]

From figure 9, it was also observed that the EMA method shows a well-distributed, dome-shaped curve with a narrower range of percentage errors than the SiA and SMA methods. The EMA method has an overall 'mean percentage error' of 0.6377%, compared with 0.7844% for the SMA method and 0.8107% for the SiA method. 'Analysis of Variance (ANOVA)' is a powerful tool that explains the relationship of the input factors with the respective output responses. ANOVA also quantifies how much each input factor contributes, as a percentage, to the variation in the output response. ANOVA was applied to the millimeter values of the four slip gauges (see table 2), and the results are shown in table 5.

Table 5. Results of the ANOVA method applied to the millimeter values of the four slip gauges in the CLF method.

Input factor  SA P-value  SA PC  SB P-value  SB PC  SC P-value  SC PC  SD P-value  SD PC  Average PC
D             0.000       88.97  0.000       80.39  0.000       67.33  0.001       60.05  74.18
C             0.987       0.01   0.137       4.61   0.030       12.97  0.351       3.65   5.31
M             0.518       0.86   0.675       0.74   0.575       1.38   0.653       1.37   1.09
D×C           0.668       1.46   0.514       3.18   0.784       1.99   0.989       0.43   1.77
D×M           0.477       2.31   0.503       3.26   0.810       1.81   0.217       11.13  4.63
C×M           0.631       1.60   0.944       0.63   0.407       5.25   0.216       11.17  4.66

Note: PC = percentage contribution (%). P-values of 0.05 or less indicate a significant influence of the corresponding input factor.

From table 5, it can be seen that the smartphone camera distance was the most influential input factor, with an average percentage contribution of 74.18% to the slip gauge measurements. This is because when the distance between the smartphone camera and the slip gauge increases, the scale of the slip gauge image decreases, and the pixel value for each slip gauge decreases accordingly. The light source (RGB) was found to be the least significant input factor, with an average percentage contribution of 5.31%. The slip gauges have sharp edges that are much less influenced by the RGB light sources. Also, due to the sharpness of the edges, very few or no variations were observed with the number of measurements. It was also deduced that the camera distance suppressed the effects of the RGB light source and the number of measurements by dominating the variation in pixel values. However, the combination of the monochromatic RGB light sources and the sharp-edged slip gauges helped to reduce the image processing steps, which shows the usefulness of the proposed experimental setup.
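As a sketch of how the figures in table 5 could be obtained, the following Python code (not the authors' Minitab analysis) runs an ANOVA with two-way interactions on the run-mean SA values from table 2 and converts each term's sum of squares into a percentage contribution. Because the published analysis may have used the three replicates and different software settings, the numbers need not match table 5 exactly.

```python
# Sketch: ANOVA p-values and percentage contributions (PC) for slip gauge SA,
# using the run-mean SA values (mm) of table 2 and the factor levels of table 1.
from itertools import product

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

SA_MM = [  # measured SA values (mm) for experimental runs 1-27, table 2
    19.6900, 19.7764, 19.5988, 19.6536, 19.6639, 19.6657, 19.5989, 19.6505, 19.6929,
    20.1771, 20.2026, 20.2359, 20.2791, 20.1535, 19.9399, 20.3399, 20.0654, 20.2410,
    20.1320, 20.1636, 20.3126, 20.3323, 20.1437, 20.3838, 20.3383, 20.1678, 20.1500,
]

def sa_percentage_contributions() -> pd.DataFrame:
    """ANOVA of SA against D, C, M and their two-way interactions;
    PC = sum of squares of a term / total sum of squares."""
    design = pd.DataFrame(
        [{"D": d, "C": c, "M": m}
         for d, c, m in product([500, 600, 700], [1, 2, 3], [3, 5, 7])]
    ).astype({"D": "category", "C": "category", "M": "category"})
    design["SA"] = SA_MM
    model = ols("SA ~ D + C + M + D:C + D:M + C:M", data=design).fit()
    table = sm.stats.anova_lm(model, typ=2)
    table["PC (%)"] = 100.0 * table["sum_sq"] / table["sum_sq"].sum()
    return table[["PR(>F)", "PC (%)"]]

if __name__ == "__main__":
    print(sa_percentage_contributions().round(3))
```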

From table 3, it is evident that all four methods identified experimental run #17 as the best alternative and assigned it rank #1. Experimental run #17 corresponds to the initial measuring conditions: a smartphone camera distance of 600 mm, a blue light source, and five measurements. All four methods selected experimental run #17, which provides strong validation for the initial measuring conditions. From equation (4), it is clear that the WASPAS method represents the combined effect of the WSM and WPM. Therefore, the Taguchi analysis was performed in Minitab software for the WASPAS and TOPSIS methods only, to determine the optimal measuring conditions.

Rank #1 is assigned to the highest performance score of the WASPAS method and the highest closeness coefficient of the TOPSIS method; therefore, the Taguchi analysis of WASPAS and TOPSIS was performed using the 'larger-is-better' Taguchi design philosophy. The main effects plots for the WASPAS method and TOPSIS method are shown in figures 10 and 11.

[Figure 10. Main effects plot for the WASPAS performance scores.]

[Figure 11. Main effects plot for the TOPSIS closeness coefficients.]

From figure 10, the optimal measuring conditions for the WASPAS method determined by the Taguchi analysis were a smartphone camera distance of 600 mm, a blue light source, and five measurements, corresponding to experimental run #17. Thus, the Taguchi analysis of the WASPAS method yielded the same optimal conditions as the initial measuring conditions. From figure 11, the optimal measuring conditions for the TOPSIS method were determined as a smartphone camera distance of 600 mm, a red light source, and five measurements, corresponding to experimental run #11. To confirm the optimal measuring conditions, the experimental results of the initial measuring conditions (experimental run #17) were compared with those of the optimal measuring conditions (experimental run #11). From table 2 and figure 8, it was observed that experimental run #17 shows better results than experimental run #11. Thus, the initial measuring conditions at experimental run #17 were confirmed as the final optimal measuring conditions.

To validate the final optimal measuring conditions, a validation experiment was performed at the conditions of experimental run #17: a smartphone camera distance of 600 mm from the slip gauge, a blue light source, and five measurements. In the validation experiment, each slip gauge was calibrated using equation (1), and using EMA, the final calibrated value for one pixel was determined to be 0.2038 mm. Table 6 shows the measurement results of the slip gauges in the validation experiment, and table 7 shows the percentage errors and the estimated EMA value.

Table 6. Measurement of four slip gauges in the validation experiment.

Sr No.   SA (pixels)  SB (pixels)  SC (pixels)  SD (pixels)  SA (mm)   SB (mm)   SC (mm)   SD (mm)
1        101          146          197          245          20.5838   29.7548   40.1486   49.9310
2        100          147          197          245          20.3800   29.9586   40.1486   49.9310
3        98           145          198          247          19.9724   29.5510   40.3524   50.3386
4        99           144          196          247          20.1762   29.3472   39.9448   50.3386
5        99           147          196          246          20.1762   29.9586   39.9448   50.1348
Average  99.4         145.8        196.8        246          20.2577   29.7140   40.1078   50.1348

Table 7. Estimated percentage errors in the validation experiment.

Sr No.  SA (%)  SB (%)  SC (%)  SD (%)  Estimated EMA (%)
1       1.3438  0.9532  0.3800  0.3800  0.5721

The average values from table 6 were used to calculate the percentage errors using equation (2). The EMA method was then applied to the percentage errors of the validation experiment to estimate the final percentage error value. From table 7, the final estimated value of the percentage error was 0.5721%, which is less than 1%. Thus, the validation experiment shows that the proposed 'CLF method' is effective for camera calibration and measurement.

6. Applicability of the proposed method

6.1. Estimation of the camera lens distortion errors

Any captured image is represented in two-dimensional image coordinates during image processing. When a camera captures images, it generates distortions along the 'x' and 'y' axes. This is caused by a problem known as camera lens distortion error, which occurs when the camera lens affects image acquisition. The proposed study estimated slip gauge measurement errors vertically. However, the proposed approach is also applicable for estimating camera lens distortion errors in the horizontal direction.

6.2. Camera calibration

This study implemented a slip gauge-based camera calibration system. The proposed method can also be applied in combination with alternative calibration techniques, such as checkerboards, calibration pieces, calibration grids, grids of circles, retroreflective targets, and so on. Owing to its statistical nature, the proposed method can accurately determine the errors in camera calibration. Both on-machine and dedicated machine vision systems can benefit from this capability.

6.3. Distance and length measurement of measuring object

The slip gauges were used to estimate the pixel error using the proposed CLF method. In this case, the distance between the camera lens and the object being measured remained constant. A set of slip gauges can be positioned at a different distance from the camera lens to measure the distance between the camera lens and the measuring object. The values of the measured pixels vary depending on the distance. It is possible to calculate the distance between the camera lens and the measuring object using the inverse proportional relationship. To better comprehend this, consider the following two cases from figure 12, which illustrate the concept of distance and length measurement in machine vision systems.

[Figure 12. Concept of distance and length measurement: slip gauge pixel lengths a1 and a2 observed at camera-lens-to-gauge distances b1 and b2.]

In figure 12, 'a1' and 'a2' represent the slip gauge's pixel lengths, and 'b1' and 'b2' represent the separations between the camera lens and the slip gauge. When the slip gauge's position changes, the values of 'a1' and 'b1' also change to 'a2' and 'b2', respectively. It is seen that the pixel length of the slip gauge decreases as the distance between the camera lens and the slip gauge increases, indicating the inverse relationship that is expressed by equation (6).

a = k / b      (6)

In equation (6), 'a' stands for the slip gauge's pixel length, 'k' for the proportionality constant, and 'b' for the separation between the camera lens and the slip gauge. Therefore, using figure 12 and equation (6), it is possible to define the final relationship between the lengths of the slip gauge pixels and the separations between the camera lens and slip gauge, which is represented by equation (7).

a1 · b1 = a2 · b2      (7)

It is simple to measure the slip gauge length or the camera lens distance by using equation (7). In figure 12, the measurement of the object can be carried out quickly and simply by combining the measuring object and the slip gauge. As previously covered in detail, the proposed CLF method performs better than the traditional calibration method, which increases measurement accuracy by decreasing pixel measurement errors. As a result, measuring an object's distance and length can be performed well by employing the proposed CLF method.
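A short Python sketch (illustrative only, with hypothetical numbers) of equations (6) and (7): given a reference slip gauge of pixel length a1 at a known distance b1, an unknown distance b2 or an unknown pixel length a2 follows from a1 · b1 = a2 · b2.

```python
# Sketch of the inverse-proportionality relation of equations (6) and (7).
def distance_from_pixels(a1: float, b1: float, a2: float) -> float:
    """Camera-to-object distance b2 for an observed pixel length a2 (equation (7))."""
    return a1 * b1 / a2

def pixels_from_distance(a1: float, b1: float, b2: float) -> float:
    """Expected pixel length a2 of the same gauge at distance b2 (equation (7))."""
    return a1 * b1 / b2

if __name__ == "__main__":
    # Hypothetical reference reading: a 100-pixel gauge length at a 600 mm distance.
    a1, b1 = 100.0, 600.0
    print(distance_from_pixels(a1, b1, a2=120.0))   # 500.0 mm
    print(pixels_from_distance(a1, b1, b2=750.0))   # 80.0 pixels
```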

6.4. On-machine measurement system

The on-machine measurement system based on machine vision is one of the measurement methods. This involves inspecting the machined component on the machining center itself. Cutting down on the time required for quality inspections has the greater benefit of increasing the manufacturing system's productivity. However, in the on-machine measurement, it becomes challenging to maintain the distance between the camera lens and the machined component due to the complexity of the camera calibration process. Additionally, the noise produced during image acquisition as a result of various uncertainty factors, such as temperature, vibration, light intensity, etc, reduces the accuracy of the on-machine measurement system. The proposed CLF method, which was found to be reliable, accurate, and effective for estimating errors in image measurement, uses RGB light sources with slip gauges. These benefits are important for putting the suggested method for the on-machine measurement system into practice. The proposed method is also regarded as cost-effective because of its simplicity.

7. Conclusions

The objective of this research was to develop a robust statistical technique for calibrating and measuring cameras, with the requirement that errors are minimized through the use of diverse MCDM methods. As primary gauges, slip gauges are utilized in the field of metrology. Therefore, slip gauges were employed to conduct the measurements in this study. Four MCDM methods were employed to optimize the camera calibration errors: WSM, WPM, WASPAS, and TOPSIS. Following an experimental investigation, the subsequent deductions are made:

  • It was determined that the proposed CLF method is more efficient than the traditional camera calibration method in addressing calibration errors. One drawback of the CLF method is that it uses four slip gauges; however, this adds to the robustness and uniqueness of the calibration system by enabling each slip gauge to serve as both a lead and a follower for calibration and measurement. The experimental outcomes demonstrate that the EMA method incorporated into the CLF method yields a significantly reduced percentage error in comparison to traditional camera calibration techniques.
  • The outcomes of the WASPAS methods are supported by the WSM and WPM methods, indicating that the three methods are relevant to one another. This provides substantial evidence in favor of the selection of the optimal conditions that are desired. In comparison to TOPSIS, the WASPAS method exhibits superior performance when applied to output responses that contain unit dimensions.
  • The primary determinant of the variation in percentage errors in slip gauge measurement is the distance between the smartphone camera and the slip gauge, which accounts for an average percentage contribution of 74.18%. Future work that fixes the distance between the smartphone camera and the slip gauge system will increase its robustness and accuracy. The effectiveness of the slip gauges as a calibration piece is demonstrated by the fact that their sharp edges suppress the effect of the light sources on the image. The integration of the slip gauges and the RGB light source significantly reduces the number of image processing stages, reflecting the simplicity of the camera calibration system.
  • As shown in table 4, the implementation of the CLF method led to an average percentage error reduction of around 34% for SiA values and 46% for EMA values, in comparison to the traditional calibration method. As depicted in figure 9, the histogram analysis reveals that the CLF-EMA method exhibited the least amount of percentage error in comparison to the CLF-SiA and CLF-SMA methods. It was determined that the mean percentage errors for CLF-SiA, CLF-SMA, and CLF-EMA were 0.8107%, 0.7844%, and 0.6377% respectively.
  • In a validation experiment employing the EMA method, the estimated percentage error of camera calibration was 0.5721%. This value indicates that the proposed CLF method is more efficient in handling camera lens distortion errors. As a result, the CLF method must be investigated under a variety of machine vision system circumstances in future research.
  • Future studies may incorporate the proposed method for vision-based on-machine measurement systems.

Acknowledgments

The authors express their gratitude to 'Dr. Babasaheb Ambedkar Technological University, Lonere, Maharashtra, India' for providing funding support through the 'TEQIP III' fellowship to conduct this research. This experiment was conducted at the 'Centre for Advanced Machining Technology (CAMT)', a facility of the 'Mechanical Engineering Department' at 'Dr. Babasaheb Ambedkar Technological University'.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).

Conflict of interest

There is no conflict of interest from the author's side.
