

Option 1: Load both images as arrays and calculate an element-wise (pixel-by-pixel) difference. Option 2: Calculate some feature vector for each image (like a histogram), and calculate the distance between the feature vectors rather than between the images. However, there are some decisions to make first.

Questions:
Are the images of the same shape and dimensions? If not, you may need to resize or crop them; the PIL library will help to do this in Python. If they were taken with the same settings and the same device, they are probably the same size.
Are the images well-aligned? If the camera and the scene are still, the images are likely to be well-aligned. If not, you may want to run cross-correlation first, to find the best alignment.

The method we use to determine the quality of an image is the Laplacian operator, which allows us to obtain a mathematical measure of image sharpness by studying the edges of objects. Thus, a poorly focused image will have a lower Laplacian value than one in which the edges are clearly distinguishable.

Process by parts: This option allows the image to be processed in parts, intended for very large images.
Window Width: Width of each window, in pixels, when processing by parts.
Window Height: Height of each window, in pixels, when processing by parts.
Overlap Windows: Overlap between adjacent windows, in pixels, when processing by parts.
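Both comparison options can be sketched in a few lines of NumPy. The function names here are illustrative, and synthetic arrays stand in for real images; in practice you would load images with, e.g., PIL's `Image.open` and `np.asarray`.

```python
import numpy as np

def pixel_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Option 1: mean absolute pixel-by-pixel difference (0.0 = identical)."""
    if a.shape != b.shape:
        raise ValueError("images must have the same shape; resize or crop first")
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).mean())

def histogram_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Option 2: L1 distance between normalised grey-level histograms."""
    ha = np.bincount(a.ravel(), minlength=256) / a.size
    hb = np.bincount(b.ravel(), minlength=256) / b.size
    return float(np.abs(ha - hb).sum())

# Two synthetic 8-bit grayscale "images" for illustration.
rng = np.random.default_rng(0)
img1 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img2 = img1.copy()
img2[:8, :8] = 0  # darken one corner

print(pixel_difference(img1, img1))   # 0.0
print(pixel_difference(img1, img2))   # > 0, images differ
```

Note that the histogram distance is insensitive to alignment (it ignores pixel positions entirely), while the pixel-wise difference assumes the images are already well-aligned.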

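The variance-of-Laplacian sharpness score described above can be sketched in pure NumPy with a 4-neighbour kernel. This is an illustration, not the tool's actual implementation; OpenCV's `cv2.Laplacian` is a common alternative in practice.

```python
import numpy as np

def laplacian_sharpness(img: np.ndarray) -> float:
    """Variance of the discrete Laplacian: sharper edges give a higher value."""
    f = img.astype(np.float64)
    # 4-neighbour Laplacian; np.roll wraps at the borders, which is
    # acceptable for a rough sharpness score.
    lap = (-4.0 * f
           + np.roll(f, 1, axis=0) + np.roll(f, -1, axis=0)
           + np.roll(f, 1, axis=1) + np.roll(f, -1, axis=1))
    return float(lap.var())

flat = np.full((32, 32), 128, dtype=np.uint8)           # no edges at all
sharp = (np.indices((32, 32)).sum(axis=0) % 2) * 255    # checkerboard: all edges

print(laplacian_sharpness(flat))   # 0.0
print(laplacian_sharpness(sharp))  # much larger
```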
Stroke size: The thickness, in pixels, of the detection contour.
Buffer width: Percentage of horizontal enlargement of the detection produced by the AI. The percentage is calculated as a function of the detection width.
Buffer height: Percentage of vertical enlargement of the detection produced by the AI. The percentage is calculated based on the detection height. Buffers are used to avoid marking the detection exactly at the limits of the object and to give it a margin.
Merge type: Type of merging applied to overlapping objects of the same class; more details in the merge section.
Merge intersection: Percentage of intersection at which detections will be merged; more details in the merge section.
Laplacian threshold: Quality threshold for an image to be processed. This value allows you to filter out out-of-focus images. The default value is 100; it is advisable to perform several tests to determine which value best suits the type of images being evaluated. If this parameter is 0, the threshold is not applied.
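The buffer settings above might be applied to a detection box along these lines. This is a sketch for illustration: `apply_buffer` and the `(x1, y1, x2, y2)` box format are assumptions, not the tool's actual API.

```python
def apply_buffer(box, buffer_w_pct, buffer_h_pct, img_w, img_h):
    """Enlarge a detection box by a percentage of its own width/height,
    split evenly between both sides and clamped to the image bounds.
    Box format (assumed): (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    dx = (x2 - x1) * buffer_w_pct / 100.0 / 2.0
    dy = (y2 - y1) * buffer_h_pct / 100.0 / 2.0
    return (max(0.0, x1 - dx), max(0.0, y1 - dy),
            min(float(img_w), x2 + dx), min(float(img_h), y2 + dy))

# A 10x10 detection with a 20% buffer grows by 1 px on each side.
print(apply_buffer((10, 10, 20, 20), 20, 20, 100, 100))  # (9.0, 9.0, 21.0, 21.0)
```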

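The merge-by-intersection behaviour might work along these lines. This is only a sketch: the tool's actual merge rules are documented in its merge section, and using the smaller box's area as the denominator for the intersection percentage is an assumption made here for illustration.

```python
def intersection_pct(a, b):
    """Intersection area as a percentage of the smaller box's area (assumed metric)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    smaller = min(area(a), area(b))
    return 100.0 * inter / smaller if smaller > 0 else 0.0

def merge_if_overlapping(a, b, threshold_pct):
    """Merge two same-class boxes into their bounding box when the
    intersection percentage reaches the threshold; otherwise return None."""
    if intersection_pct(a, b) >= threshold_pct:
        return (min(a[0], b[0]), min(a[1], b[1]),
                max(a[2], b[2]), max(a[3], b[3]))
    return None

# These two boxes overlap by 25% of the smaller area.
print(merge_if_overlapping((0, 0, 10, 10), (5, 5, 15, 15), 20))  # (0, 0, 15, 15)
print(merge_if_overlapping((0, 0, 10, 10), (5, 5, 15, 15), 30))  # None
```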

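Processing by parts with overlapping windows, as configured by the Window Width, Window Height, and Overlap settings, can be sketched as a simple tiling generator. This is an illustration of the idea, not the tool's implementation; edge tiles are simply truncated here rather than shifted back to full size.

```python
def iter_windows(img_w, img_h, win_w, win_h, overlap):
    """Yield (x, y, w, h) tiles covering the image, with adjacent tiles
    sharing `overlap` pixels along each axis."""
    step_x = max(1, win_w - overlap)
    step_y = max(1, win_h - overlap)
    for y in range(0, img_h, step_y):
        for x in range(0, img_w, step_x):
            yield (x, y, min(win_w, img_w - x), min(win_h, img_h - y))

# 100x100 image, 50x50 windows, 10 px overlap -> starts at 0, 40, 80 per axis.
tiles = list(iter_windows(100, 100, 50, 50, 10))
print(len(tiles))  # 9
```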
