
CN118397308A - Page similarity detection method and device, electronic equipment and storage medium - Google Patents

Page similarity detection method and device, electronic equipment and storage medium

Info

Publication number
CN118397308A
CN118397308A (application number CN202410125036.6A)
Authority
CN
China
Prior art keywords: image, tested, detected, difference, application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410125036.6A
Other languages
Chinese (zh)
Inventor
舒伟
郭曼丽
叶成博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Jitian Network Technology Co ltd
Original Assignee
Guangzhou Jitian Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Jitian Network Technology Co ltd filed Critical Guangzhou Jitian Network Technology Co ltd
Priority to CN202410125036.6A
Publication of CN118397308A
Legal status: Pending


Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention provides a page similarity detection method and apparatus, an electronic device, and a storage medium. The method comprises: running an application program to be tested and displaying an application interface to be tested; obtaining an image to be tested containing at least part of the application interface to be tested; acquiring a reference image; extracting pixels from the image to be tested and the reference image respectively, correspondingly obtaining a pixel group to be tested and a reference pixel group; and calculating the difference between the pixel group to be tested and the reference pixel group to obtain a similarity detection result. By extracting the pixel values of the image to be tested and the reference image and computing on those values instead of judging small differences manually, detection precision is improved, missed detections and misjudgments are reduced, and the judgment result is more accurate and reliable.

Description

Page similarity detection method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of user interface detection technologies, and in particular, to a method and apparatus for detecting page similarity, an electronic device, and a storage medium.
Background
With the continuous growth in Application (APP) functionality, the code size and complexity of the whole APP keep increasing, and any minor modification to the original code may ripple through the system and affect the layout of pages in unexpected ways, causing User Interface (UI) errors such as layout errors, display errors, or logic errors. Thus, each release requires UI designers and testers to perform UI regression testing.
Most existing UI regression relies on manual inspection. Because the human eye struggles to capture small differences such as position offsets and color deviations, the inspection accuracy is often not high enough, and missed detections or misjudgments easily occur.
Disclosure of Invention
Based on the foregoing, it is necessary to provide a method, an apparatus, an electronic device, and a storage medium for detecting page similarity.
A page similarity detection method comprises the following steps:
running an application program to be tested and displaying an application interface to be tested;
obtaining a to-be-tested image containing at least part of the to-be-tested application interface based on the to-be-tested application interface;
Acquiring a reference image;
respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
In one embodiment, the step of running the application program under test and displaying the application interface under test includes:
acquiring a preset jump sequence;
And running the application program to be tested, and sequentially triggering corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested and display the application interface to be tested.
In one embodiment, the step of obtaining the image to be tested including at least part of the application interface to be tested based on the application interface to be tested includes:
capturing a screenshot of the application interface to be tested to obtain a screenshot to be tested;
Acquiring the length and the width of a cutting area;
acquiring an anchoring coordinate of a cutting area;
And cutting the image of the screenshot to be detected based on the anchoring coordinates and the length and the width of the cutting area to obtain the image to be detected containing at least part of the application interface to be detected.
In one embodiment, the calculating the difference between the pixel group to be detected and the reference pixel group to obtain the similarity detection result includes:
detecting whether the sum of the differences of the reference pixel array and the pixel array to be detected is greater than or equal to a threshold value;
When the sum of the differences is greater than or equal to the threshold value, determining the outline of a difference point of the image to be detected and the reference image;
drawing a difference graph according to the difference point outline;
and generating the similarity detection result according to the difference graph.
In one embodiment, the step of determining the difference point profile of the image to be measured and the reference image when the sum of the differences is greater than or equal to the threshold value includes:
Converting the reference image into a first gray level image, and converting the image to be detected into a second gray level image;
respectively carrying out binarization processing on the first gray level image and the second gray level image to generate a standard binarization image and a reference binarization image;
and determining the difference point outline of the standard binarized image and the reference binarized image.
In one embodiment, the step of drawing the difference map from the difference point profile includes:
drawing the difference graph on the image to be detected;
And/or
And drawing the difference graph on the reference image.
In one embodiment, the step of running the application program under test and displaying the application interface under test further includes:
Acquiring a target URL;
And sending a downloading request to a server according to the target URL, downloading the application program to be tested from the server and installing the application program.
A page similarity detection apparatus, comprising:
the display module is used for running the application program to be tested and displaying the application interface to be tested;
The first acquisition module is used for acquiring a to-be-detected image containing at least part of the to-be-detected application interface based on the to-be-detected application interface;
the second acquisition module is used for acquiring a reference image;
the pixel extraction module is used for respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and the calculation module is used for calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the page similarity detection method described in any of the above embodiments when executing the computer program.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the page similarity detection method described in any of the above embodiments.
According to the page similarity detection method, the device, the electronic equipment and the storage medium, the pixel values of the image to be detected and the reference image are extracted, and the pixel values are calculated to replace manual judgment of small differences, so that the detection precision is improved, the conditions of missed detection and wrong judgment are reduced, and the judgment result is more accurate and reliable.
Drawings
FIG. 1 is a flow chart of a method for detecting page similarity in one embodiment;
FIG. 2 is another flow chart of a method for detecting page similarity in one embodiment;
FIG. 3 is a diagram of a comparison of screen shots before and after cutting out status bars in one embodiment;
FIGS. 4-1 to 4-3 are graphs showing differences between the detection results of the plurality of pages according to one embodiment;
FIG. 5 is a flow chart illustrating a method for detecting page similarity according to an embodiment;
FIG. 6 is a schematic block diagram of a page similarity detecting device in one embodiment;
fig. 7 is an internal structural diagram of an electronic device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Example 1
In this embodiment, as shown in fig. 1, a method for detecting page similarity is provided, including:
Step 110, running an application program to be tested, and displaying an application interface to be tested:
In this embodiment, the application program to be tested is the latest version of the application, and the hardware device running it may be any display terminal such as a mobile phone, computer, tablet, or smart watch. When the application program to be tested is a mobile phone application, it can be run directly on a mobile phone, or a simulator or virtual machine software can be run on a computer to simulate the environment of the corresponding mobile device so that the application program to be tested runs on the computer.
Similarly, when the application program to be tested is a tablet application or a smart watch application, virtual machine software can be run on the computer to simulate the smart watch or tablet environment, so that the application program to be tested can be run directly on the computer and the application interface to be tested can be displayed.
The running process of the application program to be tested is driven by preset control instructions that simulate a user clicking the application program to be tested, so that it displays different interfaces. Which application interface to be tested is displayed depends on the actual detection requirement: it may be one specific interface or several interfaces, or the application program may be controlled to display each interface one by one, which is not limited here.
The step of simulating the user's clicks on the application to be tested, triggered by the preset control instructions, may be implemented with a script automation tool or by calling an API (Application Programming Interface). For example, when the hardware device running the application to be tested is a mobile phone, a UI Automator or Appium automated test tool may be chosen.
For example, the application program to be tested runs on a mobile phone with an Android system, and the phone is connected to the test PC through USB. An ADB (Android Debug Bridge) driver is installed on the test PC; after the phone enables USB debugging, it communicates with the test PC through ADB commands.
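A minimal sketch of this setup, assuming the adb executable from the Android SDK platform tools is on the test PC's PATH; the package name com.example.app is a placeholder, not taken from the original text.

import subprocess

def adb_devices():
    # "adb devices" lists the handsets that are connected and have USB debugging enabled
    return subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True).stdout

def adb_shell(*args):
    # forward a shell command to the connected phone through the ADB bridge
    return subprocess.run(["adb", "shell", *args], capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(adb_devices())                                            # confirm the phone is visible to the test PC
    print(adb_shell("pm", "list", "packages", "com.example.app"))   # confirm the app under test is installed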
Step 120, obtaining a to-be-tested image including at least part of the to-be-tested application interface based on the to-be-tested application interface:
based on the application interface to be tested, the specific implementation method for obtaining the image to be tested comprising at least part of the application interface to be tested comprises, but is not limited to, the following steps:
(a) Using a screenshot tool: the built-in screen capture tool of the hardware device running the application program to be tested, or a third-party screenshot tool, can capture the interface of the application program to be tested, and the screenshot is then used as the image to be tested. For example, the screenshot tool of a mobile phone or smart watch is used; when a simulator or virtual machine is used, the computer's screenshot tool can capture the interface inside the simulator or virtual machine and use it as the image to be tested.
(b) Using a UI automated test tool: many UI automated test tools can directly capture interface elements of an application and generate corresponding image files. For example, Appium can use the screenshot function on an Android or iOS device to obtain the application's interface, on the basis of which the image to be tested is generated.
In this embodiment, the image to be measured may be a complete application interface to be measured, or may be a part of the application interface to be measured, for example, half of the application interface to be measured is used as the image to be measured, or a specific element image on the application interface to be measured is extracted as the image to be measured.
Step 130, obtaining a reference image:
the reference image is from the old version of the application program to be tested (reference application program), and the picture content and the size of the reference image are consistent with those of the image to be tested.
In one embodiment, the reference image is pre-stored on the test PC, and is directly called from the memory of the test PC when in use.
In one embodiment, the application program to be tested and the reference application program are both installed on the hardware device and run in turn: the application interface to be tested is displayed and an image to be tested containing at least part of that interface is obtained; then the reference application interface is displayed and a reference image containing at least part of the reference application interface is obtained.
For example, an application program to be tested and a reference application program are installed on the same mobile phone, the mobile phone is controlled to operate the application program to be tested through an automatic testing tool, an application interface to be tested is displayed, and an image to be tested is generated based on the application interface to be tested. And then, exiting the application program to be tested through the automatic testing tool, running the reference application program and obtaining the reference image.
For example, two virtual mobile phones are created on the test PC through virtual machine software, and the application program to be tested and the reference application program are run on them respectively, so that both applications run at the same time, which further saves test time and improves test efficiency.
In one embodiment, the image to be tested includes a plurality of sub-images to be tested, each of which is numbered, for example according to the order of generation; the reference image includes a plurality of reference sub-images numbered according to the same rule. A sub-image to be tested and a reference sub-image with the same number form one comparison group, and compared with images from other comparison groups, a sub-image to be tested has the highest similarity with the reference sub-image of its own comparison group, as in the pairing sketch below.
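A minimal pairing sketch, not from the original text: the directory layout and the naming rule (01.png, 02.png, ...) are assumptions for illustration.

from pathlib import Path

test_dir = Path("images_under_test")   # numbered sub-images to be tested, e.g. 01.png, 02.png, ...
ref_dir = Path("reference_images")     # reference sub-images numbered by the same rule

comparison_groups = [
    (path, ref_dir / path.name)
    for path in sorted(test_dir.glob("*.png"))
    if (ref_dir / path.name).exists()
]
for test_path, ref_path in comparison_groups:
    print("compare", test_path, "against", ref_path)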
Step 140, extracting pixels of the image to be detected and the reference image respectively, and correspondingly obtaining a pixel group to be detected and a reference pixel group:
The method for extracting the pixel groups of the image to be detected and the reference image comprises the following steps:
(a) Using an image processing library: image processing libraries provided with programming languages, such as Pillow and OpenCV for Python, or JavaCV for Java. These libraries provide functions and methods that read image files and convert them into arrays or matrices of pixels.
(b) Using image processing software: professional image processing software such as Adobe Photoshop or GIMP opens the image to be tested and the reference image, and pixels are extracted through the tools and functions the software provides.
(c) Using an image editor: besides professional image processing software, some image editors also provide pixel extraction. For example, in Microsoft Paint the "select" tool is used to select and copy a portion of an image, which is then pasted into another image editor to obtain the pixel data.
By extracting the pixel group to be detected and the reference pixel group, detailed information of image data can be obtained, further data analysis and visualization are carried out, differences which are difficult to observe by naked eyes are changed into differences of corresponding pixel values, and reliability of difference detection is ensured.
Step 150, calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result:
In this embodiment, the pixel group to be tested and the reference pixel group are automatically compared pixel by pixel through an image processing library or image processing software to obtain the difference values between them, and these difference values are stored in a new pixel group, generating a difference pixel group. The difference pixel group is then further processed, analyzed or displayed as required, and the detection result is finally obtained.
When the detection result shows that a difference exists, the user can focus on that group of pictures to check whether the differing part is acceptable. If the pictures show no difference, the automated UI detection result is normal.
In summary, in this embodiment, interaction between a user and an application to be tested is simulated by an automated testing tool, so that the degree of automation of testing is improved, manpower is saved, and the time utilization rate is improved; meanwhile, the pixel groups of the image to be detected and the reference image are extracted for automatic comparison, so that the conditions of manual omission and misjudgment are avoided, and the accuracy and reliability of the detection result are improved.
In one embodiment, the step of running the application program under test and displaying the application interface under test further includes:
Acquiring a target URL;
And sending a downloading request to a server according to the target URL, downloading the application program to be tested from the server and installing the application program.
A URL (Uniform Resource Locator) identifies and locates resources on the Internet. By parsing a URL, a browser can send a request to a server and obtain the corresponding resource for display or processing. The target URL in this embodiment is the URL at which the application program to be tested is located, and it determines from which server the application program to be tested is downloaded.
A download request is sent to the server according to the target URL, and after receiving the request, the server sends the installation package of the application program to be tested to the test PC. Then, according to the test requirement, the application program to be tested is installed on the local test PC or on a mobile hardware terminal connected to the test PC, such as a mobile phone or a smart watch.
After the application program to be tested is installed, the application interface to be tested can be displayed by running the application program to be tested.
In one embodiment, the step of running the application program under test and displaying the application interface under test includes:
acquiring a preset jump sequence;
And running the application program to be tested, and sequentially triggering corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested and display the application interface to be tested.
In this embodiment, the preset jump sequence is the order in which the application interfaces of the application program to be tested are visited, for example jumping from the homepage to the user information page, or jumping from the homepage to the interactive page and then from the interactive page to another user's main information page. Through the preset jump sequence, the application program to be tested can jump from the homepage to each interface to be tested, i.e., each application interface to be tested.
In this embodiment, based on a preset jump sequence, an automatic test tool or a mode of running a preset script or a preset code is used to control an application program to be tested to trigger elements in the application program to be tested according to the preset jump sequence, so that interface jump is realized, and the effect of testing more rapidly and improving test efficiency is achieved.
In one embodiment, the preset jump sequence records the test order of each test interface of the application program to be tested and the click coordinates on each test interface. In this embodiment, based on the preset jump sequence, a click command is triggered at the click coordinates on each application interface of the application program to be tested, and the application interface jumps in response to the click command.
In this embodiment, executing the preset jump sequence simulates the user clicking elements of the application program to be tested to perform page jumps, where each click coordinate is the coordinate of the element to be clicked on the test interface. For example, the preset jump sequence records the test order homepage, room page, friend-making page, together with the coordinate of the "room" element on the homepage and the coordinate of the "friend making" element on the room page. During testing, a first image to be tested is generated on the homepage; a click command at the "room" element on the homepage jumps to the room page, where a second image to be tested is generated; a click command at the "friend making" element on the room page jumps to the friend-making page, where a third image to be tested is generated (a sketch follows). Compared with jumping directly, performing page jumps via click commands simulated from the preset jump sequence reproduces more faithfully the process of a user clicking through the application program to be tested, and is better at exposing its errors.
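A minimal sketch of walking such a jump sequence, not the patent's exact implementation: coordinates, page names and file names are illustrative assumptions, and adb is assumed to be on PATH.

import subprocess
import time

jump_sequence = [
    {"page": "homepage", "tap": None},          # the first page needs no click
    {"page": "room",     "tap": (540, 1200)},   # coordinate of the "room" element on the homepage
    {"page": "friends",  "tap": (300, 1800)},   # coordinate of the "friend making" element on the room page
]

for index, step in enumerate(jump_sequence, start=1):
    if step["tap"] is not None:
        x, y = step["tap"]
        subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)
        time.sleep(2)  # wait for the page jump to complete
    with open(f"{index:02d}_{step['page']}.png", "wb") as image_file:
        subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=image_file, check=True)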
In one embodiment, the step of obtaining the image to be tested including at least part of the application interface to be tested based on the application interface to be tested includes:
capturing a screenshot of the application interface to be tested to obtain a screenshot to be tested;
Acquiring the length and the width of a cutting area;
acquiring an anchoring coordinate of a cutting area;
And cutting the image of the screenshot to be detected based on the anchoring coordinates and the length and the width of the cutting area to obtain the image to be detected containing at least part of the application interface to be detected.
Through preset anchor coordinates and cutting area coordinates, accurate screenshot of an application interface to be tested can be realized, interference of irrelevant parts is eliminated, only the parts to be tested are reserved, testing is completed in a shorter time, and testing efficiency is improved.
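A minimal sketch of this cutting step, assuming the Pillow library; the anchoring coordinate and the crop size are placeholders, not values from the original text.

from PIL import Image

anchor_x, anchor_y = 0, 80              # anchoring coordinate of the cutting area (below the status bar)
crop_width, crop_height = 1080, 2200    # length and width of the cutting area

screenshot = Image.open("screenshot_under_test.png")
image_under_test = screenshot.crop((anchor_x, anchor_y, anchor_x + crop_width, anchor_y + crop_height))
image_under_test.save("image_under_test.png")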
In one embodiment, the calculating the difference between the pixel group to be detected and the reference pixel group to obtain the similarity detection result includes:
detecting whether the sum of the differences of the reference pixel array and the pixel array to be detected is greater than or equal to a threshold value;
When the sum of the differences is greater than or equal to the threshold value, determining the outline of a difference point of the image to be detected and the reference image;
drawing a difference graph according to the difference point outline;
and generating the similarity detection result according to the difference graph.
In this embodiment, the difference between the pixel group to be measured and the reference pixel group is calculated, so that the difference between the image to be measured and the reference image can be known, and whether the image to be measured accords with the expectation or not can be judged, which is helpful for the user to quickly find potential problems and errors.
By setting the threshold value of the difference sum, when the difference sum exceeds the threshold value, the obvious difference between the image to be tested and the reference image can be determined, further analysis and processing are needed, and the testing efficiency and accuracy are improved, and the quality of an application program is improved and optimized. The difference between the image to be measured and the reference image can be better understood by determining the outline of the difference point; by drawing the difference graph, the difference points can be visualized, the difference between the image to be measured and the reference image can be displayed more intuitively, and the user can be helped to further analyze, locate and process the differences.
By combining the difference map with other test results, complete test results can be generated, providing detailed test reports and recommendations to the tester to help improve and optimize the quality of the application.
In one embodiment, the step of determining the difference point profile of the image to be measured and the reference image when the sum of the differences is greater than or equal to the threshold value includes:
Converting the reference image into a first gray level image, and converting the image to be detected into a second gray level image;
respectively carrying out binarization processing on the first gray level image and the second gray level image to generate a standard binarization image and a reference binarization image;
and determining the difference point outline of the standard binarized image and the reference binarized image.
A grayscale image contains only brightness information and ignores color. In this embodiment, converting the reference image and the image to be tested into grayscale images simplifies image processing, allows the differences in the images to be analyzed in a more focused way, and makes subsequent operations more efficient.
The pixels in the image can be divided into two categories, namely black and white, by binarizing the gray image, so that the difference points in the image are highlighted, and the contours and edges in the image are more clearly represented, so that the images are easier to observe and analyze.
By comparing the standard binarized image and the reference binarized image, the difference point therebetween can be determined. The outline of the difference points represents the boundaries of the different parts of the image and can be used to further analyze and locate differences in the image. By determining the outline of the difference point, a user can more accurately know the difference between the image to be detected and the reference image, and corresponding processing and improvement are performed.
In one embodiment, the step of drawing the difference map from the difference point profile includes:
drawing the difference graph on the image to be detected;
And/or
And drawing the difference graph on the reference image.
In one embodiment, a difference profile is drawn on an image to be measured to generate a difference map;
In one embodiment, a difference contour is drawn on a reference image to generate a difference map;
in other embodiments, the difference contours are drawn on both the image to be measured and the reference image, and two difference maps are generated for comparison so that the image differences can be seen more intuitively.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turns or alternately with at least a portion of the other steps or of the sub-steps or stages of other steps.
Example two
In this embodiment, as shown in fig. 2, a method for detecting page similarity is provided, including:
Step 200, acquiring a task to be tested, parsing the task to be tested, obtaining the application program to be tested, detecting whether the application program to be tested has update information, and, when it does, obtaining first version information and second version information of the application program to be tested. The first version information is the new version information, and the second version information is the old version information.
Step 210, according to the first version information, obtaining the target URL:
The target URL may be provided to the tester by a developer, or obtained from an application store or another channel.
Step 220, sending a download request to a server according to the target URL, downloading the application program to be tested from the server, and installing:
In this embodiment, the downloading and installation of the application program to be tested are also automated and require no manual operation by the user.
Specifically, if a mobile phone application needs to be tested, the target URL can be requested in Python and the installation package of the application program to be tested downloaded to the test PC, after which the package is installed on the mobile phone through an ADB command, as sketched below.
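A minimal sketch of this step, assuming the requests library is available on the test PC and adb is on PATH; the URL and file name are placeholders.

import subprocess
import requests

target_url = "https://example.com/app-under-test.apk"   # hypothetical target URL

response = requests.get(target_url, timeout=60)
response.raise_for_status()
with open("app_under_test.apk", "wb") as package:
    package.write(response.content)

# install (or reinstall) the package on the phone connected to the test PC
subprocess.run(["adb", "install", "-r", "app_under_test.apk"], check=True)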
Step 230, obtaining a second URL according to the second version information, sending a download request to a server according to the second URL, downloading the reference application program from the server, and installing the reference application program.
In this embodiment, the reference application program is the old-version application program, also referred to as the old version or the previous version.
In this embodiment, after installing the application program to be tested, the system downloads the installation package of the reference application program to the test PC and installs it on the mobile phone.
The method of the embodiment further improves the degree of automation of the test and accelerates the overall speed of the test.
Step 240, running the application program to be tested, and displaying the application interface to be tested:
In this embodiment, the behavior that a user clicks on elements in turn is simulated by presetting a jump sequence, so that an application program to be tested can reach any designated application interface to be tested, and the implementation steps include:
step 241, obtaining a preset jump sequence;
And step 242, running the application program to be tested, and sequentially triggering corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested, and displaying the application interface to be tested.
In this embodiment, the specific manner of acquiring the preset jump sequence is as follows:
click_sequence=['element1','element2','element3']
Here click_sequence is a list of the elements that need to be clicked in order; element1 through element3 represent the identifiers of the first through third elements to be clicked in sequence.
Next, the application under test is started by an ADB command, for example:
adb shell am start -n com.example.app/com.example.app.MainActivity
This command launches the MainActivity activity of the application under test whose package name is com.example.app.
After the application program is started through the ADB command, other ADB commands can be used for simulating the operation of a user, so that the page switching of the APP is realized, and the method is concretely as follows:
adb shell input keyevent KEYCODE_HOME: simulates pressing the device's Home key, returning to the home screen.
adb shell input keyevent KEYCODE_BACK: simulates pressing the device's Back key, returning to the previous page.
These key events can trigger page-switching behavior in the application program, realizing the jump to and display of the application interface to be tested; a sketch that drives these commands from Python follows.
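A minimal sketch combining the launch command and key-event simulation above; the package and activity names are the example values from this embodiment, and adb is assumed to be on PATH.

import subprocess
import time

def adb_shell(*args):
    subprocess.run(["adb", "shell", *args], check=True)

# start the MainActivity of the application under test
adb_shell("am", "start", "-n", "com.example.app/com.example.app.MainActivity")
time.sleep(3)  # give the first page time to render

# simulate key presses that switch pages inside the APP
adb_shell("input", "keyevent", "KEYCODE_BACK")   # return to the previous page
adb_shell("input", "keyevent", "KEYCODE_HOME")   # return to the home screen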
In one embodiment, when the mobile phone runs iOS, the Appium + WebDriver protocol is used to run the application under test automatically. Appium is a mobile-application automated test tool that supports communication with real iOS devices via the WebDriver protocol. Through a preset test script, the Appium API controls the application program to be tested on the iOS device, for example starting the application, clicking a button, or entering text, so that the mobile phone displays the application interface to be tested.
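A minimal sketch under assumptions not stated in the original: the Appium Python client is installed and an Appium server is listening on localhost:4723; the bundle ID, device UDID and element locator are placeholders, and the exact constructor signature varies across Appium client versions (newer clients take an options object instead of a capabilities dict).

from appium import webdriver

desired_caps = {
    "platformName": "iOS",
    "automationName": "XCUITest",
    "udid": "<device-udid>",        # placeholder for the real device identifier
    "bundleId": "com.example.app",  # placeholder bundle ID of the application under test
}
driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", desired_caps)
driver.find_element("accessibility id", "room").click()       # simulate a button click
driver.get_screenshot_as_file("ios_interface_under_test.png")  # capture the interface under test
driver.quit()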
Step 250, obtaining a to-be-tested image including at least part of the to-be-tested application interface based on the to-be-tested application interface:
in this embodiment, the implementation step of step 250 includes:
step 251, capturing a screenshot of the application interface to be tested to obtain a screenshot to be tested;
Step 252, obtaining the length and width of the cutting area;
step 253, obtaining the anchoring coordinates of the cutting area;
And step 254, performing image cutting on the screenshot to be detected based on the anchoring coordinates and the length and width of the cutting area to obtain the image to be detected including at least part of the application interface to be detected.
In this embodiment, the whole screen displaying the application interface to be tested is first captured through the screenshot function of the uiautomator2 library for Python. The resulting screenshot contains some content irrelevant to the UI design, such as the status bar at the top of the mobile phone; removing these interfering pixels by further cropping the screenshot can further improve the speed and accuracy of the pixel comparison.
Fig. 3 shows the original mobile phone screenshot and the image to be tested after the status bar has been cropped out.
In this embodiment, the method for capturing a specified rectangular area of a screen is as follows:
a: the upper left and lower right corner coordinates of the rectangular region of the screenshot are determined, for example:
left_top = (100, 200)   # upper-left corner coordinates
right_bottom = (500, 600)   # lower-right corner coordinates
B: calculate the width and height of the rectangular area from its upper-left and lower-right corner coordinates;
C: take the upper-left corner coordinate left_top as the anchoring coordinate of the cutting area (the starting point of the cutting area);
D: capture the image according to the width and height of the rectangular area to obtain the image to be tested (a sketch of steps A to D follows).
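A minimal sketch of steps A to D, assuming the uiautomator2 and Pillow libraries are installed; the corner coordinates are the illustrative values above and the file names are placeholders.

import uiautomator2 as u2
from PIL import Image

d = u2.connect()                              # phone attached to the test PC over USB
d.screenshot("screenshot_under_test.png")     # whole-screen screenshot of the interface under test

left_top = (100, 200)        # upper-left corner coordinates (anchoring coordinate of the cutting area)
right_bottom = (500, 600)    # lower-right corner coordinates
width = right_bottom[0] - left_top[0]
height = right_bottom[1] - left_top[1]

screenshot = Image.open("screenshot_under_test.png")
image_under_test = screenshot.crop((left_top[0], left_top[1], left_top[0] + width, left_top[1] + height))
image_under_test.save("image_under_test.png")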
In one embodiment, after the step of obtaining the whole-screen screenshot of the application interface to be tested, a screenshot of a specific element is taken, as follows:
simulate clicking the element with d.click, triggering the application to display the element;
then acquire screenshot data with d.screenshot, and determine the position and size of the element on the screen through the element's bounds attribute;
finally, crop the element image out of the screenshot according to the element's position and size; this element image is the image to be tested (see the sketch after this list).
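A minimal sketch of this element-level capture, assuming the uiautomator2 library; the resource ID is a hypothetical selector, and the sketch relies on d.screenshot() returning a PIL image when called without a file path.

import uiautomator2 as u2

d = u2.connect()
element = d(resourceId="com.example.app:id/room_entry")   # hypothetical element selector
element.click()                                           # trigger the application to display the element

bounds = element.info["bounds"]                           # position and size of the element on screen
full = d.screenshot()                                     # whole-screen screenshot as a PIL image
image_under_test = full.crop((bounds["left"], bounds["top"], bounds["right"], bounds["bottom"]))
image_under_test.save("element_under_test.png")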
After the image to be measured is obtained, the image is stored in a test PC end, so that the image can be quickly called when the image is processed later.
According to the screenshot method, after the screenshot area and the anchoring coordinates are preset, the screenshot of the application interface to be tested is automatically performed through the program, so that the screenshot to be tested and the image to be tested are obtained, manual screenshot operation is replaced, and the image acquisition process is more convenient and quicker.
Step 260, obtaining a reference image:
In this embodiment, the method for obtaining the reference image is similar to step 130 in Example 1 and is not repeated here.
Step 270, extracting pixels of the image to be detected and the reference image respectively, and correspondingly obtaining a pixel group to be detected and a reference pixel group:
In this embodiment, the numpy.array function of the NumPy library is used to convert the two images being compared into NumPy arrays, so that the pixel values of the images can be accessed and manipulated directly, as follows:
from PIL import Image
import numpy as np
# load the image to be tested
image1 = Image.open('image1.jpg')
# convert to a NumPy array
array1 = np.array(image1)
Similarly, replacing 'image1.jpg' with the storage path of the reference image yields the reference pixel group array2 = np.array(image2).
And step 280, calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
In this example, numpy.abs is used to calculate the difference between the two images being compared, as follows:
diff=np.abs(array1-array2)
Each element of the new array diff is the absolute value of the difference between the two images at the corresponding position; the larger these values, the greater the difference between the two images and the lower their similarity.
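One practical caveat, added here as an assumption rather than taken from the original text: images loaded this way are usually 8-bit unsigned arrays, and unsigned subtraction wraps around instead of going negative, so casting to a signed type first keeps the differences exact.

diff = np.abs(array1.astype(np.int16) - array2.astype(np.int16))  # avoids uint8 wrap-around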
In this embodiment, the implementation step of step 280 includes:
step 281, detecting whether the sum of the differences between the reference pixel array and the pixel array to be detected is greater than or equal to a threshold;
Step 282, determining a difference point contour of the image to be detected and the reference image when the sum of the differences is greater than or equal to the threshold value;
Step 283, drawing a difference map according to the difference point outline;
and 284, generating the similarity detection result according to the difference graph.
In this embodiment, numpy.abs is used to calculate the difference between the two images being compared; after the array diff = np.abs(array1 - array2) is obtained, the absolute difference values of its elements are accumulated to obtain the total difference value total_diff:
total_diff=np.sum(diff)。
When the value of total_diff is greater than or equal to the preset threshold, the difference between the image to be tested and the reference image is considered significant; the difference points are further highlighted by drawing a difference map, a detection conclusion that the pictures differ is generated, and the difference map and the conclusion are written into the detection result.
When the value of total_diff is smaller than the preset threshold, a detection conclusion that the pictures show no difference is generated and written into the detection result. A sketch of this decision follows.
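A minimal sketch of the threshold decision, continuing from the arrays above; the threshold value is an illustrative assumption and would in practice be tuned to the image size and the tolerated deviation.

import numpy as np

THRESHOLD = 50000            # illustrative threshold, not a value from the original text

total_diff = np.sum(diff)    # diff computed from array1 and array2 as above
if total_diff >= THRESHOLD:
    conclusion = "pictures differ: draw the difference map and attach it to the detection result"
else:
    conclusion = "pictures show no difference: UI regression passed"
print(int(total_diff), conclusion)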
Figs. 4-1 to 4-3 show the detection results that are finally output. A detection result includes the image to be tested, the reference image, the difference map and the detection conclusion, so that the user can see the difference comparison more clearly and intuitively. Common difference cases include UI element position offsets, missing UI elements, or wrong UI elements.
In this embodiment, the implementation steps of step 282 include:
Step 2821, converting the reference image into a first gray level image, and converting the image to be measured into a second gray level image;
Step 2822, performing binarization processing on the first gray level image and the second gray level image respectively to generate a standard binarized image and a reference binarized image;
step 2823, determining a difference point contour of the standard binarized image and the reference binarized image.
Since the gray image includes only one channel of pixel values, in this embodiment, the reference image is converted into the first gray image, and the image to be measured is converted into the second gray image, so that the amount of calculation can be reduced, and the difference in the images can be more easily distinguished.
And respectively carrying out binarization processing on the first gray level image and the second gray level image to generate a standard binarization image and a reference binarization image, and converting the gray level image into a black-and-white image, wherein only two pixel values, one of which represents a target object and the other of which represents a background, are adopted, so that the difference profile is highlighted, and the subsequent difference analysis is convenient.
The specific conversion process can be completed by the following instructions:
(1) Convert the picture to a grayscale file with the opencv-python cv2.cvtColor instruction;
(2) obtain a binarized image with the cv2.threshold instruction;
(3) find the difference point contours with the cv2.findContours instruction;
(4) draw the difference map with the cv2.drawContours instruction.
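A minimal sketch of steps (1) to (4), assuming opencv-python is installed and the OpenCV 4 return signature of cv2.findContours (two return values); the file names and the 127 threshold are placeholders.

import cv2

reference = cv2.imread("reference_image.png")
under_test = cv2.imread("image_under_test.png")

# (1) convert both pictures to grayscale
gray_ref = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
gray_test = cv2.cvtColor(under_test, cv2.COLOR_BGR2GRAY)

# (2) binarize each grayscale image
_, binary_ref = cv2.threshold(gray_ref, 127, 255, cv2.THRESH_BINARY)
_, binary_test = cv2.threshold(gray_test, 127, 255, cv2.THRESH_BINARY)

# (3) find the contours of the difference points between the two binarized images
difference = cv2.absdiff(binary_ref, binary_test)
contours, _ = cv2.findContours(difference, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# (4) draw the difference map on a copy of the image to be tested
difference_map = under_test.copy()
cv2.drawContours(difference_map, contours, -1, (0, 0, 255), 2)
cv2.imwrite("difference_map.png", difference_map)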
In this embodiment, the difference map may be drawn on the image to be tested, on the reference image, or on both the image to be tested and the reference image separately.
Fig. 5 shows the overall flow of page similarity detection in this embodiment. The methods of the above embodiments are combined, and the final detection result is output as a detection report and sent to the user for real-time viewing. The whole test process, from installing the application program to be tested, through acquiring the image to be tested, to the subsequent difference judgment and image processing, is automated by the program; it runs automatically once a new version is packaged, fits an agile development workflow, occupies no dedicated staff time, achieves higher detection precision, and yields more accurate detection results.
Example III
In this embodiment, as shown in fig. 6, a page similarity detecting device is provided, including:
the display module 610 is configured to run an application program to be tested and display an application interface to be tested;
A first obtaining module 620, configured to obtain, based on the application interface to be tested, an image to be tested including at least part of the application interface to be tested;
a second acquisition module 630, configured to acquire a reference image;
The pixel extraction module 640 is configured to extract pixels of the image to be detected and the reference image respectively, and correspondingly obtain a pixel group to be detected and a reference pixel group;
the calculating module 650 is configured to calculate a difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
In one embodiment, the page similarity detecting device further includes:
The third acquisition module is used for acquiring the target URL;
and the request module is used for sending a downloading request to a server according to the target URL, downloading the application program to be tested from the server and installing the application program.
In one embodiment, a display module includes:
A first acquisition unit that acquires a preset jump sequence;
And the triggering unit is used for running the application program to be tested, and sequentially triggering the corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested and display the application interface to be tested.
In one embodiment, the first acquisition module includes:
the screenshot unit is used for screenshot the application interface to be tested to obtain screenshot to be tested;
a second acquisition unit configured to acquire a length and a width of the trimming area;
A third acquisition unit for acquiring the anchor coordinates of the cutting area;
and the cutting unit is used for cutting the image of the screenshot to be detected based on the anchoring coordinates and the length and the width of the cutting area to obtain the image to be detected containing at least part of the application interface to be detected.
In one embodiment, the computing module includes:
The detection unit is used for detecting whether the sum of the differences of the reference pixel array and the pixel array to be detected is larger than or equal to a threshold value;
A contour determining unit configured to determine a difference point contour of the image to be measured and the reference image when a sum of the differences is greater than or equal to the threshold;
the drawing unit is used for drawing a difference graph according to the difference point outline;
and the generation unit is used for generating the similarity detection result according to the difference graph.
In one embodiment, the contour determination unit comprises:
the first conversion subunit is used for converting the reference image into a first gray level image and converting the image to be detected into a second gray level image;
The second conversion subunit is used for respectively carrying out binarization processing on the first gray level image and the second gray level image to generate a standard binarization image and a reference binarization image;
And the determining subunit is used for determining the difference point outline of the standard binarized image and the reference binarized image.
In one embodiment, the drawing unit includes:
a first drawing subunit, configured to draw the difference map on the image to be measured;
And/or
And a second rendering subunit that renders the difference map on the reference image.
For specific limitations of the page similarity detection apparatus, reference may be made to the limitations of the page similarity detection method above, which are not repeated here. The units in the page similarity detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each unit may be embedded in hardware form in, or independent of, a processor in the electronic device, or may be stored in software form in a memory of the electronic device, so that the processor can invoke and execute the operations corresponding to each unit.
Example IV
In this embodiment, an electronic device is provided. The internal structure thereof can be shown in fig. 7. The electronic device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic device includes a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program, and is provided with a database for storing an application program installation package to be tested and processed images. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the electronic device is used to communicate with other electronic devices in which application software is deployed. The computer program is executed by a processor to implement a method of page similarity detection. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the electronic equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of:
running an application program to be tested and displaying an application interface to be tested;
obtaining a to-be-tested image containing at least part of the to-be-tested application interface based on the to-be-tested application interface;
Acquiring a reference image;
respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
In one embodiment, the processor when executing the computer program further performs the steps of:
Acquiring a target URL;
And sending a downloading request to a server according to the target URL, downloading the application program to be tested from the server and installing the application program.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a preset jump sequence;
And running the application program to be tested, and sequentially triggering corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested and display the application interface to be tested.
In one embodiment, the processor when executing the computer program further performs the steps of:
capturing a screenshot of the application interface to be tested to obtain a screenshot to be tested;
Acquiring the length and the width of a cutting area;
acquiring an anchoring coordinate of a cutting area;
And cutting the image of the screenshot to be detected based on the anchoring coordinates and the length and the width of the cutting area to obtain the image to be detected containing at least part of the application interface to be detected.
In one embodiment, the processor when executing the computer program further performs the steps of:
detecting whether the sum of the differences of the reference pixel array and the pixel array to be detected is greater than or equal to a threshold value;
When the sum of the differences is greater than or equal to the threshold value, determining the outline of a difference point of the image to be detected and the reference image;
drawing a difference graph according to the difference point outline;
and generating the similarity detection result according to the difference graph.
In one embodiment, the processor when executing the computer program further performs the steps of:
Converting the reference image into a first gray level image, and converting the image to be detected into a second gray level image;
respectively carrying out binarization processing on the first gray level image and the second gray level image to generate a standard binarization image and a reference binarization image;
and determining the difference point outline of the standard binarized image and the reference binarized image.
In one embodiment, the processor when executing the computer program further performs the steps of:
drawing the difference graph on the image to be detected;
And/or
And drawing the difference graph on the reference image.
Example five
In this embodiment, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
running an application program to be tested and displaying an application interface to be tested;
obtaining a to-be-tested image containing at least part of the to-be-tested application interface based on the to-be-tested application interface;
Acquiring a reference image;
respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
Acquiring a target URL;
And sending a downloading request to a server according to the target URL, downloading the application program to be tested from the server and installing the application program.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a preset jump sequence;
And running the application program to be tested, and sequentially triggering corresponding elements based on the preset jump sequence so as to jump the display page of the application program to be tested and display the application interface to be tested.
In one embodiment, the computer program when executed by the processor further performs the steps of:
capturing a screenshot of the application interface to be tested to obtain a screenshot to be tested;
Acquiring the length and the width of a cutting area;
acquiring an anchoring coordinate of a cutting area;
And cutting the image of the screenshot to be detected based on the anchoring coordinates and the length and the width of the cutting area to obtain the image to be detected containing at least part of the application interface to be detected.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting whether the sum of the differences between the reference pixel group and the pixel group to be detected is greater than or equal to a threshold value;
when the sum of the differences is greater than or equal to the threshold value, determining a difference point outline of the image to be detected and the reference image;
drawing a difference graph according to the difference point outline;
and generating the similarity detection result according to the difference graph.
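The decision flow of this embodiment can be sketched as follows with OpenCV; the fixed pixel-difference value of 30 used to binarize the difference image, the colour of the drawn outlines and the structure of the returned result are illustrative assumptions, and the grayscale/binarization variant of the contour step is sketched separately under the next embodiment.

```python
# Illustrative sketch: when the summed pixel difference reaches the threshold,
# locate the difference-point outlines and draw a difference graph; otherwise
# report the pages as similar. Inputs are same-sized BGR images (uint8 arrays).
import cv2
import numpy as np

def compare_pages(test_img: np.ndarray, ref_img: np.ndarray, threshold: float) -> dict:
    diff = cv2.absdiff(test_img, ref_img)          # per-pixel differences
    diff_sum = float(diff.sum())                   # sum of the differences
    if diff_sum < threshold:
        return {"similar": True, "difference_sum": diff_sum}
    gray_diff = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray_diff, 30, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    difference_graph = test_img.copy()
    cv2.drawContours(difference_graph, contours, -1, (0, 0, 255), 2)
    return {"similar": False, "difference_sum": diff_sum,
            "difference_graph": difference_graph}
```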
In one embodiment, the computer program when executed by the processor further performs the steps of:
converting the reference image into a first gray-level image, and converting the image to be detected into a second gray-level image;
performing binarization processing on the first gray-level image and the second gray-level image respectively, to generate a standard binarized image and a reference binarized image;
and determining the difference point outline of the standard binarized image and the reference binarized image.
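A hedged sketch of this grayscale-and-binarization variant, again with OpenCV: each image is binarized separately and the contours are taken from the pixels on which the two binarized images disagree (the fixed binarization threshold of 127 is an assumption).

```python
# Illustrative sketch: binarize the two gray-level images separately, then take
# the contours of the pixels on which the two binarized images differ.
import cv2
import numpy as np

def difference_point_outlines(ref_img: np.ndarray, test_img: np.ndarray):
    first_gray = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY)    # from the reference image
    second_gray = cv2.cvtColor(test_img, cv2.COLOR_BGR2GRAY)  # from the image to be detected
    _, ref_binary = cv2.threshold(first_gray, 127, 255, cv2.THRESH_BINARY)
    _, test_binary = cv2.threshold(second_gray, 127, 255, cv2.THRESH_BINARY)
    changed = cv2.bitwise_xor(ref_binary, test_binary)         # differing pixels only
    contours, _ = cv2.findContours(changed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```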
In one embodiment, the computer program when executed by the processor further performs the steps of:
drawing the difference graph on the image to be detected;
and/or
drawing the difference graph on the reference image.
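For the drawing step, a small sketch that boxes each difference-point outline on a copy of either image; the red colour and bounding-rectangle style are assumptions.

```python
# Illustrative sketch: draw the difference graph by boxing each difference-point
# outline on a copy of the image to be detected and/or the reference image.
import cv2

def draw_difference_graph(image, contours, color=(0, 0, 255)):
    annotated = image.copy()
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(annotated, (x, y), (x + w, y + h), color, 2)
    return annotated

# Hypothetical usage: annotate both images with the same outlines.
# marked_test = draw_difference_graph(cv2.imread("page_under_test.png"), contours)
# marked_ref = draw_difference_graph(cv2.imread("reference.png"), contours)
```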
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory may include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. The volatile memory may include Random Access Memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this description.
The above examples illustrate only a few embodiments of the application, and although they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (10)

1. A page similarity detection method, characterized by comprising the following steps:
running an application program to be tested and displaying an application interface to be tested;
obtaining an image to be detected containing at least part of the application interface to be tested, based on the application interface to be tested;
acquiring a reference image;
respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
2. The method of claim 1, wherein the step of running the application program to be tested and displaying the application interface to be tested comprises:
acquiring a preset jump sequence;
and running the application program to be tested, and sequentially triggering the corresponding elements based on the preset jump sequence, so that the display page of the application program to be tested jumps and the application interface to be tested is displayed.
3. The method of claim 1, wherein the step of obtaining the image to be detected containing at least part of the application interface to be tested based on the application interface to be tested comprises:
taking a screenshot of the application interface to be tested to obtain a screenshot to be tested;
acquiring the length and width of a cutting area;
acquiring the anchoring coordinates of the cutting area;
and cropping the screenshot to be tested based on the anchoring coordinates and the length and width of the cutting area, so as to obtain the image to be detected containing at least part of the application interface to be tested.
4. The method according to claim 1, wherein the step of calculating the difference between the pixel group to be detected and the reference pixel group to obtain the similarity detection result comprises:
detecting whether the sum of the differences between the reference pixel group and the pixel group to be detected is greater than or equal to a threshold value;
when the sum of the differences is greater than or equal to the threshold value, determining a difference point outline of the image to be detected and the reference image;
drawing a difference graph according to the difference point outline;
and generating the similarity detection result according to the difference graph.
5. The method of claim 4, wherein the step of determining the difference point outline of the image to be detected and the reference image when the sum of the differences is greater than or equal to the threshold value comprises:
converting the reference image into a first gray-level image, and converting the image to be detected into a second gray-level image;
performing binarization processing on the first gray-level image and the second gray-level image respectively, to generate a standard binarized image and a reference binarized image;
and determining the difference point outline of the standard binarized image and the reference binarized image.
6. The method of claim 4, wherein the step of drawing the difference graph according to the difference point outline comprises:
drawing the difference graph on the image to be detected;
and/or
drawing the difference graph on the reference image.
7. The method according to any one of claims 1-6, wherein, before the step of running the application program to be tested and displaying the application interface to be tested, the method further comprises:
acquiring a target URL;
and sending a download request to a server according to the target URL, downloading the application program to be tested from the server, and installing it.
8. A page similarity detection device, characterized by comprising:
the display module is used for running the application program to be tested and displaying the application interface to be tested;
the first acquisition module is used for acquiring an image to be detected containing at least part of the application interface to be tested, based on the application interface to be tested;
the second acquisition module is used for acquiring a reference image;
the pixel extraction module is used for respectively extracting pixels of the image to be detected and the reference image, and correspondingly obtaining a pixel group to be detected and a reference pixel group;
and the calculation module is used for calculating the difference between the pixel group to be detected and the reference pixel group to obtain a similarity detection result.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202410125036.6A 2024-01-29 2024-01-29 Page similarity detection method and device, electronic equipment and storage medium Pending CN118397308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410125036.6A CN118397308A (en) 2024-01-29 2024-01-29 Page similarity detection method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410125036.6A CN118397308A (en) 2024-01-29 2024-01-29 Page similarity detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118397308A 2024-07-26

Family

ID=91993077

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410125036.6A Pending CN118397308A (en) 2024-01-29 2024-01-29 Page similarity detection method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN118397308A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120492360A (en) * 2025-07-18 2025-08-15 苏州市世为科技有限公司 A mobile application GUI intelligent testing method and system


Similar Documents

Publication Publication Date Title
CN111061526B (en) Automatic test method, device, computer equipment and storage medium
CN107783898B (en) Test method and test equipment for mobile application
CN107025174B (en) Method, device and readable storage medium for user interface anomaly test of equipment
US10810113B2 (en) Method and apparatus for creating reference images for an automated test of software with a graphical user interface
CN110287101A (en) User interface automated testing method, device, computer equipment and storage medium
CN113282488B (en) Terminal test method and device, storage medium and terminal
CN111179268B (en) Abnormality detection method and device for vehicle-mounted terminal and vehicle-mounted terminal
US20170192797A1 (en) User interface layout comparison
CN110765015B (en) Method for testing tested application and electronic equipment
CN107885493B (en) Program creation support method and program creation support device
CN118397308A (en) Page similarity detection method and device, electronic equipment and storage medium
CN109815127B (en) Automatic script conversion method and device, computer equipment and storage medium
CN113495844A (en) Automatic testing method, device and system based on virtual click and storage medium
CN112905451A (en) Automatic testing method and device for application program
CN108845924B (en) Control response area display control method, electronic device, and storage medium
CN110795000B (en) Automatic control method and device based on interface segmentation and terminal
CN112633341A (en) Interface testing method and device, computer equipment and storage medium
CN118069949B (en) Dynamic layout method and system based on component tree architecture
CN114253841A (en) Test script generation method and device and storage medium
CN113836037A (en) Interface interaction test method, device, equipment and storage medium
CN114756448B (en) A user interface restoration automatic testing system and method
CN116719736A (en) Test case generation method and device for testing software interface
CN111459831B (en) Test system and test method
CN115357488A (en) Method, device, electronic device and storage medium for automated testing
CN112188192A (en) Code stream adaptability test method, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination