Detailed Description
The following is a clear and complete description of the technical method of the present invention, taken in conjunction with the accompanying drawings. It is evident that the described embodiments are some, but not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and a repetitive description thereof will therefore be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. The functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In order to achieve the above objective, referring to Figs. 1 to 3, the present invention provides a method for constructing a software test management framework and metric system, the method comprising the following steps:
step S1, acquiring code structure data of a software system, wherein the code structure data comprises code hierarchy data, call relation data and service function data; performing identification analysis on the critical paths of the software system according to the code structure data, performing test coverage rate evaluation, and mapping the coverage effects under different test scenarios to obtain coverage characteristic diagram data;
step S2, performing simulation analysis on the execution efficiency of different test paths according to the coverage characteristic diagram data to obtain execution efficiency data, establishing a test resource allocation model according to the test parameters corresponding to the different paths and the execution efficiency data, and performing real-time adjustment on a test strategy according to a preset coverage rate target and the test resource allocation model to obtain strategy adjustment data;
Step S3, obtaining defect characteristic data of different testing stages, performing cause analysis on the defect characteristic data, performing correlation extraction to obtain defect distribution data, performing defect prediction according to the defect distribution data and a testing process control model, and performing optimal testing scheme acquisition to obtain optimized testing strategy data;
Step S4, acquiring performance index data of the test execution process, constructing a test stability model according to the performance index data, evaluating the reliability of the test process according to the test stability model, and correcting fluctuation factors to obtain stability scoring data;
and S5, carrying out efficiency parameter alignment on the stability scoring data and the optimized test strategy data so as to obtain comprehensive test scheme data.
In the embodiment of the present invention, referring to Fig. 1, which is a schematic flow chart of the steps of a method for constructing a software test management framework and metric system according to the present invention, the method in this example includes the following steps:
step S1, acquiring code structure data of a software system, wherein the code structure data comprises code hierarchy data, call relation data and service function data; performing identification analysis on the critical paths of the software system according to the code structure data, performing test coverage rate evaluation, and mapping the coverage effects under different test scenarios to obtain coverage characteristic diagram data;
When the embodiment of the invention acquires the code structure data of the software system, the source code is parsed first, and data such as the module hierarchy and the relations between classes and interfaces are identified through code hierarchy analysis to obtain code hierarchy data. Meanwhile, call relation analysis is performed on the code to identify function call chains, method dependencies, call modes and the like, generating call relation data. In addition, a function identification model is constructed through service function annotations and code mapping, and the core service flows are identified to generate service function data. These data are integrated to obtain complete code structure data. Critical path identification is then performed based on the code structure data: a program control flow graph is generated by detecting loop structures and condition judgment nodes, the cyclomatic complexity of each code block is calculated, path weights are assigned according to coverage rate requirements, and critical path data are generated. Coverage characteristic diagram data are finally obtained through coverage rate evaluation of the critical path data and mapping under different test scenarios. In practical applications, for example in the transaction module test of a financial system, the high-frequency calling modules can be set as critical path areas and given higher weight to ensure full coverage of the core business processes.
Step S2, performing simulation analysis on the execution efficiency of different test paths according to the coverage characteristic diagram data to obtain execution efficiency data, establishing a test resource allocation model according to the test parameters corresponding to the different paths and the execution efficiency data, and performing real-time adjustment on a test strategy according to a preset coverage rate target and the test resource allocation model to obtain strategy adjustment data;
According to the coverage characteristic diagram data obtained in step S1, the embodiment of the invention simulates the execution time, resource consumption and concurrency performance of each test path, with concurrent scenarios simulated through multi-thread simulation, to obtain execution efficiency data. A multi-objective optimization model is constructed from the execution efficiency data and the path test parameters (such as input parameter types and parameter ranges) to determine an optimal test resource allocation scheme, and the model is evaluated and adjusted against a preset coverage rate target. Real-time strategy adjustment includes execution order optimization, resource occupation optimization and the like, generating strategy adjustment data; based on the strategy adjustment data, a multi-dimensional test monitoring system including progress monitoring and resource utilization monitoring is constructed to obtain a test process control model. In practice, path simulation requires setting specific transaction load parameters and resource configurations when testing high-load application systems (e.g., e-commerce transaction systems) to ensure efficient execution under pressure.
Step S3, obtaining defect characteristic data of different testing stages, performing cause analysis on the defect characteristic data, performing correlation extraction to obtain defect distribution data, performing defect prediction according to the defect distribution data and a testing process control model, and performing optimal testing scheme acquisition to obtain optimized testing strategy data;
The embodiment of the invention collects defect reports at different stages of testing and extracts characteristic data such as the triggering conditions, influence range and recurrence probability of the defects. The characteristic data is clustered and analyzed along the dimensions of code quality, business logic, security vulnerabilities and the like to obtain defect cause data; defect association patterns are mined and a defect propagation link diagram is constructed. Combining the defect distribution characteristic data with the test process control model, high-risk areas are identified through prediction of defect propagation trends to generate optimized test strategy data. In an application scenario such as the security test of a banking system, for defects of the network communication module, vulnerability propagation risks can be identified through association analysis, and optimization strategies for priority testing and resource allocation can be formulated.
Step S4, acquiring performance index data of the test execution process, constructing a test stability model according to the performance index data, evaluating the reliability of the test process according to the test stability model, and correcting fluctuation factors to obtain stability scoring data;
The embodiment of the invention acquires performance index data during test execution, including test execution time, resource occupancy rate and the like, performs statistical analysis on the data, and calculates stability based on parameters such as success rate and result consistency to obtain stability parameter data. A stability model is constructed to evaluate the reliability of the test process; cause analysis of fluctuation factors (such as environment fluctuation and load fluctuation) is performed on the test execution data, and the fluctuation factors are identified and corrected to generate the final stability scoring data. For example, in a real-time system for high-frequency data processing, if the environment resources are insufficiently configured, the fluctuation factor is corrected to improve execution stability, ensuring consistency and repeatability of the results.
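By way of illustration only, the following minimal Python sketch shows how such a stability score might be computed from repeated test runs; the field names, the 0.6/0.4 weighting and the fluctuation penalty are assumptions for clarity, not part of the claimed method.

    import statistics

    def stability_score(runs, fluctuation_penalty=0.0):
        """Score test-process stability from repeated executions (illustrative).

        runs: list of dicts with assumed keys 'passed' (bool) and 'duration'
        (seconds); fluctuation_penalty models a correction applied after
        environment/load fluctuation factors have been identified.
        """
        success_rate = sum(r["passed"] for r in runs) / len(runs)
        durations = [r["duration"] for r in runs]
        # Result consistency: lower relative spread of execution time -> higher score.
        spread = statistics.pstdev(durations) / statistics.mean(durations)
        consistency = max(0.0, 1.0 - spread)
        raw = 0.6 * success_rate + 0.4 * consistency   # weights are assumptions
        return max(0.0, raw - fluctuation_penalty)

    runs = [{"passed": True, "duration": 1.9},
            {"passed": True, "duration": 2.1},
            {"passed": False, "duration": 3.4}]
    print(round(stability_score(runs, fluctuation_penalty=0.05), 3))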
And S5, carrying out efficiency parameter alignment on the stability scoring data and the optimized test strategy data so as to obtain comprehensive test scheme data.
According to the embodiment of the invention, the stability scoring data obtained in step S4 and the optimized test strategy data from step S3 are subjected to efficiency alignment: indexes such as execution efficiency and resource utilization are extracted, strategy parameters are extracted from the optimized strategy, and the comprehensive test scheme data is finally formed by establishing a multi-dimensional mapping relation, so that the strategy implementation plan and the resource allocation scheme are aligned across the dimensions of time, resources, coverage and quality. In the test scenario of a complex ERP system, overall test efficiency and quality can be effectively improved through time alignment (such as peak-to-valley test scheduling) and resource alignment (priority resource allocation to core flows).
Preferably, step S1 comprises the steps of:
step S11, acquiring source code data of the software system, and performing code hierarchical structure analysis on the source code data to obtain code hierarchy data, wherein the code hierarchical structure analysis comprises directory structure analysis of code files, hierarchical relation analysis of modules, extraction of inheritance relations between classes, and recognition of interface implementation relations;
The embodiment of the invention first extracts the source code files of the software system and stores them in a database for code structure analysis. The directory structure of the code files is traversed by a directory structure analysis tool (such as an AST parser or a custom script), and the path information, file layer and other data of each file are compiled into a directory structure diagram. Then, the hierarchical relationships between modules are identified using a static code analysis tool (e.g., SonarQube) to extract the dependency hierarchy and interface location of each module. For object-oriented languages, a class parsing tool is further used to extract inheritance relations and interface implementation relations between classes and generate class diagrams. Code hierarchy data containing the directory, module hierarchy, class inheritance relations and interfaces is finally obtained. In complex ERP systems, this hierarchical analysis helps to identify business logic and interface locations, providing basic data for subsequent call chain analysis.
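As a non-limiting illustration, the directory structure traversal described above could be sketched as follows in Python; the file extensions and the flat list output are assumptions for clarity.

    import os

    def build_directory_map(root):
        """Walk a source tree and record each file's path and depth
        (a simplified stand-in for the directory structure diagram)."""
        entries = []
        for dirpath, _dirnames, filenames in os.walk(root):
            depth = dirpath[len(root):].count(os.sep)
            for name in filenames:
                if name.endswith((".py", ".java", ".cpp")):  # assumed extensions
                    entries.append({"path": os.path.join(dirpath, name),
                                    "layer": depth})
        return entries

    for e in build_directory_map("./src")[:5]:
        print(e["layer"], e["path"])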
Step S12, carrying out calling relation identification based on function calling, dependency relation among methods and synchronous and asynchronous calling modes according to source code data, and extracting calling chain depth information so as to obtain calling relation data;
The embodiment of the invention utilizes a function call analysis tool (such as Doxygen or Understand) to identify the method call relation in the code file on the basis of source code analysis. When the calling relation is analyzed, the information of the callee and the caller of each function is extracted, the function dependency relation is analyzed, synchronous and asynchronous calling modes are identified, and the calling type is marked. On the basis, the depth of the calling chain is extracted through a recursion algorithm, and the maximum depth and the average depth of the calling chain are calculated to form a calling chain data structure. And finally integrating the call relation information to form complete call relation data. Taking an e-commerce platform system as an example, synchronous and asynchronous calls in the order management module can be identified, so that the stability of the performance of high-frequency calls is ensured.
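A minimal sketch of the recursive call-chain depth extraction described above, assuming the call graph has already been produced by a call-analysis tool and is given as a plain dictionary:

    def chain_depths(call_graph, entry):
        """Maximum call-chain depth reachable from `entry`, via recursion.

        call_graph: dict mapping a function name to the functions it calls;
        cycles are cut off by tracking visited functions.
        """
        def depth(fn, seen):
            callees = [c for c in call_graph.get(fn, []) if c not in seen]
            if not callees:
                return 1
            return 1 + max(depth(c, seen | {fn}) for c in callees)
        return depth(entry, set())

    graph = {"place_order": ["check_stock", "pay"],
             "pay": ["notify"], "check_stock": [], "notify": []}
    print("max call-chain depth:", chain_depths(graph, "place_order"))   # 3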
Step S13, establishing a mapping relation between functions and codes according to service function comments in the source code data, and carrying out core service flow identification according to calling relation data so as to obtain service function data;
The embodiment of the invention firstly obtains the service function annotation in the source code file, and identifies the keyword describing the service function from the annotation through a natural language processing technology (such as keyword extraction). And then, according to the call relation data, establishing a mapping relation between the annotation key and the code module and between the annotation key and the function, thereby generating a corresponding relation between the function and the code. And marking service function nodes in the core service flow, such as modules of order processing, payment flow and the like, and constructing a core service flow chart. And finally, obtaining complete service function data. For example, in banking systems, core function modules such as funds transfers, account inquiries, etc. may be identified and individually marked and tracked for subsequent analysis.
Step S14, merging the code hierarchy data, the calling relation data and the service function data into code structure data;
The embodiment of the invention combines the code hierarchy data obtained in the step S11, the call relation data obtained in the step S12 and the service function data of the step S13. The merging process aligns the calling relation with the code hierarchical structure in a data mapping mode, and the clarity of the module, class and interface hierarchy of each calling chain is ensured. Then, the function mapping relation is combined with the call chain data to generate comprehensive code structure data containing the structure, the function and the dependency relation. For multi-level calling service modules in a banking system, the combined code structure data can display the calling path of each service module and the corresponding functional module, so that the identification of the subsequent key path is facilitated.
Step S15, performing identification analysis on the critical paths of the software system according to the code structure data, and performing weight assignment to obtain path weight data;
Based on the code structure data obtained in step S14, the embodiment of the invention identifies the critical paths of the software system through a graph traversal algorithm (such as depth-first search). During traversal, weights are assigned to the paths according to dimensions such as the calling depth of the modules and the business importance of the functional modules, ensuring that the paths of core business modules carry higher weight. Path weight data containing the weight of each path is finally obtained. For the risk analysis module of a financial system, funds-related paths may preferentially be set as critical paths and marked with high weight to ensure that such paths are covered first in testing.
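For illustration, a depth-first traversal with a simple additive weighting rule (depth plus assumed per-module business-importance scores; the rule itself is a stand-in, not the claimed assignment) might look like this:

    def weight_paths(call_graph, entry, importance):
        """Enumerate root-to-leaf paths depth-first and score each one.

        importance: assumed per-module business-importance scores; the
        weighting rule (depth + summed importance) is illustrative only.
        """
        paths = []
        def dfs(node, path):
            path = path + [node]
            callees = call_graph.get(node, [])
            if not callees:
                score = len(path) + sum(importance.get(n, 0) for n in path)
                paths.append((path, score))
                return
            for c in callees:
                if c not in path:          # avoid cycles
                    dfs(c, path)
        dfs(entry, [])
        return sorted(paths, key=lambda p: p[1], reverse=True)

    graph = {"transfer": ["risk_check", "ledger"], "risk_check": [], "ledger": []}
    for path, w in weight_paths(graph, "transfer", {"risk_check": 5}):
        print(" -> ".join(path), "weight:", w)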
And S16, performing coverage rate evaluation according to the path weight data, and mapping the coverage effects under different test scenarios to obtain coverage characteristic diagram data.
The embodiment of the invention uses the path weight data generated in step S15 to evaluate the test coverage rate. The code coverage effect under different test scenarios is monitored with a coverage analysis tool (such as JaCoCo or LCOV), and the coverage of each test scenario over paths of different weights is recorded. Coverage rate and weighted paths are mapped during data visualization to generate coverage characteristic diagram data showing the association between test scenarios and coverage rate. Applied to a high-concurrency system, a stress test scenario can be set and coverage rate comparison performed to verify the critical path coverage of the system under concurrency.
Through deep hierarchical analysis of the source code data of a software system, the invention clarifies the relation between the code's organizational structure and its modules. This analysis covers directory structure analysis, module hierarchy relations, class inheritance relations and identification of interface implementation relations, so that testers can comprehensively understand the structure and dependencies of the code and more easily identify complex areas and potential risk points of the system. The detailed code hierarchy data lays a solid foundation for subsequent testing and analysis, improving the pertinence of test coverage and the effectiveness of test planning.

Identifying function calls and dependencies between methods, including synchronous and asynchronous call patterns, helps in understanding the complexity of the code execution flow. Extraction of call chain depth information can reveal key code paths and potential performance bottlenecks. This analysis enables a test engineer to focus on complex and critical call paths, helps determine high-priority areas in test coverage, and optimizes test strategies and resource allocation, thereby improving test efficiency and coverage comprehensiveness.

Mapping functions to code according to the service function annotations in the source code helps identify the parts of the code directly related to service requirements. Identifying the core business processes in combination with the call relation data ensures that the testing process focuses on the most critical business logic paths. This mapping enables testers to determine which code segments are associated with the core service functions, ensures that key service flows are covered in testing, and reduces business risk caused by untested important logic.

Merging the code hierarchy data, call relation data and business function data into complete code structure data provides a comprehensive and detailed view of the software system. The integrated data supports more complex analysis, such as critical path identification and weight assignment, helps evaluate the complexity and criticality of the software system more accurately, and promotes a more efficient and targeted test process. It also provides a unified data source for the subsequent coverage rate evaluation and path analysis, enhancing the systematicness and consistency of testing.

Identifying the critical paths of the software system through analysis of the code structure data and assigning weights helps determine which paths are critical to the operation and functions of the system. This weight assignment enables testers to test the most important code paths first, ensuring that the reliability and performance of critical functions are fully verified. The weight data not only helps optimize test resources but also enables risk management in the test process, reducing missed tests and wasted resources.

Coverage rate evaluation based on the path weight data lets a test engineer clearly understand the coverage effect under different test scenarios. The coverage characteristic diagram provides a visualization tool for mapping test coverage and efficiency, helping identify code portions that have not been fully covered and weak points on the critical paths. This visual, data-driven coverage assessment improves the transparency and operability of testing, so that test engineers can make smarter test strategy adjustments and improve the coverage and effectiveness of the whole test.
Preferably, step S15 comprises the steps of:
step S151, performing code branch logic and loop structure analysis according to the code structure data, and identifying condition judgment nodes and loop entry points to obtain program control flow graph data;
The embodiment of the invention first uses a static analysis tool (such as a Control Flow Graph Generator) to analyze the code branch logic and loop structures in the code structure data and generate a control flow graph of the code. During parsing, condition judgment nodes (such as if-else statements) and loop entry points (such as for and while loops) in the program are identified. The control flow graph is presented in a structured way that clearly shows the position of every logic branch and loop structure in the code, yielding the control flow graph data. In a large online transaction system, complex business logic can involve multi-layer condition judgments and loop structures, and the execution paths of the code logic can be effectively monitored and analyzed through the control flow graph.
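As an illustrative sketch for a Python codebase, the standard ast module can locate condition judgment nodes and loop entry points of the kind the control flow graph is built from:

    import ast

    SOURCE = """
    def settle(orders):
        for o in orders:                 # loop entry point
            if o.amount > 10000:         # condition judgment node
                flag(o)
            while o.pending:             # loop entry point
                o.retry()
    """

    tree = ast.parse(SOURCE)
    for node in ast.walk(tree):
        if isinstance(node, ast.If):
            print("condition node at line", node.lineno)
        elif isinstance(node, (ast.For, ast.While)):
            print("loop entry at line", node.lineno)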
Step S152, counting the call frequency of code blocks in the program control flow graph data, and marking the code blocks whose call frequency is in the top 20% as a high-frequency calling area to obtain high-frequency calling code data;
The embodiment of the invention counts the call frequency of each code block based on the program control flow graph data. The number of calls to each code block is obtained from run-history records or a performance analysis tool (e.g., gprof or perf). According to the statistics, the code blocks whose call frequency is in the top 20% are marked as the high-frequency calling area, forming the high-frequency calling code data. For a real-time response system, such as a trade matching engine, marking the high-frequency calling area allows system bottlenecks to be addressed first, improving the execution efficiency of the core function modules.
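A minimal sketch of the top-20% marking rule, assuming the call counts have already been collected from a profiler such as gprof or perf:

    def high_frequency_blocks(call_counts, fraction=0.2):
        """Mark the most frequently called code blocks as the high-frequency
        calling area; call_counts is assumed to come from profiler output."""
        ranked = sorted(call_counts.items(), key=lambda kv: kv[1], reverse=True)
        cutoff = max(1, int(len(ranked) * fraction))
        return {name for name, _count in ranked[:cutoff]}

    counts = {"match_order": 90_000, "audit_log": 1_200,
              "price_feed": 64_000, "gc_sweep": 300, "risk_calc": 41_000}
    print(high_frequency_blocks(counts))   # top 20% of 5 blocks -> 1 block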
Step S153, performing exception handling logic analysis on the program control flow graph data, and marking code blocks containing exception handling as an exception handling area to obtain exception handling area data;
The embodiment of the invention further analyzes the exception handling logic of each code block on the basis of the program control flow graph data. A static code analysis tool (e.g., FindBugs or PMD) is used to identify code blocks containing exception handling statements, particularly try-catch statements and exception capture and throw constructs. The code blocks containing exception handling are marked as an exception handling area, and the exception handling area data is output. Taking a financial system as an example, some transaction interface modules have complex exception capture logic; by marking the exception handling area data, the exception handling of key functions can be emphasized.
Step S154, extracting code blocks related to data processing and calculation from the program control flow graph data, and marking them as a core logic area to obtain core logic code data;
The embodiment of the invention uses the control flow graph to identify code blocks related to data processing and computational logic. Based on code annotations or variable analysis, code blocks that process large amounts of data or perform complex calculations are extracted and labeled as the core logic area, finally yielding the core logic code data. For large data processing systems, the core logic is typically computation-intensive code; by identifying it, the accuracy and performance of the data computation can receive due attention in testing.
Step S155, performing cyclomatic complexity calculation on the code blocks according to the program control flow graph data to obtain code cyclomatic complexity data, wherein the cyclomatic complexity calculation is specifically: cyclomatic complexity = number of edges in the control flow graph − number of nodes in the control flow graph + 2 × number of connected components in the control flow graph;
The embodiment of the invention calculates the cyclomatic complexity of each code block according to the program control flow graph data. The specific formula is: cyclomatic complexity = number of edges in the control flow graph − number of nodes in the control flow graph + 2 × number of connected components in the control flow graph. Higher cyclomatic complexity means greater code complexity and higher risk. The code cyclomatic complexity data is obtained by this calculation. In a high-frequency trading system, code blocks of high cyclomatic complexity often need particular attention to reduce code maintenance difficulty and improve running stability.
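The formula can be checked with a small sketch using the networkx graph library; the simple if-else control flow graph below has 4 edges, 4 nodes and 1 connected component, giving the expected complexity of 2:

    import networkx as nx

    def cyclomatic_complexity(cfg: nx.DiGraph) -> int:
        """V(G) = E - N + 2P over a control flow graph."""
        e = cfg.number_of_edges()
        n = cfg.number_of_nodes()
        p = nx.number_weakly_connected_components(cfg)
        return e - n + 2 * p

    # if/else block: entry -> {then, else} -> exit
    cfg = nx.DiGraph([("entry", "then"), ("entry", "else"),
                      ("then", "exit"), ("else", "exit")])
    print(cyclomatic_complexity(cfg))   # 4 - 4 + 2*1 = 2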
Step S156, performing code division according to the high-frequency calling code data, the exception handling area data, the core logic code data and the code cyclomatic complexity data to obtain critical path segment data and non-critical path segment data;
The embodiment of the invention divides the code into critical path segments and non-critical path segments according to the obtained high-frequency calling code data, exception handling area data, core logic code data and code cyclomatic complexity data. High-frequency calling areas, exception handling areas, core logic areas and code blocks of high cyclomatic complexity are preferentially classified into critical path segments, and the rest into non-critical path segments, finally yielding the critical path segment data and non-critical path segment data. This partitioning is particularly important in high-concurrency systems, where it helps identify high-load paths to improve system performance and reliability.
And S157, carrying out weight assignment on the critical path segment data and the non-critical path segment data based on the test coverage rate so as to obtain path weight data, wherein the weight assignment is specifically that the value range of the weight coefficient of the critical path segment is 1.2-1.5, and the value range of the weight coefficient of the non-critical path segment is 0.8-1.0.
Based on the path division result, the embodiment of the invention assigns test coverage rate weights to the critical path segments and the non-critical path segments respectively. The weight coefficient of a critical path segment is set in the range 1.2-1.5, and that of a non-critical path segment in the range 0.8-1.0. Through this differentiated setting of weight coefficients, the test strategy is focused on the critical path segments, ensuring that high-priority code blocks obtain higher test coverage, and the path weight data is finally generated. Applied to a complex business system, ensuring high coverage of the critical path segments can significantly improve the stability of the core business flows.
By analyzing the branch logic and loop structures in the code structure and identifying condition judgment nodes and loop entry points, the invention can construct a complete program control flow graph. This process helps reveal the execution paths and potential complexity of the code, providing a basis for subsequent test path selection and coverage assessment. The program control flow graph data enables development and testing teams to visualize and understand the decision and loop structure of the software, improving the comprehensiveness and effectiveness of testing and, in particular, ensuring that the different paths are properly exercised.

Counting the call frequency of each code block in the program control flow and marking a high-frequency calling area helps identify the core modules used most often in the software. These high-frequency code blocks are often critical to the performance and stability of the system and should therefore be given higher priority during testing. Marking the high-frequency call data not only identifies the key areas of the test but also optimizes test resource allocation, ensuring that the most important code is fully tested and reducing potential performance bottlenecks and crash risks.

By analyzing the exception handling logic in the program control flow graph and marking the exception handling area, code blocks that handle errors and exceptional conditions can be efficiently identified. Such analysis is critical to improving the robustness and fault tolerance of the system, since the exception handling portions typically deal with unexpected input and edge conditions. Ensuring test coverage of these code blocks helps prevent runtime errors and unhandled exceptions, thereby improving the reliability of the system.

Extracting the code blocks that perform data processing and computation from the program control flow graph and marking them as the core logic area ensures that the test team focuses on the code portions critical to system functionality. The core logic is typically where the system performs its primary functions, and testing these areas better verifies the system's business logic and prevents critical functional defects. Marking these areas also helps testers prioritize, ensuring the business value and functional integrity of the system.

Cyclomatic complexity calculation quantifies the complexity of each code block by the formula: cyclomatic complexity = number of edges in the control flow graph − number of nodes in the control flow graph + 2 × number of connected components. Code blocks of high cyclomatic complexity typically contain more complex, error-prone logic. Identifying these high-complexity areas helps testers focus on them in the test, increasing the depth and pertinence of test coverage to reduce potential code defects and improve maintainability.

Integrating the high-frequency calling code data, exception handling area data, core logic code data and cyclomatic complexity data to divide the code yields critical path segments and non-critical path segments. Critical path segments typically contain the core functions and high-complexity code on which the system runs, which are critical to overall performance and stability. Marking these paths lets testers allocate test resources reasonably and makes the test plan more targeted and prioritized, so that the most important paths are tested first under limited resources.

By assigning coverage-based weights to the critical and non-critical path segments, test priority can be specified in the test strategy. The weight coefficients of critical path segments are set to 1.2-1.5, reflecting the importance of these paths and the higher test attention they require, while the weight coefficients of non-critical path segments are set to 0.8-1.0, indicating a relatively lower priority. This weight assignment mechanism helps a test team plan and execute testing reasonably, matching the test coverage rate to the actual risk of the system and improving test efficiency and the overall quality of the system.
Preferably, step S156 includes the steps of:
and determining, according to the high-frequency calling code data, the exception handling area data, the core logic code data and the code cyclomatic complexity data, the code segments which are located in the high-frequency calling area or the exception handling area, belong to the core logic area, and have a cyclomatic complexity exceeding a preset threshold as critical path segments, and the remaining code segments as non-critical path segments, thereby obtaining the critical path segment data and the non-critical path segment data.
The embodiment of the invention further screens the code segments according to the obtained high-frequency calling code data, exception handling area data, core logic code data and code cyclomatic complexity data, in order to accurately identify the critical path segments. First, code blocks located in the high-frequency calling area or the exception handling area are screened by comparing the high-frequency calling code data with the exception handling area data; then, from the screened code blocks, the code segments belonging to the core logic area are extracted, ensuring that they both perform high-frequency calling or exception handling and carry key business logic. Next, the cyclomatic complexity of the screened code segments is calculated and compared with a preset threshold. If the cyclomatic complexity exceeds the preset threshold (e.g., 10), the code segment is deemed a critical path segment, and otherwise not. Finally, the code segments meeting the conditions are merged into the critical path segment data, and the other code segments into the non-critical path segment data. Applied to a high-security financial transaction system, this screening mechanism ensures that the high-frequency calling and complex exception handling paths of the core transaction modules obtain higher coverage test priority, further enhancing the robustness and responsiveness of the system.
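The screening rule itself reduces to a conjunction of the four indexes; a minimal sketch follows, with hypothetical segment fields and the example threshold of 10:

    def is_critical(segment, threshold=10):
        """Screening rule of step S156: (high-frequency OR exception-handling)
        AND core-logic AND cyclomatic complexity above the preset threshold.
        The field names on `segment` are assumptions for illustration."""
        return ((segment["high_frequency"] or segment["exception_handling"])
                and segment["core_logic"]
                and segment["complexity"] > threshold)

    segments = [
        {"id": "trade_exec", "high_frequency": True,  "exception_handling": False,
         "core_logic": True,  "complexity": 14},
        {"id": "report_fmt", "high_frequency": False, "exception_handling": False,
         "core_logic": False, "complexity": 4},
    ]
    print([s["id"] for s in segments if is_critical(s)])   # ['trade_exec']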
The method combines multiple indexes (high-frequency calls, exception handling, core logic and cyclomatic complexity) to determine the critical path segment data, which helps improve the accuracy of the evaluation. A single index may not fully reflect the importance of the code; combining these factors ensures that the code segments critical to the system receive attention, so testers can determine which code segments need closer scrutiny and the overall effectiveness of testing improves.

Screening for code segments that are located in a high-frequency calling area or an exception handling area, belong to the core logic area, and have a cyclomatic complexity exceeding a preset threshold identifies the segments at highest risk. These code segments are often the sources of system performance bottlenecks, potential flaws, or complex logic likely to cause problems, so marking them as critical path segments helps testers troubleshoot potential serious problems more specifically in the test.

Since test resources are generally limited, determining the code segments that meet multiple conditions as critical path segments concentrates the test resources on the portions with the greatest influence on system stability and performance. This prioritization helps optimize test time and cost, ensuring that high-value tests are performed adequately. Testing the critical path segments (i.e., the code segments meeting the conditions) with emphasis ensures that exception handling and complex logic are effectively verified, reducing the risk of unhandled exceptions or logic errors during actual operation. Marking the exception handling areas is particularly important because it safeguards the robustness of the system in marginal situations, improving its ability to cope with unexpected inputs and exceptional conditions.

Code segments whose cyclomatic complexity exceeds the preset threshold typically indicate relatively complex code that may present maintainability and readability problems. By marking these complex regions as critical path segments and increasing the intensity of testing and optimization, developers can discover and simplify redundant logic or optimize algorithms, improving the overall maintainability of the code. This multi-index comprehensive judgment ensures that test coverage does not stay on the surface but reaches the most important and complex code segments in the system; testing these areas more thoroughly effectively increases the depth and breadth of test coverage and reduces the possibility of missed tests.
Preferably, step S16 comprises the steps of:
step S161, performing weighted test coverage calculation according to the path weight data to obtain test coverage evaluation data, wherein the weighted test coverage = (critical path segment weight coefficient × number of covered lines on critical paths + non-critical path segment weight coefficient × number of covered lines on non-critical paths) / total number of code lines;
The embodiment of the invention weights the numbers of covered lines on the critical and non-critical path segments according to the path weight data, thereby calculating the weighted test coverage. Specifically, the covered line counts of the critical and non-critical path segments are first extracted; for example, 300 covered lines on critical paths and 700 covered lines on non-critical paths out of 1000 total lines. The weighted test coverage formula is then applied with the weight coefficients (assuming 1.4 for critical path segments and 0.9 for non-critical path segments): weighted test coverage = (1.4 × 300 + 0.9 × 700) / 1000 = 1.05. This forms the coverage evaluation data for further analysis of the coverage effect.
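The worked example reduces to a one-line computation; the sketch below reproduces the 1.05 figure with the stated weights:

    def weighted_coverage(crit_lines, noncrit_lines, total_lines,
                          w_crit=1.4, w_noncrit=0.9):
        """Weighted test coverage = (w_c * covered critical lines
        + w_nc * covered non-critical lines) / total code lines."""
        return (w_crit * crit_lines + w_noncrit * noncrit_lines) / total_lines

    # Reproduces the example from the text: (1.4*300 + 0.9*700) / 1000 = 1.05
    print(weighted_coverage(300, 700, 1000))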
Step S162, performing uncovered-code-segment feature analysis on the code blocks below a preset test coverage threshold in the test coverage evaluation data to obtain uncovered code type data, wherein the types comprise uncovered high-frequency calls, uncovered exception handling and uncovered core logic;
The embodiment of the invention performs feature analysis on the code blocks below the preset coverage threshold in the coverage evaluation data. Specifically, the type and function labels of the uncovered code are extracted, and the uncovered code segments are divided into three cases: uncovered high-frequency calls, uncovered exception handling and uncovered core logic. For example, when a 20-line code segment is found to be called at high frequency but uncovered, it is classified as the uncovered high-frequency call type. The uncovered code type data obtained this way lays the foundation for the next step of test case generation.
Step S163, generating test case template data of different types according to the uncovered code type data, wherein the templates comprise a load test case template, an exception injection test case template and a boundary value test case template;
According to the embodiment of the invention, test case template data of different types are generated according to the uncovered code type data: a load test case template is generated if the uncovered code segment is of the uncovered high-frequency call type, an exception injection test case template if it is of the uncovered exception handling type, and a boundary value test case template if it is of the uncovered core logic type. For example, in a load test case template generated for an uncovered high-frequency calling area, load conditions for simultaneous access by multiple users can be set to ensure the completeness of the coverage features under test.
Step S164, performing weight-coefficient-based priority ordering on the test case template data to obtain supplementary test suggestion data;
The embodiment of the invention performs weight-coefficient-based priority ordering on the generated test case template data to determine the supplementation priority of the tests. Specifically, priority coefficients are set for the different types of test case templates; for example, the priority coefficient of a critical-path load test is set to 1.5 and that of a boundary value test to 1.2. The supplementary test suggestions are ordered according to these priorities, with high-priority test cases generated first, forming the prioritized supplementary test suggestion data.
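For illustration, the type-to-template mapping and the priority ordering might be sketched as follows; the coefficient for the exception injection template is an assumption, since the text gives values only for the load and boundary value templates:

    # Mapping from uncovered-code type to template, with priority coefficients
    # matching the examples in the text where given.
    TEMPLATES = {
        "uncovered_high_frequency": ("load_test",        1.5),
        "uncovered_core_logic":     ("boundary_value",   1.2),
        "uncovered_exception":      ("exception_inject", 1.3),  # assumed value
    }

    def suggest_tests(uncovered):
        """Turn uncovered-code findings into a priority-ordered suggestion list."""
        suggestions = [(seg, *TEMPLATES[kind]) for seg, kind in uncovered]
        return sorted(suggestions, key=lambda s: s[2], reverse=True)

    uncovered = [("OrderQueue.push", "uncovered_high_frequency"),
                 ("FeeCalc.round",   "uncovered_core_logic")]
    for seg, template, prio in suggest_tests(uncovered):
        print(f"{prio:.1f}  {template:16s}  {seg}")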
And step S165, mapping the coverage effects under different test scenarios according to the supplementary test suggestion data to obtain coverage characteristic diagram data.
The embodiment of the invention maps the coverage effects of different test scenarios according to the supplementary test suggestion data to generate new coverage characteristic diagram data. Specifically, the cases in the supplementary test suggestion data are executed, the increase in the number of actually covered lines is recorded, and the coverage characteristic diagram is updated. For example, after a load test case is added, the coverage characteristic diagram may show critical path coverage rising from 85% to 95%. The optimized coverage effect is displayed intuitively through the coverage characteristic diagram data, providing a basis for test strategy adjustment and further test optimization.
The invention uses the path weight data to perform a weighted test coverage calculation that reflects the importance of different code segments. By weighting critical and non-critical path segments separately and combining the covered line counts into an overall coverage figure, the test coverage is not just a simple line count but a more accurate reflection of the important areas of the system. This more truly embodies the coverage of system stability and core functions and helps test managers better evaluate the validity of the test.

Feature analysis of the code blocks below the preset test coverage threshold identifies which code segments belong to uncovered high-frequency calling areas, exception handling areas or core logic areas. This helps the test team quickly locate key code ignored in testing and avoid missing potential risk. The step ensures that the test not only covers "easy-to-test" code but also attends to the important parts that may hide defects yet be overlooked.

Generating different types of test case templates from the uncovered code type data, such as load, exception injection and boundary value test case templates, is a strategic way of producing test cases that improves the pertinence and efficiency of testing. Particularly in a complex system, a suitable test case template generated according to the uncovered code type shortens the time for manually writing test cases and improves test quality.

Priority ordering of the generated test case template data based on the weight coefficients ensures that the most important test tasks are executed first. Such a ranking strategy optimizes test resource allocation so that the test team can prioritize the code segments with the greatest impact on the system and the highest risk, facilitating maximum test coverage and quality within limited time and resources.

Mapping the coverage effects of different test scenarios according to the supplementary test suggestion data generates coverage characteristic diagram data. This mapping provides visual coverage analysis, so that a tester can see intuitively how test coverage changes and improves under different scenarios. It not only helps quickly identify weak links in the test but also verifies the validity of the adjusted test strategy, supporting continuous improvement of the test process.

Together these steps form a complete data-driven test optimization flow. Through weighted coverage calculation, identification of uncovered code features, template generation, prioritization and effect mapping, the flow improves the comprehensiveness, accuracy and effectiveness of testing; it enables a test team to allocate test resources efficiently and focus on high-risk, critical code segments; it reduces the probability of missed defects by verifying the core functions and edge conditions of the system through uncovered-feature analysis and targeted case templates; and it raises system reliability by deepening and broadening the test through targeted supplementary testing and priority ordering. Finally, the method ensures that the test coverage rate is not merely a statistic but an index that genuinely reflects the stability and robustness of the system, providing a higher level of support for software quality assurance.
Preferably, step S2 comprises the steps of:
S21, constructing a test path execution model according to the coverage characteristic diagram data, and performing simulation analysis on the execution time, the resource consumption and the concurrency performance of different test paths to obtain execution efficiency data;
According to the embodiment of the invention, a test path execution model is constructed from the coverage characteristic diagram data, and the execution time, resource consumption and concurrency performance of each test path are analyzed by simulation to obtain execution efficiency data. First, a simulation model of path execution is established from the critical and non-critical paths marked in the coverage characteristic diagram. In the model, simulation calculations are run by setting the execution time, resource consumption and number of concurrent threads for each path; for example, increasing the resource allocation on a critical path by 10% and observing the resulting reduction in execution time. Average execution efficiency data is obtained over multiple simulations, and the execution efficiency differences between critical and non-critical paths are recorded, providing a reference for resource allocation.
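A toy Monte-Carlo sketch of the path execution simulation, assuming execution time scales inversely with the allocated resource share and carries uniform jitter (a simplification for illustration, not the claimed model):

    import random

    def simulate_path(base_time, resource_share, runs=1000):
        """Monte-Carlo estimate of mean execution time for one test path.

        Assumption: execution time scales inversely with the resource share
        and carries random jitter; this is a toy model only.
        """
        samples = [base_time / resource_share * random.uniform(0.9, 1.1)
                   for _ in range(runs)]
        return sum(samples) / runs

    random.seed(7)
    baseline = simulate_path(base_time=2.0, resource_share=1.0)
    boosted  = simulate_path(base_time=2.0, resource_share=1.1)  # +10% resources
    print(f"critical path: {baseline:.2f}s -> {boosted:.2f}s with +10% resources")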
S22, acquiring input parameter types, parameter ranges and test data characteristics of different test paths, and establishing a test resource allocation model based on multi-objective optimization according to execution efficiency data;
The embodiment of the invention acquires the input parameter types, the parameter ranges and the test data characteristics of different test paths, and establishes a multi-objective optimized test resource allocation model based on the execution efficiency data. The specific method comprises the steps of firstly extracting the parameter characteristics of each path, for example, the parameter type of a critical path A is integer, the parameter range is [0, 100], and the test data characteristics are high concurrent processing requirements. And then taking the execution efficiency data as constraint conditions, utilizing a multi-objective optimization algorithm (such as a genetic algorithm), comprehensively optimizing the resource allocation and the test coverage rate objective, and finally obtaining a test resource allocation model. For example, less resources are allocated for paths with high execution efficiency, while the resource allocation ratio is increased for critical paths with low execution efficiency to achieve the best balance of coverage and resource usage.
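As a greatly simplified stand-in for the multi-objective optimization model (a real implementation might use a genetic algorithm such as NSGA-II, as suggested above), resources can be allocated in proportion to path weight divided by execution efficiency, so that slow critical paths receive more:

    def allocate_resources(paths, budget):
        """Toy allocation: give each path a share of the budget proportional
        to weight / efficiency. Illustrative only; the patented model is a
        multi-objective optimization, not this single-ratio heuristic."""
        scores = {p["name"]: p["weight"] / p["efficiency"] for p in paths}
        total = sum(scores.values())
        return {name: budget * s / total for name, s in scores.items()}

    paths = [{"name": "A", "weight": 1.4, "efficiency": 0.5},   # critical, slow
             {"name": "B", "weight": 0.9, "efficiency": 1.2}]   # fast
    for name, share in allocate_resources(paths, budget=100).items():
        print(f"path {name}: {share:.1f}% of test resources")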
Step S23, real-time evaluation is carried out on the test resource allocation model according to a preset coverage rate target, and a test execution strategy is dynamically adjusted based on an evaluation result, so that strategy adjustment data is obtained, wherein the strategy adjustment comprises test case execution sequence adjustment, resource allocation proportion adjustment and concurrency adjustment;
According to the embodiment of the invention, the test resource allocation model is evaluated in real time according to the preset coverage rate target, and the test execution strategy is dynamically adjusted based on the evaluation result, so that strategy adjustment data are obtained. Setting a coverage rate target (such as 90%), periodically monitoring the difference between the current coverage rate of each path and the target, and adjusting each resource in a resource allocation model. For example, when the coverage rate of a certain critical path is lower than 80%, the execution sequence of the test cases is advanced through strategy adjustment, so that the allocation of CPU and memory resources is increased, and the concurrency is improved. And after the strategy is dynamically regulated, strategy regulation data are generated according to the new execution condition, so that the test coverage rate is ensured to be achieved.
Step S24, constructing a multi-dimensional test monitoring system comprising test progress monitoring, resource utilization rate monitoring and coverage rate achievement degree monitoring according to the strategy adjustment data, so as to obtain test monitoring index data;
According to the embodiment of the invention, a multi-dimensional test monitoring system comprising test progress monitoring, resource utilization monitoring and coverage achievement monitoring is constructed from the strategy adjustment data to obtain test monitoring index data. A monitoring module records the indexes of test execution in real time, such as per-second resource utilization, real-time coverage achievement and the execution progress of the test cases. The system updates the monitoring parameters according to the strategy adjustment data to keep all monitoring indexes synchronized. The generated test monitoring index data includes resource utilization (e.g., 80%), coverage achievement (e.g., 85%) and test progress (e.g., 60%), providing data support for further optimization of the test process.
And S25, carrying out time sequence analysis and trend prediction on the test monitoring index data, and establishing a feedback regulation mechanism so as to obtain a test process control model, wherein the control model comprises test progress deviation correction control, resource use efficiency optimization control, coverage rate achievement progress control and test quality guarantee control.
The embodiment of the invention performs time sequence analysis and trend prediction on the test monitoring index data, and establishes a feedback regulation mechanism so as to obtain a test process control model. The method comprises the steps of carrying out time sequence analysis on collected monitoring index data, for example, analyzing time sequence trend of resource utilization rate and coverage rate achievement degree, and predicting coverage rate achievement progress in the future by using a linear regression or ARIMA model. And (3) establishing a feedback regulation mechanism according to the prediction result, and carrying out test progress deviation correction, resource use efficiency optimization and coverage rate achievement control. For example, when a test progress lag is predicted, test resources are added or test strategies are adjusted to form a control model containing progress, resources and quality assurance.
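A minimal sketch of the trend-prediction step using a linear fit; an ARIMA model (e.g., from statsmodels) could replace the fit for non-linear trends, as noted above. The coverage series and the 90% target are illustrative values:

    import numpy as np

    def forecast_coverage(history, steps_ahead=3):
        """Fit a linear trend to the coverage-achievement series and
        extrapolate a few monitoring intervals ahead."""
        t = np.arange(len(history))
        slope, intercept = np.polyfit(t, history, 1)
        future = np.arange(len(history), len(history) + steps_ahead)
        return slope * future + intercept

    coverage = [62.0, 66.5, 70.0, 73.8, 77.1]   # % per monitoring interval
    predicted = forecast_coverage(coverage)
    if predicted[-1] < 90.0:
        print("forecast below target -> add resources / reorder cases:", predicted)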
According to the invention, a test path execution model is constructed based on the coverage characteristic diagram data, and simulation analysis of execution time, resource consumption and concurrency performance of the test path is performed. Through simulation analysis, the method can identify which test paths have lower execution efficiency and more consumed resources, and help test teams to optimize test strategies in time. By evaluating different paths, the test resources can be more accurately distributed, invalid test execution is reduced, and efficient execution of important paths is ensured, so that the efficiency of the whole test process is improved. By acquiring input parameters, parameter ranges and test data characteristics of different test paths and combining execution efficiency data, a test resource allocation model based on multi-objective optimization is established. The optimization model comprehensively considers various resource requirements (such as execution time, resource consumption, concurrency performance and the like), ensures that the resource allocation meets the test target and simultaneously can improve the test efficiency as much as possible. Through multi-objective optimization, resource waste can be avoided, and the rationality and the resource utilization rate of test execution are improved. And carrying out real-time evaluation according to a preset coverage rate target and a test resource allocation model, and dynamically adjusting a test execution strategy based on an evaluation result. The dynamic adjustment mechanism comprises adjustment of the execution sequence, the resource allocation proportion and the concurrency of the test cases, so that the test process can reach the expected coverage rate target to the maximum extent in each stage. the real-time adjustment can effectively cope with the possible changes in the test process, such as the problems of delay of the test progress, resource bottleneck and the like, ensures that the test task can be completed on time, and improves the test efficiency. Based on the strategy adjustment data, a multi-dimensional test monitoring system is constructed, and the multi-dimensional test monitoring system comprises test progress monitoring, resource utilization rate monitoring and coverage rate achievement degree monitoring. The multi-dimensional monitoring system can track various key indexes in the testing process in real time, help testers to know the testing progress and the resource use condition in time, and quickly discover potential problems to make adjustments. By monitoring the key data, the smooth achievement of the test target can be ensured, and the resource waste and time delay are avoided. And (3) establishing a feedback regulation mechanism by carrying out time sequence analysis and trend prediction on the test monitoring index data, and finally obtaining a test process control model. The control model comprises correction of test progress deviation, optimization of resource utilization efficiency, control of coverage rate achievement progress and guarantee control of test quality. Through real-time feedback adjustment, a test team can timely find and deal with deviation, bottleneck and quality problems in the test, and the test process is ensured to be more stable and efficient. The control model keeps balance of test progress and quality in dynamic adjustment, and excessive adjustment or resource waste in the test process is prevented. 
The method has the advantages that, by combining the above steps, simulation analysis and optimized resource allocation reduce unnecessary test path execution and make test work more efficient; real-time evaluation and dynamic adjustment of the test strategy ensure flexibility and adaptability in the test process and effectively avoid delays to the test plan; the multi-objective optimization model allocates resources reasonably, maximizing resource utilization and avoiding waste; the multi-dimensional monitoring and feedback control mechanism guarantees test quality and reduces the risk of missed tests; and monitoring and analysis of key indexes such as test progress, resource utilization and coverage rate help the test team make timely decision adjustments in a dynamic environment. Together, these steps ensure the comprehensiveness, scientific rigor and efficiency of testing, helping the project team achieve optimal test coverage and quality assurance within limited time and resources.
Preferably, step S3 comprises the steps of:
Step S31, obtaining defect report data of different testing stages, and extracting characteristics of the defect according to the severity, the influence range, the triggering condition and the recurrence probability of the defect report data, so as to obtain defect characteristic data;
The embodiment of the invention obtains the defect characteristic data by extracting data from defect reports in different testing stages and analyzing the severity, influence range, triggering condition and recurrence probability of the defects. Specifically, the field information in each defect report is first parsed, such as severity (high, medium or low), influence range (affected module or function), triggering condition (specific input or operation steps) and recurrence probability (e.g., 80%). Each defect record is then quantified: high, medium and low severity are mapped to 3, 2 and 1 respectively, and the recurrence probability is converted into a percentage. Finally, structured defect feature data is generated, providing the input for further analysis.
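A minimal sketch of this quantification step, assuming hypothetical report field names and a sample record:

    # Map report fields to numeric features: severity high/medium/low -> 3/2/1,
    # recurrence probability -> percentage (field names are assumptions).
    SEVERITY = {"high": 3, "medium": 2, "low": 1}

    def extract_features(report: dict) -> dict:
        return {
            "id": report["id"],
            "severity": SEVERITY[report["severity"]],
            "impact_modules": len(report["affected_modules"]),
            "trigger": report["trigger_condition"],
            "recurrence_pct": round(report["recurrence"] * 100),
        }

    report = {"id": "BUG-101", "severity": "high",
              "affected_modules": ["auth", "session"],
              "trigger_condition": "concurrent login", "recurrence": 0.8}
    print(extract_features(report))
    # {'id': 'BUG-101', 'severity': 3, 'impact_modules': 2, ..., 'recurrence_pct': 80}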
Step S32, carrying out cluster analysis on the defect characteristic data, and carrying out multidimensional cause analysis based on code quality, business logic, performance efficiency and security vulnerabilities, so as to obtain defect cause data;
The embodiment of the invention performs cluster analysis on the defect characteristic data and carries out multi-dimensional cause analysis across the dimensions of code quality, business logic, performance efficiency and security vulnerabilities to obtain defect cause data. Specifically, a clustering algorithm (such as K-means or DBSCAN) is applied to the defect characteristic data to identify aggregation patterns among the defects. Each defect cluster is then associated with factors such as code quality (e.g., cyclomatic complexity), business logic (e.g., core function paths), performance (e.g., frequently called areas) and security vulnerabilities (e.g., missing input validation); the influence of each cause factor on the defects is determined through statistical analysis, and the cause data is recorded, providing the basis for association analysis.
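The clustering pass may be sketched as follows, assuming scikit-learn is available and using illustrative feature vectors (severity, number of affected modules, recurrence percentage, cyclomatic complexity):

    # Group quantified defect features with K-means, then inspect each
    # cluster against a cause dimension such as cyclomatic complexity.
    import numpy as np
    from sklearn.cluster import KMeans

    # columns: severity, impact_modules, recurrence_pct, cyclomatic_complexity
    X = np.array([
        [3, 2, 80, 15], [3, 3, 75, 14], [1, 1, 10, 3],
        [2, 1, 40, 7],  [1, 1, 15, 4],  [2, 2, 45, 8],
    ])
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    for cluster in set(labels):
        members = X[labels == cluster]
        print(f"cluster {cluster}: mean complexity {members[:, 3].mean():.1f}")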
Step S33, carrying out association rule mining according to defect cause data, identifying a correlation mode among defects, and establishing a defect propagation link diagram so as to obtain defect association data;
According to the embodiment of the invention, association rule mining is carried out on the defect cause data, correlation patterns among defects are identified, and a defect propagation link diagram is established to obtain defect association data. Specifically, an association rule mining algorithm (such as Apriori or FP-growth) is used to analyze the relationships among the different features in the defect cause data and extract frequently co-occurring defect patterns. For example, when a functional module has both high cyclomatic complexity and a high call frequency, the probability and severity of defect recurrence are also high. A defect propagation link diagram is then constructed from these association relationships, identifying the propagation paths of defects among different modules and generating the defect association data.
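As an illustration of the first pass such a miner performs, the following self-contained sketch counts co-occurring cause tags across defect records and keeps pairs above a support threshold; the tags, records and threshold are assumptions:

    # Frequent-pair counting, the first step of an Apriori-style miner.
    from itertools import combinations
    from collections import Counter

    defects = [
        {"high_complexity", "high_call_freq", "recurring"},
        {"high_complexity", "high_call_freq"},
        {"input_validation", "security"},
        {"high_complexity", "recurring"},
    ]
    min_support = 0.5
    pair_counts = Counter(
        pair for d in defects for pair in combinations(sorted(d), 2)
    )
    n = len(defects)
    rules = {p: c / n for p, c in pair_counts.items() if c / n >= min_support}
    print(rules)  # e.g. {('high_call_freq', 'high_complexity'): 0.5, ...}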
Step S34, integrating and analyzing the defect cause data and the defect associated data to obtain defect distribution data, wherein the defect distribution data comprises module level defect distribution characteristics, function level defect distribution characteristics and code level defect distribution characteristics;
The embodiment of the invention integrates and analyzes the defect cause data and the defect associated data to obtain defect distribution data, and covers the defect distribution characteristics of a module level, a function level and a code level. The specific method comprises the steps of integrating the defect cause and associated data, and analyzing the distribution characteristics of defects at different levels of the system. Modules and functional paths with higher defect densities are identified by hierarchical statistics, such as analyzing the number and type distribution of defects per module. Finally, defect distribution data including module level, function level and code level are formed, providing reference for subsequent prediction of high risk areas.
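A minimal sketch of the hierarchical roll-up, assuming a hypothetical defect-record layout:

    # Roll defect records up to the module and function levels to locate
    # high-density areas (record fields are assumptions).
    from collections import Counter

    defects = [
        {"module": "auth", "function": "login", "file": "auth/login.py"},
        {"module": "auth", "function": "login", "file": "auth/login.py"},
        {"module": "auth", "function": "logout", "file": "auth/logout.py"},
        {"module": "report", "function": "export", "file": "report/export.py"},
    ]

    module_dist = Counter(d["module"] for d in defects)
    function_dist = Counter((d["module"], d["function"]) for d in defects)
    print(module_dist.most_common())    # module-level distribution
    print(function_dist.most_common())  # function-level distribution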
Step S35, predicting a potential high-risk defect area according to the defect distribution data and the test process control model, so as to obtain defect prediction data;
According to the embodiment of the invention, potential high-risk defect areas are predicted from the defect distribution data and the test process control model, so as to obtain defect prediction data. Specifically, the high-defect-density modules in the defect distribution data are combined with the coverage rate and resource usage recorded in the test process control model to predict the high-risk areas likely to be exposed in future testing. For example, regions of the code hierarchy with high cyclomatic complexity that are not yet fully covered are given emphasis, and their probability of defect recurrence is predicted through time-series analysis, generating the defect prediction data.
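The risk scoring may be sketched as a weighted combination of defect density, cyclomatic complexity and the coverage gap; the weights and module metrics below are illustrative assumptions, not values prescribed by the method:

    # Rank modules by a combined risk score (assumed weights and metrics).
    modules = {
        "auth":   {"defect_density": 0.8, "complexity": 15, "coverage": 0.55},
        "report": {"defect_density": 0.2, "complexity": 5,  "coverage": 0.90},
    }
    w_density, w_complexity, w_gap = 0.5, 0.3, 0.2

    def risk(m):
        return (w_density * m["defect_density"]
                + w_complexity * m["complexity"] / 20   # normalized complexity
                + w_gap * (1 - m["coverage"]))          # uncovered share

    prediction = sorted(modules, key=lambda k: risk(modules[k]), reverse=True)
    print(prediction)  # modules ordered from highest to lowest predicted risk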
And step S36, carrying out test scheme adjustment based on the test resource priority allocation strategy, the test case design optimization strategy, the test execution sequence adjustment strategy and the automatic test coverage strategy according to the defect prediction data, so as to obtain optimized test strategy data.
According to the defect prediction data, the embodiment of the invention formulates test scheme adjustment strategies, including a test resource priority allocation strategy, a test case design optimization strategy, a test execution order adjustment strategy and an automated test coverage strategy, so as to obtain optimized test strategy data. Based on the high-risk areas identified in the defect prediction data, test resources are allocated preferentially (e.g., adding testers and time), test case design is optimized for the corresponding areas (e.g., adding boundary-value and exception-handling test cases), and the test execution order is adjusted so that high-risk areas are executed first. Meanwhile, automated test coverage is strengthened to ensure the stability of key areas. An optimized test strategy is thus formed, improving the comprehensiveness and efficiency of defect detection.
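A minimal sketch of the execution-order adjustment, assuming a hypothetical risk ranking and case records:

    # Sort pending test cases so that cases touching predicted high-risk
    # modules run first; automated cases run first within a module.
    risk_rank = {"auth": 0, "payment": 1, "report": 2}  # lower = higher risk

    cases = [
        {"id": "TC-07", "module": "report", "automated": False},
        {"id": "TC-01", "module": "auth", "automated": True},
        {"id": "TC-04", "module": "payment", "automated": False},
    ]
    cases.sort(key=lambda c: (risk_rank[c["module"]], not c["automated"]))
    print([c["id"] for c in cases])  # ['TC-01', 'TC-04', 'TC-07']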
The invention can understand the nature and influence of each defect more accurately by extracting its characteristics in detail. For example, serious defects and low-priority defects may be handled differently, and the influence range and trigger conditions help the test team predict other problems a defect may cause. The defect characteristic data provides the basic data support for the subsequent cluster analysis, cause analysis and prediction, ensuring that the following steps are driven by accurate data.

Through multidimensional cause analysis, the root causes behind defects, such as code quality problems, logic errors and performance or security issues, can be identified, providing a targeted basis for repair. Cause analysis also helps identify the areas prone to defects in each dimension, so the test team can concentrate deep testing on high-risk areas according to the analysis results, improving the effectiveness and quality of testing.

By mining the association rules between defects, it can be revealed which defects may trigger each other, or whether repairing one defect may expose others, providing decision support for defect management and repair strategies. The defect propagation link diagram visualizes the interdependencies between defects, so the team can predict the impact of a repair and make more effective repair decisions.

The defect distribution analysis at the module, function and code levels identifies which parts of the system are most prone to defects, providing a basis for demarcating key test areas. The distribution information at different levels also facilitates a rational arrangement of test strategies and repair order: if a module or function is frequently defective, it is repaired or tested preferentially, improving the overall quality of the system.

By integrating the defect distribution data and the control model, high-risk areas in the system can be identified in advance, so the test team can test them more densely and finely in a targeted manner, reducing the risk of missed defects. For the predicted high-risk areas, the test team can reasonably allocate more test resources and concentrate effort on the hardest problems, improving the test effect. Allocating resources preferentially to high-risk areas avoids waste and improves the pertinence and efficiency of testing; optimizing test case design according to the defect prediction data ensures coverage of potential high-risk scenarios and reduces the risk of missing key defects; and dynamically adjusting the test execution order and the automated test coverage strategy allows new situations in the test process to be handled more flexibly, maximizing the test effect. Together, these steps construct a comprehensive defect management and test optimization flow, effectively improving the accuracy of defect prediction and management and providing data-driven decision support for the test team.
The method has the advantages that the high-risk areas can be identified more accurately through the data-driven defect analysis, the resource allocation is optimized, the resource waste is reduced, the potential high-risk defects can be predicted and prevented through the defect cause analysis and the association rule mining, the later repair cost is reduced, the comprehensiveness and the depth of the test are ensured through the key coverage of the high-risk areas and the optimization of the test strategy, and therefore the overall test quality is improved.
Preferably, step S4 comprises the steps of:
Step S41, performance index data in the test execution process is obtained, wherein the performance index data comprise a test execution time index, a resource occupation index, a test result stability index and a test environment availability index;
The embodiment of the invention collects performance index data from the test execution process, covering test execution time (such as the average execution time of each test case), resource occupation (such as CPU and memory usage), stability of the test results (such as the fluctuation range of the results) and availability of the test environment (such as environment uptime and downtime). Specifically, each test execution is monitored in real time, and the index data is acquired through the monitoring system and recorded in a database for subsequent stability analysis. For example, during a large software test, the resource occupation of each module at different test stages is recorded, providing accurate data for subsequent evaluation.
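By way of illustration, per-case collection might look as follows, assuming the psutil package is available; run_case stands in for an arbitrary, hypothetical test runner:

    # Sample execution time and resource occupation around each test case
    # and return a record for the metrics database.
    import time
    import psutil

    def run_with_metrics(case_id, run_case):
        start = time.perf_counter()
        ok = run_case()  # hypothetical test-runner call
        return {
            "case": case_id,
            "duration_s": time.perf_counter() - start,
            "cpu_pct": psutil.cpu_percent(interval=None),
            "mem_pct": psutil.virtual_memory().percent,
            "passed": ok,
        }

    record = run_with_metrics("TC-01", lambda: True)
    print(record)  # persisted for the subsequent stability analysis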
Step S42, carrying out statistical analysis on the performance index data, and carrying out stability parameter calculation based on the execution success rate of the test cases, the consistency ratio of the test results, the available time length ratio of the test environment and the utilization efficiency of the test resources, so as to obtain stability parameter data;
The embodiment of the invention performs statistical analysis on the collected performance index data and calculates the relevant test stability parameters, including the execution success rate of the test cases (the proportion of test cases that pass), the consistency ratio of the test results (the proportion of consistent results across repeated executions), the available time ratio of the test environment (the availability of the test environment within a preset period) and the utilization efficiency of the test resources (the ratio of actual resource occupation to budgeted resources), so as to obtain the stability parameter data. Specifically, the numerical distribution of each performance index is computed, and the stability parameters are quantified by formula. For example, the test result consistency ratio is judged by comparing the results of the same test executed multiple times, and the generated stability parameter data reflects the stability of the test environment and execution.
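The four parameters may be computed directly from aggregated counts, as in this sketch with assumed totals:

    # Stability parameters from aggregated counts (all inputs assumed).
    runs = {"total": 200, "passed": 184, "consistent_reruns": 46, "reruns": 50}
    env = {"available_h": 158.0, "planned_h": 160.0}
    resources = {"used_units": 72.0, "budgeted_units": 100.0}

    params = {
        "success_rate": runs["passed"] / runs["total"],
        "consistency_ratio": runs["consistent_reruns"] / runs["reruns"],
        "env_availability": env["available_h"] / env["planned_h"],
        "resource_efficiency": resources["used_units"] / resources["budgeted_units"],
    }
    print(params)  # stability parameter data for the evaluation model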
Step S43, constructing a test stability model according to the stability parameter data, and evaluating the reliability of the test process based on the test stability model so as to obtain reliability evaluation data, wherein the evaluation contents comprise test process consistency evaluation, test result repeatability evaluation and test environment reliability evaluation;
According to the embodiment of the invention, a test stability evaluation model is constructed from the stability parameter data, and the evaluation model covers test process consistency evaluation, test result repeatability evaluation and test environment reliability evaluation. Specifically, the stability parameter data is substituted into the evaluation model to calculate the overall reliability score of the test process. The model assigns weights to the evaluation indexes and applies a weighted average to obtain the final score. For example, test environment reliability may be weighted at 30%, test result repeatability at 40% and test process consistency at 30%; combining the weights with the parameters generates the reliability evaluation data and allows the repeatability and consistency of the test process under different scenarios to be judged.
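The weighted average from the example above, with assumed sub-scores on a 0-1 scale:

    # Weighted-average reliability score: environment 30%, repeatability 40%,
    # consistency 30% (sub-scores are illustrative assumptions).
    weights = {"environment": 0.30, "repeatability": 0.40, "consistency": 0.30}
    scores = {"environment": 0.95, "repeatability": 0.88, "consistency": 0.92}

    reliability = sum(weights[k] * scores[k] for k in weights)
    print(f"overall reliability score: {reliability:.3f}")  # 0.913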
Step S44, carrying out cause analysis on abnormal fluctuation in the reliability evaluation data, and identifying fluctuation factors including environment factor fluctuation, data factor fluctuation, load factor fluctuation and configuration factor fluctuation so as to obtain fluctuation factor data;
The embodiment of the invention analyzes the reasons of the abnormal fluctuation items in the reliability evaluation data, identifies factors possibly causing fluctuation, including environmental factors (such as hardware faults or network delays), data factors (such as data deviation or data integrity), load factors (such as test load peak time) and configuration factors (such as abnormal changes of system parameters), and generates fluctuation factor data. The specific method comprises the steps of identifying abnormal values through a data mining technology, classifying the abnormal values, and analyzing the reasons of each type of abnormality. For example, in load factor analysis, the relationship of high load and test result fluctuation for a specific period of time is identified, and such information is included in the load factor data, so that subsequent correction is facilitated.
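One simple way to flag such abnormal fluctuation points is a z-score test over the evaluation series, as in this sketch with assumed data and threshold:

    # Flag evaluation points whose z-score exceeds a threshold; classifying
    # each outlier against environment/data/load/configuration logs follows.
    import statistics

    series = [0.91, 0.92, 0.90, 0.93, 0.61, 0.92, 0.89, 0.58]
    mean, stdev = statistics.mean(series), statistics.stdev(series)

    outliers = [i for i, v in enumerate(series) if abs(v - mean) / stdev > 1.5]
    print(f"abnormal fluctuation at evaluation points: {outliers}")  # [4, 7]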
And step S45, carrying out fluctuation factor correction according to the fluctuation factor data, and carrying out test stability score calculation according to the test stability model so as to obtain stability score data.
The embodiment of the invention corrects the various fluctuations according to the fluctuation factor data, for example adjusting the resource configuration during peak periods through a load-balancing strategy, or restoring an abnormal environment configuration. The stability score is then recalculated on the corrected data: the corrected performance indexes are substituted into the test stability model again to obtain the corrected stability scoring data, where the score range may be set between 0 and 100 to display the stability of the test system intuitively. In a complex test scenario, higher-level test steps proceed only after the corrected score exceeds a preset threshold, ensuring the overall test quality.
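A sketch of the rescoring and threshold gate; the weights, corrected parameters and threshold of 80 are illustrative assumptions:

    # Recompute the 0-100 stability score from corrected parameters and
    # gate the next test phase on a preset threshold.
    weights = {"success_rate": 0.3, "consistency_ratio": 0.3,
               "env_availability": 0.2, "resource_efficiency": 0.2}
    corrected = {"success_rate": 0.96, "consistency_ratio": 0.94,
                 "env_availability": 0.99, "resource_efficiency": 0.85}

    score = 100 * sum(weights[k] * corrected[k] for k in weights)
    THRESHOLD = 80
    print(f"stability score: {score:.1f}")  # 93.8
    if score >= THRESHOLD:
        print("proceed to higher-level test steps")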
The invention can understand the performance of the testing process comprehensively from different dimensions by collecting multiple key performance indexes, helping to identify problems such as test bottlenecks or resource waste. The performance data provides the basic data support for the analysis in subsequent steps, ensuring sufficient evaluation of test stability, reliability and environment availability.

Statistical analysis of the key performance indexes allows potential problems in test execution to be found in time. For example, a low success rate may indicate that test cases are poorly designed, and a low environment-availability ratio may indicate an unstable test environment. Calculating the stability parameters quantifies the stability of the test, reveals the unstable factors in the test process, and provides the data basis for optimizing it. The stability parameters support accurate assessment of the test process, helping the test team adjust strategies according to actual data and improve the stability and efficiency of testing.

Establishing a stability evaluation model enables the stability of the test process to be analyzed and evaluated quantitatively and systematically, ensuring the consistency of the test process, the repeatability of results and the reliability of the environment. Model-based evaluation of these three aspects comprehensively improves the credibility of testing and ensures that the test results are highly reliable. With the reliability evaluation data, the test team can make more scientific decisions based on the measured reliability and optimize the test strategy and flow.

Analyzing abnormal fluctuations accurately identifies the root causes affecting test reliability. For example, environmental fluctuation may be a critical cause of test instability, while load fluctuation may lead to inconsistent results. Detailed analysis of the fluctuation factors targets the problems in the test and provides clear guidance for optimizing the test environment, data processing or resource allocation. Finding and adjusting the causes of fluctuation in time effectively avoids unnecessary errors or instability in the test process and reduces their influence on the results.

Through fluctuation factor correction, the test environment, configuration, data or load can be adjusted appropriately, improving test stability and repeatability. The stability score provides a quantitative indicator of the overall stability of the test, so the test team can clearly understand the stability level of the test process and formulate subsequent optimization measures accordingly. The final stability score, obtained after optimizing and correcting the fluctuation factors, helps improve the overall quality of the test and ensures the consistency and reliability of the results in different environments. Comprehensive performance data acquisition, stability analysis, fluctuation analysis and correction together ensure that problems in the testing process are identified and repaired in time, improving the stability and reliability of the test.
By constructing a stability evaluation model, calculating stability scores and analyzing the cause of abnormal fluctuation, accurate data support can be provided for a test team, so that decisions are more scientific and reasonable. By effectively correcting fluctuation factors in the test process, unnecessary resource waste and time waste can be reduced, and the test efficiency and the overall effect are improved. Through fluctuation factor correction and test stability scoring, a test team can dynamically adjust a test strategy according to real-time data, and reliability and effect of a test process are ensured. Finally, the steps provide a solid data base and decision support for testing stability, reliability, resource utilization efficiency and quality assurance, and help to improve the overall efficiency and quality of the software testing process.
Preferably, step S5 comprises the steps of:
Step S51, extracting efficiency indexes including an execution efficiency index, a resource utilization index, a coverage achievement index and a defect discovery index from the stability scoring data, thereby obtaining efficiency index data;
The embodiment of the invention extracts the execution efficiency, resource utilization, coverage achievement and defect discovery rate indexes from the stability scoring data to generate the efficiency index data. Specifically, the performance data contained in the stability scoring data is used to extract and calculate the test execution efficiency (such as average execution time), resource utilization (such as CPU and memory usage), coverage achievement (such as the ratio of covered code lines to total lines) and the number of defects discovered, yielding the efficiency index data. For example, the average execution time and resource consumption of all test paths are calculated during extraction, facilitating subsequent test resource allocation and efficiency improvement.
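The extraction may be sketched over per-path records as follows; the record layout and code-base size are assumptions:

    # Derive the four efficiency indexes from per-path records.
    records = [
        {"path": "p1", "exec_s": 120, "cpu_pct": 60, "covered": 800, "defects": 3},
        {"path": "p2", "exec_s": 300, "cpu_pct": 85, "covered": 1500, "defects": 7},
    ]
    total_lines = 4000  # assumed size of the code base under test

    efficiency = {
        "avg_exec_s": sum(r["exec_s"] for r in records) / len(records),
        "avg_cpu_pct": sum(r["cpu_pct"] for r in records) / len(records),
        "coverage": sum(r["covered"] for r in records) / total_lines,
        "defects_found": sum(r["defects"] for r in records),
    }
    print(efficiency)  # efficiency index data for the mapping step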
Step S52, carrying out strategy decomposition on the optimized test strategy data, and extracting strategy parameters so as to obtain strategy parameter data;
According to the embodiment of the invention, the optimized test strategy data is decomposed, the main test strategies are refined, and the strategy parameters are extracted, so as to obtain the strategy parameter data. Specifically, the test case execution order, resource allocation proportion, concurrency and the like in the optimized test strategy are decomposed and quantified into strategy parameters: for example, the execution order is set so that test cases in high-risk areas run first, the resource allocation proportion is adjusted to the needs of each test scenario, and the concurrency is set within an appropriate range to adapt to different test demands. The strategy parameter data obtained in this way serves as the basis for the subsequent mapping relationship.
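A minimal sketch representing the extracted strategy parameters as a typed structure, with assumed field values:

    # Typed container for strategy parameters consumed by the mapping step.
    from dataclasses import dataclass

    @dataclass
    class StrategyParams:
        execution_order: list[str]        # high-risk cases first
        resource_ratio: dict[str, float]  # share of resources per module
        concurrency: int                  # parallel execution slots

    params = StrategyParams(
        execution_order=["TC-01", "TC-04", "TC-07"],
        resource_ratio={"auth": 0.5, "payment": 0.3, "report": 0.2},
        concurrency=8,
    )
    print(params)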
Step S53, establishing a multi-dimensional mapping relation between the efficiency index data and the strategy parameter data so as to obtain parameter mapping data, wherein the dimensions comprise time dimension alignment, resource dimension alignment, coverage dimension alignment and quality dimension alignment;
According to the embodiment of the invention, a multi-dimensional mapping relation is established between the extracted efficiency index data and the strategy parameter data, generating the parameter mapping data. Specifically, with the efficiency indexes and strategy parameters as the two ends, mappings are constructed along the time dimension (such as test time alignment), the resource dimension (such as resource utilization optimization), the coverage dimension (such as allocating test priority to improve coverage) and the quality dimension (such as defect rate control). For example, the resource allocation proportion is matched with the resource utilization rate so that resource-efficient test cases are scheduled preferentially, and the coverage achievement index is aligned with the coverage dimension to ensure that the test coverage of important functions meets the target, yielding accurate parameter mapping data.
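By way of illustration, the dimension-by-dimension alignment may be recorded as a simple mapping table; the index-parameter pairings below are assumptions, one entry per dimension named above:

    # Align each efficiency index with the strategy parameter that controls it.
    parameter_mapping = {
        "time":     {"index": "avg_exec_s",    "parameter": "concurrency"},
        "resource": {"index": "avg_cpu_pct",   "parameter": "resource_ratio"},
        "coverage": {"index": "coverage",      "parameter": "execution_order"},
        "quality":  {"index": "defects_found", "parameter": "case_design"},
    }
    for dim, link in parameter_mapping.items():
        print(f"{dim} dimension: tune {link['parameter']} against {link['index']}")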
And S54, carrying out implementation strategy formulation of a staged implementation plan, a resource allocation scheme, risk countermeasure and a quality assurance scheme according to the parameter mapping data, thereby obtaining comprehensive test scheme data.
According to the embodiment of the invention, a staged implementation plan, a resource allocation scheme, risk countermeasures and a quality assurance scheme are formulated according to the parameter mapping data, generating the comprehensive test scheme data. Specifically, a staged test plan is first made along the time dimension, arranging the targets and priorities of each test stage. The required test resources are then distributed along the resource dimension, optimizing the test environment and concurrent task configuration. Countermeasures are formulated according to the risk assessment results, such as allocating test resources preferentially to high-risk areas and preparing a corresponding rollback scheme. Finally, along the quality dimension, the coverage rate and defect rate are kept within acceptable ranges, and a periodic quality assessment and feedback mechanism is arranged in the quality assurance scheme so that the test strategy can be adjusted in time.
The invention can comprehensively evaluate the efficiency of the test process by extracting efficiency indexes of different dimensions, covering execution efficiency, resource use efficiency, test coverage achievement and defect discovery effect. This helps identify quickly in which respects the test process can be further optimized. Converting the different test dimensions into specific efficiency indexes quantifies the actual effect of the test activity and provides data support for improvement; in particular, extracting the resource utilization index helps identify resource waste or unbalanced allocation, so that the allocation can be optimized and test efficiency improved.

Strategy decomposition and parameter extraction make the direction of test strategy optimization explicit, for example optimizing the test execution order, the resource allocation proportion or the degree of concurrency. Once the strategy parameters are extracted, fine-grained management becomes possible, and the test strategy can be adjusted parameter by parameter, ensuring that the optimization measures at each stage are targeted and effective. Disassembling a complex optimization strategy into specific parameters helps the test team formulate a more operable optimization scheme and makes implementation more efficient.

Multi-dimensional mapping organically combines the test targets of different dimensions (time, resources, coverage rate and quality), so that all key dimensions are balanced and adjusted reasonably during strategy optimization. Multi-dimensional alignment helps control the test process more accurately, ensures that the different aspects of the test (such as resource usage and quality assurance) work cooperatively, and improves the overall effect. The cross-dimension mapping relations also supply the test team with cross-dimension data support, helping decision makers make comprehensive, data-driven decisions in complex situations.

A staged implementation plan ensures that the test work proceeds in order, avoiding resource waste and duplicated effort, and each stage can be optimized according to its specific objectives and requirements. The resource allocation scheme ensures a reasonable allocation of test resources at each stage, avoiding shortage or waste and improving the efficiency and benefit of testing. The risk countermeasures and quality assurance scheme identify potential risks in the test process and take measures in advance, while safeguarding test quality and stability and improving reliability. The comprehensive test scheme is planned from multiple dimensions, ensuring that every aspect of the test process is fully considered and providing a systematic management framework for the success of the test.
By extracting and mapping the efficiency indexes and the strategy parameters of multiple dimensions, various aspects of the testing process, including time, resources, coverage rate and the like, can be comprehensively optimized on the premise of ensuring the testing quality. The strategy decomposition and the parameter extraction enable the adjustment of the test strategy to be more accurate and have operability, and are helpful for continuously adjusting and optimizing the test scheme according to the real-time data. Through the multi-dimensional mapping and the formulation of the resource allocation scheme, the test resources can be more reasonably allocated, the resource waste is reduced, and the test efficiency is improved. Through the integration of risk coping and quality assurance schemes, the potential risk in the testing process is effectively controlled, and the testing quality is effectively guaranteed. The steps provide comprehensive decision support for the test team, help the test team to make data-driven decisions, ensure that all links in the test process are optimized, and improve the overall test effect.
The invention also provides a system for constructing a software test management framework and metric system, which is used for executing the above-described method for constructing a software test management framework and metric system, and which comprises:
The code analysis module is used for acquiring code structure data of the software system, wherein the code structure data comprises code hierarchy data, call relation data and service function data, carrying out recognition analysis on a critical path of the software system according to the code structure data, carrying out test coverage rate evaluation, and mapping the coverage effect under different test scenes so as to obtain coverage characteristic diagram data;
the efficiency optimization module is used for performing simulation analysis on the execution efficiency of different test paths according to the coverage characteristic diagram data to obtain execution efficiency data, establishing a test resource allocation model according to the test parameters corresponding to the different paths and the execution efficiency data, and performing real-time adjustment on the test strategy according to a preset coverage rate target and the test resource allocation model so as to obtain strategy adjustment data;
the defect prediction module is used for acquiring defect characteristic data of different testing stages, performing cause analysis on the defect characteristic data, performing correlation extraction to obtain defect distribution data, performing defect prediction according to the defect distribution data and the test process control model, and performing optimal test scheme acquisition so as to obtain optimized test strategy data;
the stability evaluation module is used for acquiring performance index data of the test execution process, constructing a test stability model according to the performance index data, evaluating the reliability of the test process according to the test stability model, and correcting the fluctuation factors so as to obtain stability scoring data;
and the scheme integration module is used for carrying out efficiency parameter alignment on the stability scoring data and the optimized test strategy data so as to obtain comprehensive test scheme data.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.