
CN118246527B - A method and system for predicting quality of abrasive jet cold cutting


Info

Publication number
CN118246527B
CN118246527B (application CN202410209518.XA)
Authority
CN
China
Prior art keywords: expressed, cutting, new, individuals, cutting process
Prior art date
Legal status
Active
Application number
CN202410209518.XA
Other languages
Chinese (zh)
Other versions
CN118246527A (en)
Inventor
武光华
周文喆
靳天宇
孙喜瑞
张庆芳
Current Assignee
Jilin Teachers Institute of Engineering and Technology
Original Assignee
Jilin Teachers Institute of Engineering and Technology
Priority date
Filing date
Publication date
Application filed by Jilin Teachers Institute of Engineering and Technology filed Critical Jilin Teachers Institute of Engineering and Technology
Priority to CN202410209518.XA
Publication of CN118246527A
Application granted
Publication of CN118246527B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/12 - Computing arrangements based on biological models using genetic models
    • G06N 3/126 - Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0499 - Feedforward networks
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/04 - Manufacturing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Manufacturing & Machinery (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Primary Health Care (AREA)
  • Tourism & Hospitality (AREA)
  • Physiology (AREA)
  • Genetics & Genomics (AREA)
  • Complex Calculations (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a method and a system for predicting abrasive jet cold cutting quality, belonging to the technical field of abrasive jet prediction. First, cutting experiment data are collected; second, the experimental data are normalized and divided into a training set and a test set; a radial basis function neural network model is then constructed, and its parameters are optimized with a genetic algorithm; finally, the prediction performance of the model is evaluated on the test set and the error between predicted and actual values is calculated. If the error falls within the error range, the radial basis function neural network model is output; if it does not, the model is retrained. By combining a radial basis function neural network with a genetic algorithm, the invention can effectively model the nonlinear relationships in the abrasive jet cold cutting process and improves the accuracy and stability of cutting quality prediction.

Description

Abrasive jet flow cold cutting quality prediction method and system
Technical Field
The invention belongs to the technical field of abrasive jet prediction, and particularly relates to a method and a system for predicting abrasive jet cold cutting quality.
Background
Abrasive jet cold cutting is a technique that cuts various materials with a liquid-solid two-phase jet formed by mixing abrasive particles into a high-pressure water jet. Abrasive jet cold cutting offers a low cutting temperature, high cutting precision, a wide cutting range and little environmental pollution, and is widely applied in industry, ocean engineering, aerospace and other fields. However, the quality of abrasive jet cold cutting is affected by many factors, such as the cutting process parameters, the abrasive particle characteristics and the properties of the material being cut.
At present, there are two main approaches to predicting abrasive jet cold cutting quality: physical-model-based methods and data-driven methods. Physical-model-based methods establish a mathematical model from the physical mechanism of abrasive jet cold cutting and obtain the predicted cutting quality by solving the model's equations. They can reflect the intrinsic laws of abrasive jet cold cutting, but the model is complex to build, its parameters are difficult to determine, and solving it is time-consuming. Data-driven methods use machine learning and other artificial intelligence techniques to build a data model from abrasive jet cold cutting experimental data and obtain the predicted cutting quality by training the model's parameters. They can predict cutting quality quickly and accurately, but the resulting models often have poor generalization capability, low interpretability and poor stability. There is therefore a need for an abrasive jet cold cutting prediction method that combines physical mechanisms with data-driven modelling.
Disclosure of Invention
In view of these technical problems, the invention provides a method and a system for predicting abrasive jet cold cutting quality, aiming to accurately predict and optimize cutting quality by collecting cutting experiment data and combining a neural network model with a genetic algorithm.
The invention provides a method for predicting the cold cutting quality of abrasive jet, which comprises the following steps:
step S1: collecting cutting experiment data; the experimental data comprise cutting process parameters and cutting processing quality indexes;
Step S2: carrying out normalization processing on the experimental data, and dividing the experimental data into a training set and a testing set;
Step S3: constructing a radial basis function neural network model; the radial basis function neural network model input layer is a cutting process parameter, the output layer is a cutting processing quality index, the activation function is a Gaussian function and the loss function is a mean square error;
Step S4: optimizing radial basis function neural network model parameters by using a genetic algorithm; the parameters comprise the number of hidden layer neurons, a center vector, a width, a connection weight and bias;
Step S5: evaluating the prediction performance of the radial basis function neural network model by using a test set, and calculating an error between a predicted value and an actual value; judging whether the error accords with an error range, and if so, outputting a radial basis function neural network model; if the error range is not met, retraining the radial basis function neural network model.
Optionally, collecting the cutting experiment data specifically includes:
The cross-sectional area A of the abrasive jet is expressed as:
Wherein d is the nozzle diameter; α is the half apex angle of the jet; L is the target distance;
The velocity V of the abrasive jet is expressed as:
Wherein Q is the abrasive jet flow rate; V is the jet velocity;
The kinetic energy E_k of the jet is expressed as:
Wherein E_k is the kinetic energy of the abrasive jet; ρ is the abrasive jet density;
The pressure P of the jet is expressed as:
Wherein P is the abrasive jet pressure;
The impact force F of the jet is expressed as:
Wherein F is the impact force of the abrasive jet; θ is the angle between the jet axis and the surface of the cut material;
The loss level LR is expressed as:
Wherein LR is the loss degree; m_s is the mass of the sample after cutting; m_a is the mass of abrasive consumed from the nozzle during cutting; m_n is the mass of the nozzle worn during cutting; ρ_a is the abrasive density; ρ_n is the nozzle density; t is the cutting time; ε is the nozzle wear coefficient; m_0 is the mass of the material before cutting, m_0 = m_s + m_a + m_n;
The energy consumption EC is expressed as:
Wherein EC is the energy consumption; W_e is the electric energy consumed in the cutting process; W_w is the water energy consumed in the cutting process; η_e is the electric energy conversion efficiency; ρ_w is the density of water; c_w is the specific heat capacity of water; ΔT is the temperature rise of the water;
The cutting accuracy CP is expressed as:
Wherein CP is the cutting precision, i.e. the similarity between the actual kerf and the expected kerf; Δl is the deviation of the kerf length from the expected length; Δw is the deviation of the kerf width from the expected width; R_a is the kerf surface roughness after cutting.
Optionally, normalizing the experimental data specifically includes:
Wherein N is the number of experimental data samples; X is the cutting process parameter matrix; Y is the cutting processing quality index matrix;
Data normalization is expressed as:
Wherein x_ij and y_ij are the j-th cutting process parameter and cutting processing quality index of the i-th sample, respectively; x'_ij and y'_ij are the corresponding normalized values; x_j and y_j are the j-th columns of cutting process parameters and cutting quality indices, respectively; min(x_j) and max(x_j) are the minimum and maximum values of the j-th column of cutting process parameters; min(y_j) and max(y_j) are the minimum and maximum values of the j-th column of cutting quality indices.
Optionally, constructing the radial basis function neural network model specifically includes:
The input layer of the radial basis function neural network model takes the cutting process parameters; the output layer gives the cutting processing quality indices; the number of hidden-layer neurons is M;
The activation function of the hidden layer is expressed as:
Wherein x is the input-layer data, i.e. the normalized cutting process parameters; c_I is the center vector of the I-th hidden-layer neuron; σ_I is the width parameter of the I-th hidden-layer neuron; φ is the radial basis function, i.e. a Gaussian function; ‖·‖ is the Euclidean norm;
The output function of the output layer is expressed as:
Wherein y_J(x) is the output value of the J-th output-layer neuron, i.e. the normalized cutting processing quality index; w_IJ is the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron; b_J is the bias of the J-th output-layer neuron;
The loss function is expressed as:
Wherein y_kJ is the actual value of the J-th cutting quality index of the k-th sample; the second quantity in the formula is the predicted value of the J-th cutting quality index of the k-th sample; the remaining quantity is the number of training-set samples, i.e. the portion of the experimental data used for training.
Optionally, optimizing the radial basis function neural network model parameters with the genetic algorithm specifically comprises:
Parameter initialization: set the population size to S, the maximum number of generations to G_max, the initial crossover probability to P_c, the initial mutation probability to P_m, the number of elite individuals to E, the current generation to g = 0, the upper limit of the hidden-layer neuron number to M_max, the value range of the center vectors and width parameters to [0, 1], and the value range of the connection weights and biases to [-1, 1];
Generating the initial population: randomly generate S individuals, each consisting of the hidden-layer neuron number M, the center vector C = [c_1, c_2, ..., c_M], the width parameters Σ = [σ_1, σ_2, ..., σ_M], the connection weights W = [w_11, w_12, ..., w_M3] and the biases B = [b_1, b_2, b_3], expressed as:
X_s = [M, C, Σ, W, B], s = 1, 2, ..., S
Wherein X_s is the s-th individual; M is an integer with 1 ≤ M ≤ M_max; C, Σ, W and B are real vectors satisfying 0 ≤ C, Σ ≤ 1 and -1 ≤ W, B ≤ 1;
Fitness calculation: for each individual X_s, construct the RBF neural network from its parameters, compute its loss Loss_s on the training set, and convert it to the fitness function F_s, expressed as:
Wherein ι is a small positive number used to avoid a zero denominator;
Selection operation: use the roulette-wheel method to select S individuals for the next generation while preserving the E elite individuals with the highest fitness, i.e. E individuals that do not take part in crossover and mutation and are copied directly to the next generation;
Crossover operation: use an adaptive crossover probability, expressed as:
Wherein P_c,s is the crossover probability of the s-th individual; g is the current generation; G_max is the maximum number of generations;
For each pair of adjacent individuals, perform crossover according to the crossover probability, i.e. exchange part or all of the parameters to generate two new individuals, specifically:
Crossover of the hidden-layer neuron number M: randomly select one crossover point p, then swap the first p bits of the binary codes of M of the two individuals to generate two new M values, expressed as:
M'_1 = M_1[1:p] + M_2[p+1:M_max]
M'_2 = M_2[1:p] + M_1[p+1:M_max]
Wherein M_1 and M_2 are the original M values of the two individuals; M'_1 and M'_2 are the new M values; M[l:ξ] denotes bits l to ξ of the binary code of M; p satisfies 1 ≤ p ≤ M_max - 1;
Crossover of the center vectors C: randomly select one crossover point a, then swap the first a elements of C of the two individuals to generate two new C vectors, expressed as:
Wherein C'_1 and C'_2 are the new C vectors of the two individuals; c_sβ denotes the β-th center-vector element of the s-th individual; a satisfies 1 ≤ a ≤ M_min, with M_min = min(M'_1, M'_2);
Crossover of the width parameters Σ: randomly select one crossover point r, then swap the first r elements of Σ of the two individuals to generate two new Σ vectors, expressed as:
Wherein Σ'_1 and Σ'_2 are the new Σ vectors of the two individuals; σ_sγ denotes the γ-th width element of the s-th individual; r satisfies 1 ≤ r ≤ M_min;
Crossover of the connection weights W: randomly select one crossover point f, then swap the first f elements of W of the two individuals to generate two new W vectors, expressed as:
Wherein W'_1 and W'_2 are the new W vectors of the two individuals; w_sIJ denotes the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron of the s-th individual; f satisfies 1 ≤ f ≤ 3·M_min;
Crossover of the biases B: randomly select one crossover point h, then swap the first h elements of B of the two individuals to generate two new B vectors, expressed as:
B'_1 = [b_11, b_12, ..., b_1h, b_2(h+1), b_2(h+2), ..., b_23]
B'_2 = [b_21, b_22, ..., b_2h, b_1(h+1), b_1(h+2), ..., b_13]
Wherein B'_1 and B'_2 are the new B vectors of the two individuals; b_sJ denotes the bias of the J-th output-layer neuron of the s-th individual; h satisfies 1 ≤ h ≤ 3;
Mutation operation: use an adaptive mutation probability, expressed as:
Wherein P_m,s is the mutation probability of the s-th individual; g is the current generation; G_max is the maximum number of generations;
For each individual, perform mutation according to the mutation probability, i.e. apply small perturbations to part or all of the parameters to generate a new individual, specifically:
Mutation of the hidden-layer neuron number M: randomly select one point o, then flip the o-th bit of the binary code to generate a new M value, expressed as:
M' = M[1:o-1] + M̄[o] + M[o+1:M_max]
Wherein M' is the new M value; M̄[o] denotes the inverse of the o-th bit of the binary code of M, i.e. 0 becomes 1 and 1 becomes 0; o satisfies 1 ≤ o ≤ M_max;
Mutation of the center vector C: randomly select a point u, then add a normally distributed random number to the u-th element to generate a new C vector, expressed as:
C' = [c_1, c_2, ..., c_u + δ, ..., c_M']
Wherein C' is the new C vector; δ is a random number drawn from the normal distribution N(0, σ_c); σ_c is a small standard deviation controlling the mutation amplitude; u satisfies 1 ≤ u ≤ M';
Mutation of the width parameters Σ: randomly select a point r, then add a normally distributed random number to the r-th element to generate a new Σ vector, expressed as:
Σ' = [σ_1, σ_2, ..., σ_r + δ, ..., σ_M']
Wherein Σ' is the new Σ vector; δ is a random number drawn from the normal distribution N(0, σ_Σ); σ_Σ is a small standard deviation controlling the mutation amplitude; r satisfies 1 ≤ r ≤ M';
Mutation of the connection weights W: randomly select a point v, then add a normally distributed random number to the v-th element to generate a new W vector, expressed as:
W' = [w_11, w_12, ..., w_v + δ, ..., w_3M']
Wherein W' is the new W vector; δ is a random number drawn from the normal distribution N(0, σ_w); σ_w is a small standard deviation controlling the mutation amplitude; v satisfies 1 ≤ v ≤ 3·M';
Mutation of the biases B: randomly select a point z, then add a normally distributed random number to the z-th element to generate a new B vector, expressed as:
B' = [b_1, b_2, ..., b_z + δ, ..., b_3]
Wherein B' is the new B vector; δ is a random number drawn from the normal distribution N(0, σ_b); σ_b is a small standard deviation controlling the mutation amplitude; z satisfies 1 ≤ z ≤ 3;
Termination: if the maximum number of generations G_max is reached, or the change in population fitness is smaller than a set threshold, stop the evolution and output the parameters and fitness of the optimal individual together with the corresponding radial basis function neural network model; otherwise, let g = g + 1, return to the fitness calculation and continue the evolution.
The invention also provides an abrasive jet cold cutting quality prediction system, which comprises:
The experimental data collection module is used for collecting cutting experimental data; the experimental data comprise cutting process parameters and cutting processing quality indexes;
The normalization processing dividing module is used for carrying out normalization processing on the experimental data and dividing the experimental data into a training set and a testing set;
The radial basis function network construction module is used for constructing a radial basis function network model; the radial basis function neural network model input layer is a cutting process parameter, the output layer is a cutting processing quality index, the activation function is a Gaussian function and the loss function is a mean square error;
The network parameter optimization module is used for optimizing radial basis function neural network model parameters by using a genetic algorithm; the parameters comprise the number of hidden layer neurons, a center vector, a width, a connection weight and bias;
The model performance evaluation module is used for evaluating the prediction performance of the radial basis function neural network model by using the test set and calculating the error between the prediction value and the actual value; judging whether the error accords with an error range, and if so, outputting a radial basis function neural network model; if the error range is not met, retraining the radial basis function neural network model.
Optionally, the experimental data collection module specifically includes:
The cross-sectional area A of the abrasive jet is expressed as:
Wherein d is the nozzle diameter; α is the half apex angle of the jet; L is the target distance;
The velocity V of the abrasive jet is expressed as:
Wherein Q is the abrasive jet flow rate; V is the jet velocity;
The kinetic energy E_k of the jet is expressed as:
Wherein E_k is the kinetic energy of the abrasive jet; ρ is the abrasive jet density;
The pressure P of the jet is expressed as:
Wherein P is the abrasive jet pressure;
The impact force F of the jet is expressed as:
Wherein F is the impact force of the abrasive jet; θ is the angle between the jet axis and the surface of the cut material;
The loss level LR is expressed as:
Wherein LR is the loss degree; m_s is the mass of the sample after cutting; m_a is the mass of abrasive consumed from the nozzle during cutting; m_n is the mass of the nozzle worn during cutting; ρ_a is the abrasive density; ρ_n is the nozzle density; t is the cutting time; ε is the nozzle wear coefficient; m_0 is the mass of the material before cutting, m_0 = m_s + m_a + m_n;
The energy consumption EC is expressed as:
Wherein EC is the energy consumption; W_e is the electric energy consumed in the cutting process; W_w is the water energy consumed in the cutting process; η_e is the electric energy conversion efficiency; ρ_w is the density of water; c_w is the specific heat capacity of water; ΔT is the temperature rise of the water;
The cutting accuracy CP is expressed as:
Wherein CP is the cutting precision, i.e. the similarity between the actual kerf and the expected kerf; Δl is the deviation of the kerf length from the expected length; Δw is the deviation of the kerf width from the expected width; R_a is the kerf surface roughness after cutting.
Optionally, the normalization processing dividing module specifically includes:
Wherein N is the number of experimental data samples; X is the cutting process parameter matrix; Y is the cutting processing quality index matrix;
Data normalization is expressed as:
Wherein x_ij and y_ij are the j-th cutting process parameter and cutting processing quality index of the i-th sample, respectively; x'_ij and y'_ij are the corresponding normalized values; x_j and y_j are the j-th columns of cutting process parameters and cutting quality indices, respectively; min(x_j) and max(x_j) are the minimum and maximum values of the j-th column of cutting process parameters; min(y_j) and max(y_j) are the minimum and maximum values of the j-th column of cutting quality indices.
Optionally, the radial basis network construction module specifically includes:
The input layer of the radial basis function neural network model takes the cutting process parameters; the output layer gives the cutting processing quality indices; the number of hidden-layer neurons is M;
The activation function of the hidden layer is expressed as:
Wherein x is the input-layer data, i.e. the normalized cutting process parameters; c_I is the center vector of the I-th hidden-layer neuron; σ_I is the width parameter of the I-th hidden-layer neuron; φ is the radial basis function, i.e. a Gaussian function; ‖·‖ is the Euclidean norm;
The output function of the output layer is expressed as:
Wherein y_J(x) is the output value of the J-th output-layer neuron, i.e. the normalized cutting processing quality index; w_IJ is the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron; b_J is the bias of the J-th output-layer neuron;
The loss function is expressed as:
Wherein y_kJ is the actual value of the J-th cutting quality index of the k-th sample; the second quantity in the formula is the predicted value of the J-th cutting quality index of the k-th sample; the remaining quantity is the number of training-set samples, i.e. the portion of the experimental data used for training.
Optionally, the network parameter optimization module specifically includes:
The parameter initialization sub-module is used to initialize the parameters: set the population size to S, the maximum number of generations to G_max, the initial crossover probability to P_c, the initial mutation probability to P_m, the number of elite individuals to E, the current generation to g = 0, the upper limit of the hidden-layer neuron number to M_max, the value range of the center vectors and width parameters to [0, 1], and the value range of the connection weights and biases to [-1, 1];
The initial population generation sub-module is used to generate the initial population: randomly generate S individuals, each consisting of the hidden-layer neuron number M, the center vector C = [c_1, c_2, ..., c_M], the width parameters Σ = [σ_1, σ_2, ..., σ_M], the connection weights W = [w_11, w_12, ..., w_M3] and the biases B = [b_1, b_2, b_3], expressed as:
X_s = [M, C, Σ, W, B], s = 1, 2, ..., S
Wherein X_s is the s-th individual; M is an integer with 1 ≤ M ≤ M_max; C, Σ, W and B are real vectors satisfying 0 ≤ C, Σ ≤ 1 and -1 ≤ W, B ≤ 1;
The fitness calculation sub-module is used to calculate fitness: for each individual X_s, construct the RBF neural network from its parameters, compute its loss Loss_s on the training set, and convert it to the fitness function F_s, expressed as:
Wherein ι is a small positive number used to avoid a zero denominator;
The selection sub-module is used to perform the selection operation: use the roulette-wheel method to select S individuals for the next generation while preserving the E elite individuals, i.e. the E individuals with the highest fitness, which do not take part in crossover and mutation and are copied directly to the next generation;
The crossover sub-module is used to perform the crossover operation with an adaptive crossover probability, expressed as:
Wherein P_c,s is the crossover probability of the s-th individual; g is the current generation; G_max is the maximum number of generations;
For each pair of adjacent individuals, crossover is performed according to the crossover probability, i.e. part or all of the parameters are exchanged to generate two new individuals, specifically:
Crossover of the hidden-layer neuron number M: randomly select one crossover point p, then swap the first p bits of the binary codes of M of the two individuals to generate two new M values, expressed as:
M'_1 = M_1[1:p] + M_2[p+1:M_max]
M'_2 = M_2[1:p] + M_1[p+1:M_max]
Wherein M_1 and M_2 are the original M values of the two individuals; M'_1 and M'_2 are the new M values; M[l:ξ] denotes bits l to ξ of the binary code of M; p satisfies 1 ≤ p ≤ M_max - 1;
Crossover of the center vectors C: randomly select one crossover point a, then swap the first a elements of C of the two individuals to generate two new C vectors, expressed as:
Wherein C'_1 and C'_2 are the new C vectors of the two individuals; c_sβ denotes the β-th center-vector element of the s-th individual; a satisfies 1 ≤ a ≤ M_min, with M_min = min(M'_1, M'_2);
Crossover of the width parameters Σ: randomly select one crossover point r, then swap the first r elements of Σ of the two individuals to generate two new Σ vectors, expressed as:
Wherein Σ'_1 and Σ'_2 are the new Σ vectors of the two individuals; σ_sγ denotes the γ-th width element of the s-th individual; r satisfies 1 ≤ r ≤ M_min;
Crossover of the connection weights W: randomly select one crossover point f, then swap the first f elements of W of the two individuals to generate two new W vectors, expressed as:
Wherein W'_1 and W'_2 are the new W vectors of the two individuals; w_sIJ denotes the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron of the s-th individual; f satisfies 1 ≤ f ≤ 3·M_min;
Crossover of the biases B: randomly select one crossover point h, then swap the first h elements of B of the two individuals to generate two new B vectors, expressed as:
B'_1 = [b_11, b_12, ..., b_1h, b_2(h+1), b_2(h+2), ..., b_23]
B'_2 = [b_21, b_22, ..., b_2h, b_1(h+1), b_1(h+2), ..., b_13]
Wherein B'_1 and B'_2 are the new B vectors of the two individuals; b_sJ denotes the bias of the J-th output-layer neuron of the s-th individual; h satisfies 1 ≤ h ≤ 3;
The mutation sub-module is used to perform the mutation operation with an adaptive mutation probability, expressed as:
Wherein P_m,s is the mutation probability of the s-th individual; g is the current generation; G_max is the maximum number of generations;
For each individual, mutation is performed according to the mutation probability, i.e. small perturbations are applied to part or all of the parameters to generate a new individual, specifically:
Mutation of the hidden-layer neuron number M: randomly select one point o, then flip the o-th bit of the binary code to generate a new M value, expressed as:
M' = M[1:o-1] + M̄[o] + M[o+1:M_max]
Wherein M' is the new M value; M̄[o] denotes the inverse of the o-th bit of the binary code of M, i.e. 0 becomes 1 and 1 becomes 0; o satisfies 1 ≤ o ≤ M_max;
Mutation of the center vector C: randomly select a point u, then add a normally distributed random number to the u-th element to generate a new C vector, expressed as:
C' = [c_1, c_2, ..., c_u + δ, ..., c_M']
Wherein C' is the new C vector; δ is a random number drawn from the normal distribution N(0, σ_c); σ_c is a small standard deviation controlling the mutation amplitude; u satisfies 1 ≤ u ≤ M';
Mutation of the width parameters Σ: randomly select a point r, then add a normally distributed random number to the r-th element to generate a new Σ vector, expressed as:
Σ' = [σ_1, σ_2, ..., σ_r + δ, ..., σ_M']
Wherein Σ' is the new Σ vector; δ is a random number drawn from the normal distribution N(0, σ_Σ); σ_Σ is a small standard deviation controlling the mutation amplitude; r satisfies 1 ≤ r ≤ M';
Mutation of the connection weights W: randomly select a point v, then add a normally distributed random number to the v-th element to generate a new W vector, expressed as:
W' = [w_11, w_12, ..., w_v + δ, ..., w_3M']
Wherein W' is the new W vector; δ is a random number drawn from the normal distribution N(0, σ_w); σ_w is a small standard deviation controlling the mutation amplitude; v satisfies 1 ≤ v ≤ 3·M';
Mutation of the biases B: randomly select a point z, then add a normally distributed random number to the z-th element to generate a new B vector, expressed as:
B' = [b_1, b_2, ..., b_z + δ, ..., b_3]
Wherein B' is the new B vector; δ is a random number drawn from the normal distribution N(0, σ_b); σ_b is a small standard deviation controlling the mutation amplitude; z satisfies 1 ≤ z ≤ 3;
The termination sub-module is used for the termination condition: if the maximum number of generations G_max is reached, or the change in population fitness is smaller than a set threshold, the evolution stops and the parameters and fitness of the optimal individual together with the corresponding radial basis function neural network model are output; otherwise, g = g + 1 and the process returns to the fitness calculation to continue the evolution.
Compared with the prior art, the invention has the following beneficial effects:
The invention adopts a radial basis function (RBF) neural network as the data model, which can effectively fit nonlinear, high-dimensional and complex data relationships and improves the prediction accuracy of the cutting quality; a genetic algorithm (GA) is adopted as the optimization algorithm, which can effectively avoid local optima, search for the global optimum, and improve the generalization capability and stability of the model; normalization processing and error-range judgment effectively eliminate the influence of data dimension and scale and improve the reliability and robustness of the model.
drawings
FIG. 1 is a flow chart of a method for predicting the cold cutting quality of abrasive jet according to the present invention;
FIG. 2 is a block diagram of an abrasive jet cold cutting quality prediction system according to the present invention.
Detailed Description
The invention is further described below in connection with specific embodiments and the accompanying drawings, but the invention is not limited to these embodiments.
Example 1
As shown in fig. 1, the invention discloses a method for predicting the cold cutting quality of abrasive jet, which comprises the following steps:
Step S1: collecting cutting experiment data; the experimental data includes cutting process parameters and cutting quality indicators.
Step S2: and carrying out normalization processing on experimental data, and dividing the experimental data into a training set and a testing set.
Step S3: constructing a radial basis function neural network model; the radial basis function neural network model has an input layer of cutting process parameters, an output layer of cutting processing quality indexes, an activation function of a Gaussian function and a loss function of a mean square error.
Step S4: optimizing radial basis function neural network parameters using a genetic algorithm; parameters include hidden layer neuron number, center vector, width, connection weight, and bias.
Step S5: evaluating the prediction performance of the radial basis function neural network by using a test set, and calculating an error between a predicted value and an actual value; judging whether the error accords with the error range, and if so, outputting a radial basis function neural network model; if the error range is not met, retraining the radial basis function network.
The steps are discussed in detail below:
Step S1: collecting cutting experiment data; the experimental data includes cutting process parameters and cutting quality indicators.
The step S1 specifically comprises the following steps:
In the abrasive jet structure, the jet is conical, the vertex of the jet is positioned at the outlet of the nozzle, the half vertex angle of the jet is alpha, and the included angle between the axis of the jet and the surface of the material to be cut is theta.
The cross-sectional area A of the abrasive jet is expressed as:
Wherein d is the nozzle diameter. This formula shows that the cross-sectional area of the jet increases with increasing target distance, while the rate of increase decreases as the target distance grows; when the target distance is zero, the cross-sectional area of the jet equals the cross-sectional area of the nozzle, i.e. A = πd²/4; as the target distance tends to infinity, the cross-sectional area of the jet also tends to infinity, i.e. A → ∞.
The velocity V of the abrasive jet is expressed as:
Wherein Q is the abrasive jet flow rate; V is the jet velocity.
The kinetic energy E_k of the jet is expressed as:
Wherein E_k is the kinetic energy of the abrasive jet; m is the abrasive jet mass; ρ is the abrasive jet density.
The pressure P of the jet is expressed as:
Wherein P is the abrasive jet pressure.
The impact force F of the jet is expressed as:
Wherein F is the impact force of the abrasive jet.
The loss level LR is expressed as:
Wherein LR is the loss degree; m_s is the mass of the sample after cutting; m_a is the mass of abrasive consumed from the nozzle during cutting; m_n is the mass of the nozzle worn during cutting; ρ_a is the abrasive density; ρ_n is the nozzle density; t is the cutting time; ε is the nozzle wear coefficient; m_0 is the mass of the material before cutting, m_0 = m_s + m_a + m_n.
The energy consumption EC is expressed as:
Wherein EC is the energy consumption; W_e is the electric energy consumed in the cutting process; W_w is the water energy consumed in the cutting process; η_e is the electric energy conversion efficiency; ρ_w is the density of water; c_w is the specific heat capacity of water; ΔT is the temperature rise of the water.
The cutting accuracy CP is expressed as:
Wherein CP is the cutting precision, i.e. the similarity between the actual kerf and the expected kerf; Δl is the deviation of the kerf length from the expected length; Δw is the deviation of the kerf width from the expected width; R_a is the kerf surface roughness; F is the abrasive jet impact force; α is the half apex angle of the jet; D is the abrasive particle size; d is the nozzle diameter; V is the jet velocity; L is the target distance.
In this embodiment, the cutting process parameters include the jet pressure P, the abrasive flow rate Q, the abrasive particle size D, the nozzle diameter d, the jet velocity V, the target distance L, the abrasive jet impact force F, the jet half apex angle α and the cutting angle θ; the cutting quality indices include the loss degree, the energy consumption and the cutting precision.
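To make the data collection step concrete, the following Python sketch shows how one experimental sample could be packed into an input row of nine process parameters and an output row of three quality indices. The field names and the numeric values are illustrative assumptions, not values given in this description.

import numpy as np

# Illustrative field names for the nine process parameters and three quality indices.
PROCESS_PARAMS = ["pressure_P", "flow_Q", "grain_D", "nozzle_d", "velocity_V",
                  "target_L", "impact_F", "half_angle_alpha", "cut_angle_theta"]
QUALITY_INDICES = ["loss_LR", "energy_EC", "precision_CP"]

def make_sample(params, indices):
    """Pack one cutting experiment into an input row x (9 values) and an output row y (3 values)."""
    x = np.array([params[k] for k in PROCESS_PARAMS], dtype=float)
    y = np.array([indices[k] for k in QUALITY_INDICES], dtype=float)
    return x, y

# Example usage with made-up numbers:
x_row, y_row = make_sample(
    {"pressure_P": 250.0, "flow_Q": 3.5, "grain_D": 0.18, "nozzle_d": 1.0,
     "velocity_V": 600.0, "target_L": 3.0, "impact_F": 45.0,
     "half_angle_alpha": 0.05, "cut_angle_theta": 1.45},
    {"loss_LR": 0.12, "energy_EC": 8.4, "precision_CP": 0.91},
)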
Step S2: and carrying out normalization processing on experimental data, and dividing the experimental data into a training set and a testing set.
The step S2 specifically comprises the following steps:
Wherein N is the number of experimental data samples; X is the cutting process parameter matrix; Y is the cutting processing quality index matrix.
Data normalization is expressed as:
Wherein x_ij and y_ij are the j-th cutting process parameter and cutting processing quality index of the i-th sample, respectively; x'_ij and y'_ij are the corresponding normalized values; x_j and y_j are the j-th columns of cutting process parameters and cutting quality indices, respectively; min(x_j) and max(x_j) are the minimum and maximum values of the j-th column of cutting process parameters; min(y_j) and max(y_j) are the minimum and maximum values of the j-th column of cutting quality indices.
The data are divided into a training set and a test set so that the radial basis function (RBF) neural network and the genetic algorithm (GA) can learn and be optimized on one part of the data and be evaluated and verified on the other. Generally, the training set accounts for the larger portion of the data and the test set for the smaller portion; the specific ratio is determined by the data volume and requirements.
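As a concrete illustration of this step, the following Python sketch performs column-wise min-max normalization and a random train/test split. The 80/20 split ratio and the random seed are assumptions; the description only states that the training set takes the larger share.

import numpy as np

def min_max_normalize(data):
    """Column-wise min-max normalization to [0, 1]; also returns the per-column
    minima and maxima needed later for inverse normalization."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo), lo, hi

def train_test_split(X, Y, train_ratio=0.8, seed=0):
    """Randomly split normalized inputs X and outputs Y into training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(train_ratio * len(X))
    return X[idx[:cut]], Y[idx[:cut]], X[idx[cut:]], Y[idx[cut:]]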
Step S3: constructing a radial basis function neural network model; the radial basis function neural network model has an input layer of cutting process parameters, an output layer of cutting processing quality indexes, an activation function of a Gaussian function and a loss function of a mean square error.
The step S3 specifically comprises the following steps:
A radial basis function (RBF) neural network model is constructed. Its input layer takes the cutting process parameters and has nine neurons, corresponding to the jet pressure P, abrasive flow rate Q, abrasive particle size D, nozzle diameter d, jet velocity V, target distance L, abrasive jet impact force F, jet half apex angle α and cutting angle θ; the input-layer neurons are set according to the actual situation. The output layer gives the cutting processing quality indices and has three neurons, one for each of the three cutting quality indices. The number of hidden-layer neurons is M and can be determined through GA optimization.
The activation function of the hidden layer is a Gaussian function, expressed as:
Wherein x is the input-layer data, i.e. the normalized cutting process parameters; c_I is the center vector of the I-th hidden-layer neuron; σ_I is the width parameter of the I-th hidden-layer neuron; φ is the radial basis function, i.e. a Gaussian function; ‖·‖ is the Euclidean norm.
The output function of the output layer is a linear function, expressed as:
Wherein y_J(x) is the output value of the J-th output-layer neuron, i.e. the normalized cutting processing quality index; w_IJ is the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron; b_J is the bias of the J-th output-layer neuron.
The loss function is the mean square error, expressed as:
Wherein y_kJ is the actual value of the J-th cutting quality index of the k-th sample; the second quantity in the formula is the predicted value of the J-th cutting quality index of the k-th sample; the remaining quantity is the number of training-set samples, i.e. the portion of the experimental data used for training. The goal is to minimize the loss function Loss by optimizing the parameters of the RBF neural network.
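The forward pass and loss of the network described above can be sketched in Python as follows. The Gaussian activation is written here with the common exp(-‖x - c‖² / (2σ²)) scaling; the exact form of the denominator is an assumption, since the formula image is not reproduced in this text.

import numpy as np

def rbf_forward(x, centers, widths, weights, bias):
    """Forward pass of the RBF network.
    x: (9,) normalized process parameters; centers: (M, 9) center vectors c_I;
    widths: (M,) width parameters sigma_I; weights: (M, 3) connection weights w_IJ;
    bias: (3,) output biases b_J. Returns the three normalized quality indices y_J(x)."""
    h = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))  # Gaussian hidden layer
    return h @ weights + bias                                              # linear output layer

def mse_loss(Y_true, Y_pred):
    """Mean square error over all samples and quality indices."""
    return float(np.mean((Y_true - Y_pred) ** 2))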
Step S4: optimizing parameters of the radial basis function neural network using an improved genetic algorithm; parameters include hidden layer neuron number, center vector, width, connection weight, and bias.
The step S4 specifically comprises the following steps:
Parameters are initialized: the population size is set to S, the maximum number of generations to G_max, the initial crossover probability to P_c, the initial mutation probability to P_m, the number of elite individuals to E, the current generation to g = 0, the upper limit of the hidden-layer neuron number to M_max, the value range of the center vectors and width parameters to [0, 1], and the value range of the connection weights and biases to [-1, 1].
The initial population is generated: S individuals are generated at random, each consisting of the hidden-layer neuron number M, the center vector C = [c_1, c_2, ..., c_M], the width parameters Σ = [σ_1, σ_2, ..., σ_M], the connection weights W = [w_11, w_12, ..., w_M3] and the biases B = [b_1, b_2, b_3], expressed as:
X_s = [M, C, Σ, W, B], s = 1, 2, ..., S
Wherein X_s is the s-th individual; M is an integer with 1 ≤ M ≤ M_max; C, Σ, W and B are real vectors satisfying 0 ≤ C, Σ ≤ 1 and -1 ≤ W, B ≤ 1.
Fitness is calculated: for each individual X_s, the RBF neural network is constructed from its parameters, its loss Loss_s is computed on the training set and converted to the fitness function F_s, expressed as:
Wherein ι is a small positive number used to avoid a zero denominator; the larger the fitness function, the higher the quality of the individual.
Selection is performed: S individuals are selected for the next generation with the roulette-wheel method, while the E elite individuals, i.e. the E individuals with the highest fitness, are kept out of crossover and mutation and copied directly to the next generation.
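A minimal Python sketch of this selection step, assuming fitness values have already been computed for the whole population, is given below; deep copies and the encoding of individuals are omitted for brevity.

import numpy as np

def roulette_select_with_elitism(population, fitness, n_elite, rng):
    """Keep the n_elite fittest individuals unchanged, then fill the rest of the
    next generation by roulette-wheel sampling with probability proportional to fitness."""
    fitness = np.asarray(fitness, dtype=float)
    order = np.argsort(fitness)[::-1]
    elites = [population[i] for i in order[:n_elite]]
    probs = fitness / fitness.sum()
    picks = rng.choice(len(population), size=len(population) - n_elite, p=probs)
    return elites + [population[i] for i in picks]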
Crossover operation: an adaptive crossover probability is used, expressed as:
Wherein P_c,s is the crossover probability of the s-th individual; g is the current generation; G_max is the maximum number of generations.
The adaptive crossover probability decreases as the generation number increases, so as to maintain population diversity. For each pair of adjacent individuals, crossover is performed according to the crossover probability, i.e. part or all of the parameters are exchanged to generate two new individuals, specifically (a code sketch follows sub-step V below):
I. Crossover of the hidden-layer neuron number M: randomly select one crossover point p, then swap the first p bits of the binary codes of M of the two individuals to generate two new M values, expressed as:
M'_1 = M_1[1:p] + M_2[p+1:M_max]
M'_2 = M_2[1:p] + M_1[p+1:M_max]
Wherein M_1 and M_2 are the original M values of the two individuals; M'_1 and M'_2 are the new M values; M[l:ξ] denotes bits l to ξ of the binary code of M; p satisfies 1 ≤ p ≤ M_max - 1.
II. Crossover of the center vectors C: randomly select one crossover point a, then swap the first a elements of C of the two individuals to generate two new C vectors, expressed as:
Wherein C'_1 and C'_2 are the new C vectors of the two individuals; c_sβ denotes the β-th center-vector element of the s-th individual; a satisfies 1 ≤ a ≤ M_min, with M_min = min(M'_1, M'_2).
III. Crossover of the width parameters Σ: randomly select one crossover point r, then swap the first r elements of Σ of the two individuals to generate two new Σ vectors, expressed as:
Wherein Σ'_1 and Σ'_2 are the new Σ vectors of the two individuals; σ_sγ denotes the γ-th width element of the s-th individual; r satisfies 1 ≤ r ≤ M_min.
IV. Crossover of the connection weights W: randomly select one crossover point f, then swap the first f elements of W of the two individuals to generate two new W vectors, expressed as:
Wherein W'_1 and W'_2 are the new W vectors of the two individuals; w_sIJ denotes the connection weight from the I-th hidden-layer neuron to the J-th output-layer neuron of the s-th individual; f satisfies 1 ≤ f ≤ 3·M_min.
V. Crossover of the biases B: randomly select one crossover point h, then swap the first h elements of B of the two individuals to generate two new B vectors, expressed as:
B'_1 = [b_11, b_12, ..., b_1h, b_2(h+1), b_2(h+2), ..., b_23]
B'_2 = [b_21, b_22, ..., b_2h, b_1(h+1), b_1(h+2), ..., b_13]
Wherein B'_1 and B'_2 are the new B vectors of the two individuals; b_sJ denotes the bias of the J-th output-layer neuron of the s-th individual; h satisfies 1 ≤ h ≤ 3.
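The five crossovers above all follow the same single-point pattern: choose a point and swap the leading segments of the two parents. The Python sketch below shows that pattern for the real-valued vectors (C, Σ, W, B), plus an assumed linear decay for the adaptive crossover probability; the exact decay formula is not reproduced in this text and is therefore an assumption.

import numpy as np

def one_point_swap(v1, v2, rng):
    """Swap the leading segments of two parameter vectors at a random crossover point."""
    k = min(len(v1), len(v2))
    if k < 2:
        return v1.copy(), v2.copy()
    p = rng.integers(1, k)                        # crossover point, 1 <= p < k
    c1, c2 = v1.copy(), v2.copy()
    c1[:p], c2[:p] = v2[:p].copy(), v1[:p].copy()
    return c1, c2

def adaptive_crossover_prob(p_c, g, g_max):
    """Assumed linearly decaying crossover probability for generation g."""
    return p_c * (1.0 - g / g_max)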
Mutation operation: an adaptive mutation probability is used, expressed as:
Wherein P_m,s is the mutation probability of the s-th individual; g is the current generation; G_max is the maximum number of generations.
The adaptive mutation probability decreases as the generation number increases, so as to maintain population diversity. For each individual, mutation is performed according to the mutation probability, i.e. small perturbations are applied to part or all of the parameters to generate a new individual, specifically (a code sketch follows item (5) below):
(1) Mutation of the hidden-layer neuron number M: randomly select a point o, then flip the o-th bit of the binary code to generate a new M value, expressed as:
M' = M[1:o-1] + M̄[o] + M[o+1:M_max]
Wherein M' is the new M value; M̄[o] denotes the inverse of the o-th bit of the binary code of M, i.e. 0 becomes 1 and 1 becomes 0; o satisfies 1 ≤ o ≤ M_max.
(2) Mutation of the center vector C: randomly select a point u, then add a normally distributed random number to the u-th element to generate a new C vector, expressed as:
C' = [c_1, c_2, ..., c_u + δ, ..., c_M']
Wherein C' is the new C vector; δ is a random number drawn from the normal distribution N(0, σ_c); σ_c is a small standard deviation controlling the mutation amplitude; u satisfies 1 ≤ u ≤ M'.
(3) Mutation of the width parameters Σ: randomly select a point r, then add a normally distributed random number to the r-th element to generate a new Σ vector, expressed as:
Σ' = [σ_1, σ_2, ..., σ_r + δ, ..., σ_M']
Wherein Σ' is the new Σ vector; δ is a random number drawn from the normal distribution N(0, σ_Σ); σ_Σ is a small standard deviation controlling the mutation amplitude; r satisfies 1 ≤ r ≤ M'.
(4) Mutation of the connection weights W: randomly select a point v, then add a normally distributed random number to the v-th element to generate a new W vector, expressed as:
W' = [w_11, w_12, ..., w_v + δ, ..., w_3M']
Wherein W' is the new W vector; δ is a random number drawn from the normal distribution N(0, σ_w); σ_w is a small standard deviation controlling the mutation amplitude; v satisfies 1 ≤ v ≤ 3·M'.
(5) Mutation of the biases B: randomly select a point z, then add a normally distributed random number to the z-th element to generate a new B vector, expressed as:
B' = [b_1, b_2, ..., b_z + δ, ..., b_3]
Wherein B' is the new B vector; δ is a random number drawn from the normal distribution N(0, σ_b); σ_b is a small standard deviation controlling the mutation amplitude; z satisfies 1 ≤ z ≤ 3.
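The mutations above reduce to two operations: flipping one bit of the binary-coded hidden-neuron count M, and adding a small normally distributed perturbation to one element of a real-valued vector. A Python sketch under these assumptions:

import numpy as np

def mutate_vector(v, sigma, rng):
    """Add a N(0, sigma) perturbation to one randomly chosen element of a parameter vector."""
    out = v.copy()
    out[rng.integers(len(v))] += rng.normal(0.0, sigma)
    return out

def mutate_hidden_count(m, m_max, rng):
    """Flip one random bit of the binary encoding of M, then clamp to the valid range [1, m_max]."""
    bits = int(m_max).bit_length()
    m_new = m ^ (1 << int(rng.integers(bits)))    # flip a random bit
    return int(min(max(m_new, 1), m_max))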
If the maximum number of generations G_max is reached, the change in population fitness is smaller than a set threshold, or the program is terminated early, the evolution stops and the parameters and fitness of the optimal individual together with the corresponding RBF neural network model are output; otherwise, g = g + 1 and the process returns to the fitness calculation to continue the evolution.
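Putting the pieces together, the overall evolution loop can be sketched as below. The selection, crossover and mutation operators are passed in as callables (for example the helpers sketched earlier); the fitness F_s = 1 / (Loss_s + ι) and the two stopping conditions follow the description above, while the fitness-change threshold value is an assumption.

import numpy as np

def evolve(population, loss_fn, select, crossover, mutate,
           g_max=100, iota=1e-8, fitness_tol=1e-6):
    """Genetic optimization of RBF parameters. loss_fn(individual) must build the
    RBF network from the individual's parameters and return its training loss."""
    best, best_fit, prev_best = None, -np.inf, None
    for g in range(g_max):
        fitness = np.array([1.0 / (loss_fn(ind) + iota) for ind in population])  # F_s
        i = int(np.argmax(fitness))
        if fitness[i] > best_fit:
            best, best_fit = population[i], float(fitness[i])
        # stop when the best fitness no longer changes by more than the threshold
        if prev_best is not None and abs(best_fit - prev_best) < fitness_tol:
            break
        prev_best = best_fit
        population = select(population, fitness)        # roulette selection with elitism
        population = crossover(population, g, g_max)     # adaptive crossover probability
        population = mutate(population, g, g_max)        # adaptive mutation probability
    return best, best_fit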
Step S5: evaluating the prediction performance of the radial basis function neural network by using a test set, and calculating an error between a predicted value and an actual value; judging whether the error accords with the error range, and if so, outputting a radial basis function neural network model; if the error range is not met, retraining the radial basis function network.
The step S5 specifically comprises the following steps:
The test-set data are substituted into the optimal individual's parameters and the corresponding RBF neural network model to obtain the predicted values; the predicted values are then inverse-normalized to obtain the original-scale predictions, specifically:
Wherein min(y_j) and max(y_j) are the minimum and maximum values of the j-th column of cutting quality indices; the two remaining quantities in the formula are the normalized predicted value and the original-scale predicted value obtained after inverse normalization.
The mean absolute error, root mean square error and mean absolute percentage error between the predicted values and the actual values y_J(x) are then calculated, expressed as:
Wherein MAE_J is the mean absolute error of the J-th cutting quality index; RMSE_J is the root mean square error of the J-th cutting quality index; MAPE_J is the mean absolute percentage error of the J-th cutting quality index; a indexes the test-set samples; y_Ja(x) is the actual value of the J-th cutting quality index of the a-th sample, against which the corresponding predicted value is compared.
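For the de-normalized predictions, the three error measures can be computed column-wise (one value per quality index) as in the Python sketch below; the MAPE line assumes no actual value is zero.

import numpy as np

def error_metrics(y_true, y_pred):
    """Return per-index MAE, RMSE and MAPE for test-set predictions,
    where y_true and y_pred have one column per cutting quality index."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err), axis=0)
    rmse = np.sqrt(np.mean(err ** 2, axis=0))
    mape = np.mean(np.abs(err / y_true), axis=0) * 100.0
    return mae, rmse, mape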
Whether the error meets the requirement is judged with a weighted comprehensive evaluation method, specifically:
First, a threshold is set for each error type (mean absolute error, root mean square error and mean absolute percentage error), representing the acceptable error; then, for each error type of each index, the normalized error relative to the corresponding threshold is calculated, expressed as:
Wherein E_J,et is the normalized error of the et-th error type of the J-th cutting quality index; error_J,et is the actual error of the et-th error type of the J-th cutting quality index, covering MAE_J, RMSE_J and MAPE_J; threshold_et is the threshold of the et-th error type.
The weighted error of each index is then calculated, expressed as:
Wherein Γ_J is the weighted error of the J-th cutting quality index; the remaining quantity is the weight of the et-th error type.
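A Python sketch of this weighted comprehensive evaluation is given below. The threshold and weight values are illustrative assumptions, and the weights are assumed to sum to one.

import numpy as np

def weighted_error(errors, thresholds, weights):
    """Normalize each error type by its acceptable threshold, then combine
    the normalized errors with the given weights into a single score."""
    e_norm = np.asarray(errors, dtype=float) / np.asarray(thresholds, dtype=float)
    return float(np.dot(np.asarray(weights, dtype=float), e_norm))

# Example for one quality index, using the MAE, RMSE and MAPE of that index:
gamma_J = weighted_error(errors=[0.03, 0.05, 4.2],
                         thresholds=[0.05, 0.08, 5.0],
                         weights=[0.4, 0.3, 0.3])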
Finally, the magnitude of each index's weighted error Γ_J is analyzed. If Γ_J meets the expected accuracy requirement, the RBF neural network has good prediction performance, can be used for prediction and control of the cutting quality, and the RBF neural network model is output; if Γ_J is too large, the generalization capability of the RBF neural network is insufficient, and the network's parameters or structure must be adjusted, or the training-set data volume increased, before the RBF neural network is retrained.
In this embodiment, an abrasive jet cold cutting robot performs the subsequent cutting. The cutting robot is an intelligent cutting device that combines robotics with cutting technology; it can automatically complete cutting of various shapes and sizes according to different cutting requirements, improving cutting efficiency and quality while reducing cutting cost and labor. The cutting process parameters are sent to the control system of the cutting robot, which controls the robot's motion and the cutting equipment according to the parameter instructions to complete the cutting. During cutting, the cutting state is monitored in real time: the cutting process parameters are fed into the RBF neural network to obtain a real-time prediction, which is compared with the expected target value to compute a real-time prediction error. If the prediction error is within the allowable range, the cutting quality meets the requirement and cutting continues; if the prediction error exceeds the range, the cutting quality is unqualified, the cutting process parameters are adjusted and resent to the control system, and the robot's motion and the cutting equipment are adjusted until the prediction error falls back within the range.
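The closed-loop monitoring described above can be sketched as the following Python routine. All of the callbacks (read_parameters, predict, adjust, send_to_robot) are hypothetical interfaces to the cutting robot's control system; none of them is an API defined by this description.

def monitor_cut(read_parameters, predict, target, tolerance, adjust, send_to_robot):
    """Monitor the cutting state in real time and re-tune the process parameters
    whenever the predicted quality drifts outside the allowed error range."""
    while True:
        params = read_parameters()                  # current cutting process parameters
        if params is None:                          # cutting finished
            break
        predicted = predict(params)                 # RBF model prediction of cutting quality
        if abs(predicted - target) > tolerance:     # quality out of range: adjust and resend
            params = adjust(params, predicted, target)
            send_to_robot(params)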
Example 2
As shown in fig. 2, the present invention discloses an abrasive jet cold cutting quality prediction system, which comprises:
an experimental data collection module 10 for collecting cutting experimental data; the experimental data includes cutting process parameters and cutting quality indicators.
The normalization processing dividing module 20 is configured to perform normalization processing on the experimental data, and divide the experimental data into a training set and a testing set.
A radial basis network construction module 30 for constructing a radial basis neural network model; the radial basis function neural network model has an input layer of cutting process parameters, an output layer of cutting processing quality indexes, an activation function of a Gaussian function and a loss function of a mean square error.
A network parameter optimization module 40 for optimizing radial basis function neural network model parameters using a genetic algorithm; parameters include hidden layer neuron number, center vector, width, connection weight, and bias.
A model performance evaluation module 50 for evaluating the predicted performance of the radial basis function network model using the test set, calculating an error between the predicted value and the actual value; judging whether the error accords with the error range, and if so, outputting a radial basis function neural network model; if the error range is not met, retraining the radial basis function neural network model.
As an alternative embodiment, the experimental data collection module 10 of the present invention specifically includes:
The cross-sectional area a of the abrasive jet is expressed as:
wherein d is the nozzle diameter; alpha is the half apex angle of the jet; l is the target distance.
The velocity V of the abrasive jet is expressed as:
wherein Q is the abrasive jet flow; v is jet velocity.
The kinetic energy E k of the jet is expressed as:
wherein E k is the kinetic energy of abrasive jet flow; ρ is the abrasive jet density.
The pressure P of the jet is expressed as:
Wherein P is the abrasive jet pressure.
The impact force F of the jet is expressed as:
Wherein F is the impact force of abrasive jet; θ is the angle between the axis and the surface of the material being cut.
The loss level LR is expressed as:
Wherein LR is the loss degree; m s is the mass of the sample after cutting; m a is the mass of abrasive consumed from the nozzle during cutting; m n is the mass of nozzle material worn away during cutting; ρ a is the abrasive density; ρ n is the nozzle density; t is the cutting time; ∈ is the nozzle wear coefficient; m 0 is the mass of the material before cutting, m 0 = m s + m a + m n.
The energy consumption EC is expressed as:
Wherein EC is the energy consumption; W e is the electric energy consumed in the cutting process; W w is the water energy consumed in the cutting process; η e is the electric energy conversion efficiency; ρ w is the density of water; c w is the specific heat capacity of water; ΔT is the temperature rise of the water.
The cutting accuracy CP is expressed as:
In the formula, CP is the cutting precision, i.e. the similarity between the kerf obtained after cutting and the intended kerf; ΔL is the deviation of the kerf length from the intended length; ΔW is the deviation of the kerf width from the intended width; R a is the kerf surface roughness after cutting; D is the abrasive particle size.
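Because the formula images are not reproduced in this text, the sketch below uses conventional jet-mechanics relations only to illustrate how A, V, E k, P and F depend on the collected process parameters; the exact expressions in the patent may differ.

```python
import math

def jet_quantities(d, alpha, L, Q, rho, theta):
    """Illustrative jet descriptors (angles in radians; SI units assumed).

    d: nozzle diameter, alpha: jet half apex angle, L: target distance,
    Q: abrasive jet flow rate, rho: jet density, theta: axis-to-surface angle.
    """
    A = math.pi / 4.0 * (d + 2.0 * L * math.tan(alpha)) ** 2  # assumed divergent-jet cross-section
    V = Q / A                                                 # continuity: velocity from flow rate
    Ek = 0.5 * rho * Q * V ** 2                               # assumed kinetic-energy rate of the jet
    P = 0.5 * rho * V ** 2                                    # assumed dynamic pressure
    F = rho * Q * V * math.sin(theta)                         # assumed impact force on an inclined surface
    return A, V, Ek, P, F
```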
As an alternative embodiment, the normalization processing partitioning module 20 of the present invention specifically includes:
wherein N is the number of experimental data; x is a cutting process parameter matrix; y is a cutting processing quality index matrix.
Data normalization is expressed as:
Wherein x ij and y ij are, respectively, the cutting process parameter and the cutting quality index of the i-th sample; x′ ij and y′ ij are, respectively, the normalized cutting process parameter and cutting quality index of the i-th sample; x j and y j are the j-th column of cutting process parameters and cutting quality indexes, respectively; min(x j) and max(x j) are the minimum and maximum of the j-th column of cutting process parameters; min(y j) and max(y j) are the minimum and maximum of the j-th column of cutting quality indexes.
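A minimal sketch of the min-max normalization and the train/test split is given below; the 80/20 split ratio and the random shuffling are assumptions, as the text does not state how the division is performed.

```python
import numpy as np

def min_max_normalize(M):
    """Column-wise min-max normalisation to [0, 1], applied to both X and Y."""
    return (M - M.min(axis=0)) / (M.max(axis=0) - M.min(axis=0))

def train_test_split(X, Y, train_ratio=0.8, seed=0):
    """Random split of the normalised experimental data (ratio is an assumption)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train_ratio * len(X))
    tr, te = idx[:n_train], idx[n_train:]
    return X[tr], Y[tr], X[te], Y[te]
```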
As an alternative embodiment, the radial base network construction module 30 of the present invention specifically includes:
The radial basis function neural network model input layer is a cutting process parameter; the output layer is a cutting processing quality index; the number of neurons in the hidden layer is M.
The activation function of the hidden layer is expressed as:
Wherein x is the input-layer data, i.e. the normalized cutting process parameters; c I is the center vector of the I-th hidden layer neuron; σ I is the width parameter of the I-th hidden layer neuron; φ is the radial basis function, i.e. the Gaussian function; ||·|| is the Euclidean norm.
The output function of the output layer is expressed as:
Wherein y J (x) is the output value of the neuron of the J-th output layer, namely the normalized cutting processing quality index; w IJ is the connection weight of the I-th hidden layer neuron to the J-th output layer neuron; b J is the bias of the J-th output layer neuron.
The loss function is expressed as:
Wherein y kJ is the actual value of the J-th cutting quality index of the k-th sample; the accompanying predicted value is the predicted value of the J-th cutting quality index of the k-th sample; the remaining symbol denotes the number of training set samples, i.e. the portion of the experimental data assigned to the training set.
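The network described above can be sketched as follows; the 2σ² scaling inside the Gaussian is an assumption of the usual RBF form, since the exponent is not reproduced in the text.

```python
import numpy as np

def rbf_forward(x, C, sigma, W, b):
    """Forward pass of the RBF network.

    x: (n_features,) normalised cutting process parameters
    C: (M, n_features) centre vectors, sigma: (M,) width parameters
    W: (M, n_outputs) connection weights, b: (n_outputs,) biases
    """
    dist = np.linalg.norm(x - C, axis=1)             # Euclidean distance to each centre
    phi = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))  # Gaussian activation (scaling assumed)
    return phi @ W + b                               # y_J(x) = sum_I w_IJ * phi_I(x) + b_J

def mse_loss(Y_pred, Y_true):
    """Mean squared error over the training set."""
    return np.mean((np.asarray(Y_pred) - np.asarray(Y_true)) ** 2)
```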
As an alternative embodiment, the network parameter optimization module 40 of the present invention specifically includes:
The parameter initialization submodule is used for initializing parameters: the population size is set to S, the maximum evolution generation to G max, the initial crossover probability to P c, the initial mutation probability to P m, the number of elite individuals to E, and the current generation to g = 0; the upper limit of the number of hidden layer neurons is M max, the value range of the center vector and width parameters is [0, 1], and the value range of the connection weights and biases is [-1, 1].
The initial population generation submodule is used for generating the initial population: S individuals are generated at random, each individual consisting of the number of hidden layer neurons M, a center vector C = [c 1, c 2, ..., c M], a width parameter Σ = [σ 1, σ 2, ..., σ M], a connection weight W = [w 11, w 12, ..., w M3] and a bias B = [b 1, b 2, b 3], expressed as:
Xs=[M,C,∑,W,B],s=1,2,…,S
Wherein X s is the s-th individual; M is an integer satisfying 1 ≤ M ≤ M max; C, Σ, W and B are real vectors satisfying 0 ≤ C, Σ ≤ 1 and -1 ≤ W, B ≤ 1.
The fitness calculation submodule is used for calculating fitness: for each individual X s, an RBF neural network is constructed from its parameters, its loss function Loss s is computed on the training set, and Loss s is then converted into a fitness function F s, expressed as:
Wherein ι is a small positive number used to avoid a zero denominator.
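A sketch of the individual encoding and the fitness conversion is shown below; the reciprocal form F_s = 1/(Loss_s + ι) is inferred from the remark about avoiding a zero denominator and may differ from the exact expression.

```python
import numpy as np

def random_individual(M_max, n_features, n_outputs, rng):
    """Random GA individual X_s = [M, C, Sigma, W, B] within the stated ranges."""
    M = int(rng.integers(1, M_max + 1))
    return {
        "M": M,
        "C": rng.uniform(0.0, 1.0, (M, n_features)),   # centre vectors in [0, 1]
        "Sigma": rng.uniform(0.0, 1.0, M),             # widths in [0, 1]
        "W": rng.uniform(-1.0, 1.0, (M, n_outputs)),   # connection weights in [-1, 1]
        "B": rng.uniform(-1.0, 1.0, n_outputs),        # biases in [-1, 1]
    }

def fitness(loss, iota=1e-8):
    """Convert a training loss into a fitness value (assumed reciprocal form)."""
    return 1.0 / (loss + iota)
```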
The selection submodule is used for performing the selection operation: S individuals are selected for the next generation by the roulette-wheel method, while the E elite individuals, i.e. the E individuals with the highest fitness, are retained, do not take part in crossover and mutation, and are copied directly into the next generation.
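Roulette-wheel selection with elitism can be sketched as follows; keeping the elites inside the returned population matches the description, while the exact bookkeeping is an implementation choice.

```python
import numpy as np

def select(population, fitnesses, n_elite, rng):
    """Roulette-wheel selection with elitism.

    The n_elite fittest individuals are copied unchanged; the remaining slots
    are filled by drawing individuals with probability proportional to fitness.
    """
    fitnesses = np.asarray(fitnesses, dtype=float)
    order = np.argsort(fitnesses)[::-1]                      # indices sorted by fitness, best first
    elites = [population[i] for i in order[:n_elite]]
    probs = fitnesses / fitnesses.sum()                      # roulette-wheel probabilities
    drawn = rng.choice(len(population), size=len(population) - n_elite, p=probs)
    return elites + [population[i] for i in drawn]
```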
The crossover submodule is used for performing the crossover operation with an adaptive crossover probability, expressed as:
Wherein P c,s is the crossover probability of the s-th individual; g is the current generation; G max is the maximum generation number.
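The adaptive crossover probability formula itself is not reproduced in this text; one plausible schedule consistent with an adaptive scheme is sketched below, purely as an assumption.

```python
def adaptive_crossover_prob(p_c, g, g_max):
    """One assumed adaptive schedule: the crossover probability decays
    linearly with the generation number g (exact formula not reproduced)."""
    return p_c * (1.0 - g / g_max)
```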
For each pair of adjacent individuals, a crossover operation is performed according to the crossover probability, i.e. some or all of the parameters are exchanged to generate two new individuals, specifically comprising:
Crossover of the hidden layer neuron number M: a crossover point p is randomly selected, and the first p bits of the binary codes of M of the two individuals are then exchanged, generating two new M values, expressed as:
M′1=M1[1:p]+M2[p+1:Mmax]
M′2=M2[1:p]+M1[p+1:Mmax]
Wherein M 1 and M 2 are the original M values of the two individuals; M′ 1 and M′ 2 are the new M values of the two individuals; M[l:ζ] denotes the l-th to ζ-th bits of the binary encoding of M; p satisfies 1 ≤ p ≤ M max - 1.
Crossover of the center vectors C: a crossover point a is randomly selected, and the first a elements of C of the two individuals are then exchanged, generating two new C vectors, expressed as:
Wherein C′ 1 and C′ 2 are the new C vectors of the two individuals; c sβ denotes the β-th center vector element of the s-th individual; a satisfies 1 ≤ a ≤ M min, where M min = min(M′ 1, M′ 2).
Crossover of the width parameters Σ: a crossover point r is randomly selected, and the first r elements of Σ of the two individuals are then exchanged, generating two new Σ vectors, expressed as:
Wherein Σ′ 1 and Σ′ 2 are the new Σ vectors of the two individuals; σ sγ denotes the γ-th width parameter element of the s-th individual; r satisfies 1 ≤ r ≤ M min.
Crossover of the connection weights W: a crossover point f is randomly selected, and the first f elements of W of the two individuals are then exchanged, generating two new W vectors, expressed as:
Wherein W′ 1 and W′ 2 are the new W vectors of the two individuals; w sIJ is the connection weight from the I-th hidden layer neuron to the J-th output layer neuron of the s-th individual; f satisfies 1 ≤ f ≤ 3M min.
Crossover of the bias B: a crossover point h is randomly selected, and the first h elements of B of the two individuals are then exchanged, generating two new B vectors, expressed as:
B′1=[b11,b12,…,b1h,b2h+1,b2h+2,…,b23]
B′2=[b21,b22,…,b2h,b1h+1,b1h+2,…,b13]
Wherein B′ 1 and B′ 2 are the new B vectors of the two individuals; b sJ is the bias of the J-th output layer neuron of the s-th individual; h satisfies 1 ≤ h ≤ 3.
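The single-point exchanges described above can be sketched as follows; the bit-mask handling of M and the clamping back into [1, M max] are implementation assumptions.

```python
import numpy as np

def single_point_swap(v1, v2, rng):
    """Exchange the first k elements of two parameter vectors (used for C, Sigma, W and B)."""
    k = int(rng.integers(1, min(len(v1), len(v2)) + 1))
    new1 = np.concatenate([v2[:k], v1[k:]])
    new2 = np.concatenate([v1[:k], v2[k:]])
    return new1, new2

def crossover_M(M1, M2, M_max, rng):
    """Single-point crossover on the binary encoding of the hidden-layer size M.

    The first p (high) bits come from one parent and the remaining bits from the other.
    Assumes M_max >= 2 so that the encoding has at least two bits.
    """
    bits = M_max.bit_length()
    p = int(rng.integers(1, bits))                 # crossover point, 1 <= p <= bits - 1
    mask = (1 << (bits - p)) - 1                   # mask selecting the low (bits - p) bits
    new1 = (M1 & ~mask) | (M2 & mask)
    new2 = (M2 & ~mask) | (M1 & mask)
    clamp = lambda m: min(max(m, 1), M_max)        # keep the result in the valid range
    return clamp(new1), clamp(new2)
```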
The mutation submodule is used for performing the mutation operation with an adaptive mutation probability, expressed as:
Wherein P m,s is the mutation probability of the s-th individual; g is the current generation; G max is the maximum generation number.
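As with the crossover probability, the exact adaptive mutation formula is not reproduced here; the schedule below is only one assumed possibility, chosen so that mutation strengthens as the search proceeds to preserve diversity.

```python
def adaptive_mutation_prob(p_m, g, g_max):
    """Assumed adaptive schedule: the mutation probability grows with the generation g."""
    return p_m * (1.0 + g / g_max)
```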
For each individual, a mutation operation is performed according to the mutation probability, i.e. small perturbations are applied to some or all of the parameters to generate a new individual, specifically comprising:
Mutation of the hidden layer neuron number M: a mutation point o is randomly selected, and the o-th bit of the binary code is then flipped, generating a new M value, expressed as:
M′=M[1:o-1]+M[o]+M[o+1:Mmax]
Wherein M′ is the new M value; M[o] here denotes the complement of the o-th bit of the binary encoding of M, i.e. 0 becomes 1 and 1 becomes 0; o satisfies 1 ≤ o ≤ M max.
Mutation of the center vector C: a mutation point u is randomly selected, and a random number drawn from a normal distribution is then added to the u-th element, generating a new C vector, expressed as:
C′=[c1,c2,…,cu+δ,…,cM′]
Wherein C′ is the new C vector; δ is a random number drawn from the normal distribution N(0, σ c); σ c is a small standard deviation used to control the mutation amplitude; u satisfies 1 ≤ u ≤ M′.
Mutation of the width parameter Σ: a mutation point r is randomly selected, and a random number drawn from a normal distribution is then added to the r-th element, generating a new Σ vector, expressed as:
∑′=[σ12,…,σr+δ,…,σM′]
Wherein Σ′ is the new Σ vector; δ is a random number drawn from the normal distribution N(0, σ σ); σ σ is a small standard deviation used to control the mutation amplitude; r satisfies 1 ≤ r ≤ M′.
Mutation of the connection weight W: a mutation point v is randomly selected, and a random number drawn from a normal distribution is then added to the v-th element, generating a new W vector, expressed as:
W′=[w11,w12,…,wv+δ,…,w3M′]
Wherein W′ is the new W vector; δ is a random number drawn from the normal distribution N(0, σ w); σ w is a small standard deviation used to control the mutation amplitude; v satisfies 1 ≤ v ≤ 3M′.
Mutation of the bias B: a mutation point z is randomly selected, and a random number drawn from a normal distribution is then added to the z-th element, generating a new B vector, expressed as:
B′=[b1,b2,…,bz+δ,…,b3]
Wherein B′ is the new B vector; δ is a random number drawn from the normal distribution N(0, σ b); σ b is a small standard deviation used to control the mutation amplitude; z satisfies 1 ≤ z ≤ 3.
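The perturbation-style mutations can be sketched as follows; clamping the mutated M back into [1, M max] is an added safeguard not stated in the text.

```python
import numpy as np

def mutate_vector(v, std, rng):
    """Perturb one randomly chosen element with N(0, std^2) noise (used for C, Sigma, W and B)."""
    v = np.array(v, dtype=float, copy=True)
    i = int(rng.integers(len(v)))        # randomly chosen mutation point
    v[i] += rng.normal(0.0, std)         # small Gaussian perturbation delta
    return v

def mutate_M(M, M_max, rng):
    """Flip one randomly chosen bit of the binary encoding of M."""
    bits = M_max.bit_length()
    o = int(rng.integers(bits))          # bit position to flip
    M_new = M ^ (1 << o)
    return min(max(M_new, 1), M_max)     # keep 1 <= M <= M_max
```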
The termination submodule is used for checking the stopping condition: if the maximum evolution generation G max is reached, or the change in the population fitness is smaller than the set threshold, the evolution stops, and the parameters and fitness of the optimal individual and the corresponding radial basis neural network model are output; otherwise, g = g + 1 and the procedure returns to the fitness calculation to continue the evolution.
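The stopping logic can be wrapped around the per-generation steps roughly as follows; evaluate and next_generation stand for the fitness calculation and the selection-crossover-mutation steps described above, and the function names are illustrative.

```python
def evolve(population, evaluate, next_generation, g_max, tol):
    """Outer GA loop with the two stopping rules described above:
    reaching the maximum generation g_max, or a best-fitness change below tol."""
    best_prev = float("-inf")
    best_ind = None
    for g in range(g_max):
        fitnesses = [evaluate(ind) for ind in population]
        i_best = max(range(len(population)), key=lambda i: fitnesses[i])
        best_ind = population[i_best]
        if abs(fitnesses[i_best] - best_prev) < tol:
            break                                   # fitness change below the set threshold
        best_prev = fitnesses[i_best]
        population = next_generation(population, fitnesses, g)
    return best_ind                                 # parameters of the optimal individual
```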
The above description covers only preferred embodiments of the present invention and is not intended to limit the invention; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
