Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by one of ordinary skill in the art based on the described embodiments without inventive effort fall within the scope of the present disclosure.
Unless defined otherwise, technical or scientific terms used in this disclosure should have the ordinary meaning understood by one of ordinary skill in the art to which this disclosure belongs. The terms "first," "second," and the like, as used in this disclosure, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The words "comprising," "comprises," and the like mean that the element or item preceding the word encompasses the elements or items listed after the word and their equivalents, without excluding other elements or items. The terms "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper," "lower," "left," "right," and the like are used merely to indicate relative positional relationships, which may change when the absolute position of the described object changes. To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of some known functions and known components are omitted.
Conventional floating point numbers typically come in three formats, namely half-precision floating point (FP16), single-precision floating point (FP32), and double-precision floating point (FP64), whose exponent and mantissa portions have different numbers of bits.
AI (Artificial Intelligence) accelerators and the like have been widely used for deep learning model training. For the convolution operations common in deep learning models, special optimizations are made in software and hardware design to accelerate computation. For example, various floating point data formats have been developed for the artificial intelligence or deep learning fields, such as BF16 (brain floating point, 16-bit wide), BF24 (brain floating point, 24-bit wide), TF32 (Tensor Float 32, 19-bit wide), and the like. These data formats can greatly reduce the computing resources and power consumption required by operations such as matrix multiplication and convolution. In addition, processors support some conventional floating point types, such as half-precision floating point (FP16, 16-bit wide) or single-precision floating point (FP32, 32-bit wide).
Table 1 Data formats

Format | Total bits | Sign bits | Exponent bits | Mantissa bits
FP32   | 32         | 1         | 8             | 23
BF24   | 24         | 1         | 8             | 15
BF20   | 20         | 1         | 8             | 11
BF19   | 19         | 1         | 8             | 10
BF16   | 16         | 1         | 8             | 7
Table 1 shows several floating point precision type data formats. As shown in Table 1, for the floating point precision type FP32, the total number of bits is 32, including 1 sign bit, an exponent portion (i.e., the exponent code) of 8 bits, and a mantissa portion of 23 bits. For BF20, the total number of bits is 20, including 1 sign bit, an exponent portion of 8 bits, and a mantissa portion of 11 bits. For BF16, the total number of bits is 16, including 1 sign bit, an exponent portion of 8 bits, and a mantissa portion of 7 bits. For BF24, the total number of bits is 24, including 1 sign bit, an exponent portion of 8 bits, and a mantissa portion of 15 bits. For BF19, the total number of bits is 19, including 1 sign bit, an exponent portion of 8 bits, and a mantissa portion of 10 bits.
As shown in Table 1, taking FP32 and BF16 as examples, the sign bits of FP32 and BF16 are each represented by 1 bit, the exponent portions by 8 bits, and the mantissa portions by 23 and 7 bits, respectively. Thus, FP32 and BF16 represent the same data range, but FP32 has higher precision.
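As an illustration only, the relationship between the FP32 and BF16 layouts of Table 1 can be sketched in Python; the helper name and the sample value are assumptions introduced for this example, and simple truncation (rather than rounding) is shown here.

```python
import struct

def fp32_bits(x: float) -> int:
    # Reinterpret a Python float as the bit pattern of an IEEE-754 single-precision number.
    return struct.unpack(">I", struct.pack(">f", x))[0]

bits = fp32_bits(1.2345)                 # arbitrary sample value
sign     = bits >> 31                    # 1 sign bit (same in FP32 and BF16)
exponent = (bits >> 23) & 0xFF           # 8 exponent bits (same in FP32 and BF16)
mantissa = bits & 0x7FFFFF               # 23 mantissa bits in FP32; BF16 keeps only the top 7
print(f"sign={sign} exponent={exponent:08b} mantissa={mantissa:023b}")
print(f"BF16 by simple truncation (top 16 bits): {bits >> 16:016b}")
```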
High-precision data formats such as FP32 are beneficial to the convergence of artificial intelligence models, while low-precision data formats such as BF16 are beneficial to improving model training or inference speed.
For example, some operators perform inference using high-precision data formats such as FP32, while other operators perform inference using low-precision data formats such as BF16, so transferring data between operators that use different precisions requires precision conversion. For floating point numbers, precision conversion means rounding off or appending mantissa bits to change the data precision. The numbers of significant digits of the values involved in data processing may differ; once the number of significant digits to be kept is determined, the excess trailing digits are discarded according to a certain rule. This process of discarding the excess digits is called "digit reduction", and the rule it follows is called a "digit reduction rule", that is, a rounding rule.
For example, the input data may be in a low-precision format such as BF16, the computation may need to be performed in a high-precision format such as FP32, and the output may again need to be in a low-precision format such as BF16, so multiple precision conversions are required.
In order to balance the trade-off between computation speed and computation accuracy, some artificial intelligence chips store data in registers in an intermediate-precision data format. For example, in a memory space such as HBM (High Bandwidth Memory), data is stored in a low-precision format such as BF16 to be compatible with input/output data, and the BF16 format is expanded to an intermediate-precision data format such as BF20 in a register, thereby improving the precision of computations on BF16 data. In another specific example, data is stored in the BF19 format in a storage space such as HBM to be compatible with input/output data, and the BF19 format is extended to BF24 in a register, thereby improving the precision of computations on BF19 data.
Because a model involves a variety of precisions, a precision conversion operator is needed to convert between different precisions, for example from high-precision FP32 to low-precision BF16 following certain rounding rules.
Intermediate-precision data formats such as BF20 and BF24 in registers can improve the precision of the computation process, but introduce a certain error into precision conversion. For example, in some training tasks, such errors may cause results completely different from those of the same task using a different precision conversion scheme, making comparison or parameter tuning impossible.
At least one embodiment of the present disclosure provides a precision conversion method, a data processing method, a processor, an electronic device, and a non-transitory computer readable storage medium. The precision conversion method comprises the following steps: obtaining an input parameter, wherein the input parameter is a floating point number; and performing precision conversion on the input parameter at least twice according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain an output parameter, wherein the number of mantissa bits of the output parameter is a, and a is a positive integer; and wherein, in each precision conversion, the number of mantissa bits of the parameter before the precision conversion is larger than that of the parameter after the precision conversion.
In at least one embodiment, when multiple precision conversions cannot be avoided, the precision conversion method performs the multiple precision conversions in software according to the (a+1)-th bit in the mantissa portion of the input parameter, so the original precision conversion operators used for the multiple precision conversions can still be reused while the result of the multiple precision conversions is the same as the result of one precision conversion. Errors are thereby eliminated at the lowest cost, no new precision conversion operator free of error relative to one precision conversion needs to be provided, and research and development cost is reduced.
For example, in the present disclosure, "multiple precision conversions" refers to converting a high-precision input parameter (e.g., FP32) into a low-precision output parameter (e.g., BF16) through at least two precision conversions, whereas "one precision conversion" refers to directly converting the high-precision input parameter (e.g., FP32) into the low-precision output parameter (e.g., BF16). In the precision conversion method provided in at least one embodiment of the present disclosure, the output parameter obtained through multiple precision conversions can be identical to the output parameter obtained through one precision conversion, and the error is eliminated.
Embodiments of the present disclosure will be described in detail below with reference to the attached drawings, but the present disclosure is not limited to these specific embodiments.
Fig. 1 is a schematic flow chart of a precision conversion method provided in at least one embodiment of the present disclosure.
As shown in fig. 1, the precision conversion method provided in at least one embodiment of the present disclosure includes steps S10 to S20.
For example, in step S10, input parameters are acquired.
For example, the input parameter is a floating point number, for example, a floating point number in a high-precision format, such as a floating point number with a large number of mantissa bits like FP32.
For example, the input parameter is a parameter that requires precision conversion, for example, an input parameter whose mantissa bits need to be reduced according to a rounding rule.
For example, in step S20, the input parameter is subjected to at least two precision conversions according to the a+1 th bit in the mantissa portion of the input parameter, to obtain the output parameter.
For example, the mantissa number of the output parameter is a, which is a positive integer.
For example, in each precision conversion, the number of mantissa bits of the parameter before the conversion is larger than the number of mantissa bits of the parameter after the conversion; that is, the precision of the parameter is reduced by each precision conversion.
For example, the at least two precision conversions include a first precision conversion and a second precision conversion. Of course, the disclosure is not limited thereto, and more precision conversions may also be implemented in a similar manner to two precision conversions, and will not be described again here.
For example, step S20 may include: performing a first precision conversion on the input parameter according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain an intermediate parameter, wherein the number of mantissa bits of the intermediate parameter is smaller than that of the input parameter but larger than a; and performing a second precision conversion on the intermediate parameter according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain the output parameter.
For example, in some embodiments, performing the first precision conversion on the input parameter according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain the intermediate parameter may include: in response to the (a+1)-th bit being a first value, setting the (b+1)-th bit in the mantissa portion of the input parameter to the first value to obtain a first intermediate parameter, and rounding the first intermediate parameter in combination with a first rounding rule to obtain the intermediate parameter; wherein b is the number of mantissa bits of the intermediate parameter, b is a positive integer greater than a, and the first rounding rule is a predefined digit reduction rule for converting the precision format of the input parameter into the precision format of the intermediate parameter.
For example, in some embodiments, performing the first precision conversion on the input parameter according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain the intermediate parameter may further include: in response to the (a+1)-th bit being a second value and the (a+2)-th bit through the c-th bit in the mantissa portion of the input parameter not all being the first value, setting the (b+1)-th bit in the mantissa portion of the input parameter to the first value, setting at least one of the (a+2)-th bit through the b-th bit in the mantissa portion of the input parameter to the second value to obtain a second intermediate parameter, and rounding the second intermediate parameter in combination with the first rounding rule to obtain the intermediate parameter; and in response to the (a+1)-th bit being the second value and the (a+2)-th bit through the c-th bit in the mantissa portion of the input parameter all being the first value, rounding the input parameter in combination with the first rounding rule to obtain the intermediate parameter; wherein b is the number of mantissa bits of the intermediate parameter, c is the number of mantissa bits of the input parameter, b is a positive integer greater than a but less than c, and the first rounding rule is a predefined digit reduction rule for converting the precision format of the input parameter into the precision format of the intermediate parameter.
For example, the first rounding rule indicates that, in response to bit b+1 in the mantissa portion of the input parameter being a first value, bits 1 through b in the mantissa portion of the input parameter are directly taken as the mantissa portion of the intermediate parameter.
In the present disclosure, the number of mantissa bits of the input parameter is, for example, c, where c is a positive integer greater than b; the 1st bit in the mantissa portion refers to the most significant bit, the c-th bit refers to the least significant bit, and the same bit ordering applies to the mantissa portions of the other parameters.
For example, the first rounding rule specifies how the precision format of the input parameter (with c mantissa bits) is rounded to the precision format of the intermediate parameter (with b mantissa bits) in the various possible numerical cases.
For example, the first value may be 0 and the second value may be 1.
For example, the first rounding rule may provide that, when the (b+1)-th bit in the mantissa portion of the input parameter is 0, the (b+1)-th through c-th bits in the mantissa portion of the input parameter are discarded, and the 1st through b-th bits are directly taken as the mantissa portion of the intermediate parameter. For example, the first rounding rule may also specify a rounding rule for when the (b+1)-th bit in the mantissa portion of the input parameter is 1, e.g., when the (b+1)-th bit in the mantissa portion of the input parameter is 1 and the (b+2)-th through c-th bits are all 0, the 1st through b-th bits are taken directly as the mantissa portion of the intermediate parameter.
The first rounding rule may be set and adjusted as desired, and may also specify different rounding operations as desired, which is not particularly limited by the present disclosure. For different rounding operations, the specific process of performing precision conversion in combination with the first rounding rule may be adapted with reference to the above embodiment, and is not described in detail here.
For example, in the first precision conversion, the (b+1)-th bit in the mantissa portion of the input parameter is adjusted according to the value of the (a+1)-th bit in the mantissa portion of the input parameter, to obtain the intermediate parameter.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 0, the (b+1)-th bit in the mantissa portion of the input parameter is set to 0 to obtain the first intermediate parameter, and the first intermediate parameter is then rounded in combination with the first rounding rule, that is, the 1st through b-th bits of the first intermediate parameter are kept as the mantissa portion of the intermediate parameter.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 1 and the (a+2)-th through c-th bits in the mantissa portion of the input parameter are not all 0, the (b+1)-th bit in the mantissa portion of the input parameter is set to 0 and at least one of the (a+2)-th through b-th bits in the mantissa portion of the input parameter is set to 1 to obtain the second intermediate parameter, and the second intermediate parameter is then rounded in combination with the first rounding rule, that is, the 1st through b-th bits of the second intermediate parameter are kept as the mantissa portion of the intermediate parameter.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 1 and the (a+2)-th through c-th bits in the mantissa portion of the input parameter are all 0, the rounding operation is performed directly on the input parameter in combination with the first rounding rule; for example, since the (b+1)-th bit of the input parameter is 0 (here, b+1 is greater than or equal to a+2), the 1st through b-th bits of the input parameter are kept as the mantissa portion of the intermediate parameter.
The number of mantissa bits of the intermediate parameter is larger than that of the output parameter, so directly discarding the (b+1)-th bit through the lowest bit of the mantissa portion of the input parameter does not introduce a conversion error. The first rounding rule specifies that, when the (b+1)-th bit in the mantissa portion of the input parameter is 0, the 1st through b-th bits in the mantissa portion of the input parameter are directly used as the mantissa portion of the intermediate parameter, that is, the (b+1)-th through c-th bits are discarded. Therefore, after the (b+1)-th bit in the mantissa portion of the input parameter is set to 0, the rounding operation can be performed in combination with the first rounding rule, the 1st through b-th bits in the mantissa portion of the first intermediate parameter or the second intermediate parameter are kept as the mantissa portion of the intermediate parameter, and the exponent of the input parameter is used as the exponent of the intermediate parameter. For example, the rounding operation in the first precision conversion may be performed directly by an existing precision conversion operator after a portion of the data bits in the mantissa portion of the input parameter has been adjusted.
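A minimal sketch of the first precision conversion described above, written in Python and operating on a bare c-bit mantissa, may look as follows; the function name, the use of 0/1 as the first/second values, and the choice of the (a+2)-th bit as the bit to set are assumptions for illustration, and sign, exponent, and special values are not handled.

```python
def adjust_and_first_convert(mantissa: int, a: int, b: int, c: int) -> int:
    # Sketch of the first precision conversion on a bare c-bit mantissa
    # (first value = 0, second value = 1); sign, exponent and special values are not handled.
    bit_a1 = (mantissa >> (c - (a + 1))) & 1         # the (a+1)-th bit, counted from the most significant bit
    sticky = mantissa & ((1 << (c - (a + 1))) - 1)   # bits a+2 .. c
    if bit_a1 == 0:
        mantissa &= ~(1 << (c - (b + 1)))            # set bit b+1 to 0, so the first rounding rule truncates
    elif sticky:                                     # bit a+1 is 1 and bits a+2..c are not all 0
        mantissa &= ~(1 << (c - (b + 1)))            # set bit b+1 to 0 ...
        mantissa |= 1 << (c - (a + 2))               # ... and set one bit among bits a+2..b to 1
    # otherwise (bit a+1 is 1, bits a+2..c all 0): no adjustment is needed, bit b+1 is already 0
    return mantissa >> (c - b)                       # the first rounding rule then keeps bits 1..b
```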
For example, operators in a network model (e.g., a neural network) generally refer to the basic mathematical operations used in the network layers of the model. These operators are used to build the various layers and components of the network, enabling the transfer, conversion, and computation of data. They are the basic building blocks of the network model and define the structure and operational flow of the model, including input, output, and intermediate computation. In the model, the connection relationships between operators form a directed graph reflecting the order in which the different operations in the model are computed. By combining these operators, a complex and powerful neural network model can be constructed for processing various complex tasks and data. A precision conversion operator may be an operator for performing precision conversion operations according to certain rounding rules; the precision conversion operator may have its own input and output tensors, and possibly some adjustable parameters for controlling its behavior.
After the intermediate parameter is obtained, the second precision conversion is performed on the intermediate parameter.
For example, in some embodiments, performing the second precision conversion on the intermediate parameter according to the (a+1)-th bit in the mantissa portion of the input parameter to obtain the output parameter may include: rounding the intermediate parameter according to the (a+1)-th bit in the mantissa portion of the input parameter in combination with a second rounding rule to obtain the output parameter, wherein the second rounding rule is a predefined digit reduction rule for converting the precision format of the intermediate parameter into the precision format of the output parameter. For example, in some embodiments, rounding the intermediate parameter according to the (a+1)-th bit in the mantissa portion of the input parameter in combination with the second rounding rule to obtain the output parameter may include: in response to the (a+1)-th bit in the mantissa portion of the intermediate parameter being a first value, taking the 1st through a-th bits in the mantissa portion of the intermediate parameter directly as the mantissa portion of the output parameter; in response to the (a+1)-th bit in the mantissa portion of the intermediate parameter being a second value and at least one of the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter being the second value, discarding the (a+1)-th through b-th bits in the mantissa portion of the intermediate parameter and performing a carry operation on the 1st through a-th bits in the mantissa portion of the intermediate parameter to obtain the mantissa portion of the output parameter; and in response to the (a+1)-th bit in the mantissa portion of the intermediate parameter being the second value and the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter all being the first value, determining the mantissa portion of the output parameter according to the a-th bit in the mantissa portion of the intermediate parameter.
For example, in some embodiments, determining the mantissa portion of the output parameter according to the a-th bit in the mantissa portion of the intermediate parameter may include: in response to the a-th bit in the mantissa portion of the intermediate parameter being the first value, taking the 1st through a-th bits in the mantissa portion of the intermediate parameter directly as the mantissa portion of the output parameter; and in response to the a-th bit in the mantissa portion of the intermediate parameter being the second value, discarding the (a+1)-th through b-th bits in the mantissa portion of the intermediate parameter and performing a carry operation on the 1st through a-th bits in the mantissa portion of the intermediate parameter to obtain the mantissa portion of the output parameter.
For example, the second rounding rule may provide the following. When the (a+1)-th bit in the mantissa portion of the intermediate parameter is 0, the (a+1)-th through b-th bits in the mantissa portion of the intermediate parameter are discarded, and the 1st through a-th bits are directly used as the mantissa portion of the output parameter. When the (a+1)-th bit in the mantissa portion of the intermediate parameter is 1: if any one of the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter is 1, a carry operation is performed on the 1st through a-th bits in the mantissa portion of the intermediate parameter, that is, 1 is added to the a-th bit, to obtain the mantissa portion of the output parameter; if the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter are all 0 and the a-th bit in the mantissa portion of the intermediate parameter is 0, the (a+1)-th through b-th bits in the mantissa portion of the intermediate parameter are discarded and the 1st through a-th bits are directly used as the mantissa portion of the output parameter; and if the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter are all 0 and the a-th bit in the mantissa portion of the intermediate parameter is 1, a carry operation is performed on the 1st through a-th bits in the mantissa portion of the intermediate parameter, that is, 1 is added to the a-th bit, to obtain the mantissa portion of the output parameter.
Of course, the second rounding rule may also set a different rounding operation as desired, which is not particularly limited by the present disclosure. According to different rounding operations, the process of performing precision conversion specifically in combination with the second rounding rule may be adaptively adjusted with reference to the above embodiment, which is not described herein in detail.
Since the values of the data bits that can cause the precision conversion errors have already been set in the first precision conversion, the rounding operation can be performed directly in conjunction with the second rounding rule in the second precision conversion.
For example, as described above, when the (a+1)-th bit in the mantissa portion of the intermediate parameter is 0, the 1st through a-th bits are kept as the mantissa portion of the output parameter; when the (a+1)-th bit in the mantissa portion of the intermediate parameter is 1, if any one of the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter is 1, a carry operation is performed to obtain the mantissa portion of the output parameter, and if the (a+2)-th through b-th bits in the mantissa portion of the intermediate parameter are all 0, the corresponding rounding operation is performed according to the a-th bit in the mantissa portion of the intermediate parameter to obtain the mantissa portion of the output parameter. Further, the exponent of the intermediate parameter is taken as the exponent of the output parameter. For example, the rounding operation in the second precision conversion may be performed directly by an existing precision conversion operator.
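A corresponding minimal sketch of the second precision conversion (the second rounding rule) on a bare b-bit intermediate mantissa might look as follows; the function name is an assumption, and a possible carry out of the a kept bits into the exponent is not shown.

```python
def second_convert(inter_mantissa: int, a: int, b: int) -> int:
    # Sketch of the second precision conversion (second rounding rule) on a bare b-bit
    # intermediate mantissa; a carry propagating into the exponent is not shown.
    keep = inter_mantissa >> (b - a)                      # bits 1..a
    guard = (inter_mantissa >> (b - a - 1)) & 1           # bit a+1
    sticky = inter_mantissa & ((1 << (b - a - 1)) - 1)    # bits a+2..b
    if guard and (sticky or (keep & 1)):                  # carry, or tie decided by bit a
        keep += 1
    return keep
```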
For example, in at least one embodiment of the present disclosure, the output parameter is the same as a reference output parameter obtained by performing one precision conversion on the input parameter according to a third rounding rule; that is, there is no error between the output parameter obtained by the precision conversion method provided in at least one embodiment of the present disclosure and the reference output parameter obtained by one precision conversion. For example, the number of mantissa bits of the reference output parameter is a, and the third rounding rule is a digit reduction rule that directly converts the precision format of the input parameter into the precision format of the output parameter.
For example, the third rounding rule specifies that, when the (a+1)-th bit in the mantissa portion of the input parameter is the first value, e.g., 0, the 1st through a-th bits are kept as the mantissa portion of the output parameter; and when the (a+1)-th bit in the mantissa portion of the input parameter is the second value, e.g., 1, if the (a+2)-th through c-th bits in the mantissa portion of the input parameter are not all 0, a carry operation is performed to obtain the mantissa portion of the output parameter, if the (a+2)-th through c-th bits in the mantissa portion of the input parameter are all 0 and the a-th bit is also 0, the 1st through a-th bits are kept as the mantissa portion of the output parameter, and if the (a+2)-th through c-th bits in the mantissa portion of the input parameter are all 0 and the a-th bit is 1, a carry operation is performed to obtain the mantissa portion of the output parameter.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 0, the rounding operation performed according to the third rounding rule is to directly keep the 1st through a-th bits. In the precision conversion method provided in at least one embodiment of the present disclosure, the (b+1)-th bit is set to 0, so the 1st through b-th bits are directly kept in the first precision conversion and the 1st through a-th bits are directly kept in the second precision conversion. Therefore, the output parameter obtained by performing multiple precision conversions on the input parameter according to the precision conversion method provided in at least one embodiment of the present disclosure is identical to the reference output parameter obtained by performing one precision conversion on the input parameter according to the third rounding rule.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 1 and the (a+2)-th through c-th bits are not all 0, the rounding operation performed according to the third rounding rule is to keep the 1st through a-th bits and add 1 to the a-th bit. In the precision conversion method provided in at least one embodiment of the present disclosure, the (b+1)-th bit is set to 0 and at least one of the (a+2)-th through b-th bits in the mantissa portion of the input parameter is set to 1, so the 1st through b-th bits are directly kept in the first precision conversion, and in the second precision conversion, because the (a+2)-th through b-th bits are not all 0, the 1st through a-th bits are kept and 1 is added to the a-th bit. Therefore, the output parameter and the reference output parameter obtained in the precision conversion method provided in at least one embodiment of the present disclosure are identical.
For example, when the (a+1)-th bit in the mantissa portion of the input parameter is 1 and the (a+2)-th through c-th bits are all 0, the rounding operation performed according to the third rounding rule is to keep the 1st through a-th bits and to decide whether to perform a carry operation according to the a-th bit. In the precision conversion method provided in at least one embodiment of the present disclosure, the 1st through b-th bits are directly kept as the intermediate parameter in the first precision conversion, and since the (a+2)-th through b-th bits are all 0, the 1st through a-th bits are kept in the second precision conversion and whether to perform the carry operation is decided according to the a-th bit. Therefore, the output parameter and the reference output parameter obtained by the precision conversion method provided in at least one embodiment of the present disclosure are identical.
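The three cases above can also be checked mechanically. The following sketch reuses the adjust_and_first_convert and second_convert functions from the sketches above and compares the two-step result against a one-step reference implementing the third rounding rule; the toy widths a=3, b=6, c=10 are assumptions chosen only to keep the exhaustive check small.

```python
def one_step_reference(mantissa: int, a: int, c: int) -> int:
    # Reference: the third rounding rule applied directly from c mantissa bits to a mantissa bits.
    keep = mantissa >> (c - a)
    guard = (mantissa >> (c - a - 1)) & 1
    sticky = mantissa & ((1 << (c - a - 1)) - 1)
    if guard and (sticky or (keep & 1)):
        keep += 1
    return keep

# Exhaustive comparison over toy widths (a=3, b=6, c=10), reusing the sketches above.
a, b, c = 3, 6, 10
for m in range(1 << c):
    two_step = second_convert(adjust_and_first_convert(m, a, b, c), a, b)
    assert two_step == one_step_reference(m, a, c)
print("two-step result equals one-step result for all", 1 << c, "mantissa values")
```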
Therefore, in the precision conversion method provided in at least one embodiment of the present disclosure, individual data bits in the mantissa portion of the original high-precision data are adjusted according to the rounding rule, and the original precision conversion operators used for the multiple precision conversions can still be reused, so that the result of the multiple precision conversions is the same as the result of one precision conversion. Errors are thereby eliminated at the lowest cost, no new precision conversion operator free of error relative to the result of one precision conversion needs to be provided, and research and development cost is reduced.
Of course, the precision conversion operator or the precision conversion method provided in at least one embodiment of the present disclosure may also be used to convert a low-precision input parameter into a high-precision output parameter, for example, by padding zeros into the low-order bits of the mantissa portion of the low-precision input parameter until the number of mantissa bits reaches the number of mantissa bits specified for the output parameter.
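A minimal sketch of such a low-to-high conversion by zero padding (the function name and sample values are assumptions introduced for illustration):

```python
def extend_mantissa(mantissa: int, low_bits: int, high_bits: int) -> int:
    # Sketch: extend a low_bits-wide mantissa to high_bits mantissa bits by padding
    # zeros in the low-order positions; the represented value is unchanged.
    return mantissa << (high_bits - low_bits)

print(f"{extend_mantissa(0b1011001, 7, 11):011b}")  # e.g. a BF16 mantissa (7 bits) extended to BF20 (11 bits)
```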
Two specific examples of precision conversion using the precision conversion method provided by at least one embodiment of the present disclosure are described below.
For example, in one embodiment, the precision type of the input parameter obtained is FP32, whose mantissa number is 23, as shown in table 1. In the first precision conversion, FP32 is converted to BF20, whose mantissa digit is 11, as shown in table 1. In the second precision conversion, BF20 is converted to BF16, whose mantissa number is 7, as shown in table 1.
Table 2 shows a third rounding rule for directly converting FP32 to BF16.
Table 2 Third rounding rule
As shown in Table 2, the third rounding rule specifies that: if the 8th bit in the mantissa portion of FP32 is 0, the first 7 bits of the mantissa portion are directly kept as the mantissa portion of the BF16 value after precision conversion; if the 8th bit in the mantissa portion of FP32 is 1 and the 9th through 23rd bits are not all 0, the first 7 bits of the mantissa portion are kept and 1 is added to the 7th bit to form the mantissa portion of the BF16 value after precision conversion; and if the 8th bit in the mantissa portion of FP32 is 1 and the 9th through 23rd bits are all 0, then when the 7th bit is 0, the 1st through 7th bits are directly kept as the mantissa portion of the BF16 value after precision conversion, and when the 7th bit is 1, the first 7 bits are kept and 1 is added to the 7th bit to form the mantissa portion of the BF16 value after precision conversion.
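For illustration, the third rounding rule of Table 2 can be sketched as a one-step FP32-to-BF16 conversion on a full 32-bit word; the function name and sample value are assumptions, and special values (NaN, infinity) and exponent overflow are not handled in this sketch.

```python
import struct

def fp32_to_bf16_direct(x: float) -> int:
    # Sketch of the third rounding rule of Table 2: FP32 -> BF16 in one step.
    # Special values (NaN, Inf) and exponent overflow are ignored in this sketch.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    keep = bits >> 16                        # sign, 8 exponent bits, and mantissa bits 1..7
    bit8 = (bits >> 15) & 1                  # the 8th mantissa bit
    sticky = bits & 0x7FFF                   # mantissa bits 9..23
    if bit8 and (sticky or (keep & 1)):      # cases 2 and 4 of Table 2: add 1 to the 7th bit
        keep += 1                            # a carry may ripple into the exponent
    return keep & 0xFFFF                     # cases 1 and 3 of Table 2: plain truncation

print(f"{fp32_to_bf16_direct(3.1415926):016b}")
```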
Table 3 illustrates specific operations in a multiple precision conversion process provided by at least one embodiment of the present disclosure.
Table 3 Conversion operations of the multiple precision conversions
For example, as shown in Table 3, in the first precision conversion, if the 8th bit in the mantissa portion of the input parameter is 0, the 12th bit in the mantissa portion of the input parameter is set to 0 to obtain the first intermediate parameter; then, in combination with the first rounding rule, since the 12th bit in the mantissa portion of the first intermediate parameter is 0, the 12th through 23rd bits in the mantissa portion of the first intermediate parameter are truncated, and the first 11 bits in the mantissa portion of the first intermediate parameter are used as the intermediate parameter in the BF20 precision format.
In the second precision conversion, since the 8th bit in the mantissa portion of the intermediate parameter is 0, the first 7 bits of the mantissa portion of the intermediate parameter are taken as the mantissa portion of the output parameter.
Referring to Table 2, the above two-step precision conversion is identical to the precision conversion operation of case 1 in Table 2, and thus the output parameter and the reference output parameter are completely identical.
For example, as shown in Table 3, in the first precision conversion, if the 8th bit in the mantissa portion of the input parameter is 1 and the 9th through 23rd bits of the input parameter are not all 0, the 12th bit in the mantissa portion of the input parameter is set to 0 and any one of the 9th through 11th bits in the mantissa portion of the input parameter is set to 1, obtaining the second intermediate parameter; then, in combination with the first rounding rule, since the 12th bit in the mantissa portion of the second intermediate parameter is 0, the 12th through 23rd bits in the mantissa portion are truncated, and the first 11 bits in the mantissa portion of the second intermediate parameter are used as the intermediate parameter in the BF20 precision format.
In the second precision conversion, since the 8th bit in the mantissa portion of the intermediate parameter is 1 and the 9th through 11th bits are not all 0, the first 7 bits of the mantissa portion of the intermediate parameter are kept and a carry operation is performed (1 is added to the 7th bit), resulting in the mantissa portion of the output parameter.
Referring to Table 2, the above two-step precision conversion is identical to the precision conversion operation of case 2 in Table 2, and thus the output parameter and the reference output parameter are completely identical.
For example, as shown in Table 3, in the first precision conversion, if the 8th bit in the mantissa portion of the input parameter is 1 and the 9th through 23rd bits of the input parameter are all 0, then, in combination with the first rounding rule, since the 12th bit in the mantissa portion of the input parameter is 0, the 12th through 23rd bits in the mantissa portion are truncated, and the first 11 bits in the mantissa portion are used as the intermediate parameter in the BF20 precision format.
In the second precision conversion, since the 9th through 11th bits are all 0, the specific rounding operation is determined according to the 7th bit.
For example, if the 7th bit is 0, the first 7 bits of the mantissa portion of the intermediate parameter are used as the mantissa portion of the output parameter. Referring to Table 2, the above two-step precision conversion is identical to the precision conversion operation of case 3 in Table 2, and thus the output parameter and the reference output parameter are completely identical.
For example, if the 7th bit is 1, the first 7 bits of the mantissa portion of the intermediate parameter are kept and a carry operation is performed (1 is added to the 7th bit), resulting in the mantissa portion of the output parameter. Referring to Table 2, the above two-step precision conversion is identical to the precision conversion operation of case 4 in Table 2, and thus the output parameter and the reference output parameter are completely identical.
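The complete two-step path described above can be sketched on a full FP32 word as follows; the total widths are left as parameters so that the same sketch also covers the FP32 to BF24 to BF19 example below. The function name and sample value are assumptions, special values and exponent overflow are not handled, and the final assertion reuses the fp32_to_bf16_direct sketch given after Table 2.

```python
import struct

def fp32_two_step(x: float, inter_total: int, out_total: int) -> int:
    # Sketch of the two-step conversion of Table 3 on a full FP32 word.
    # inter_total/out_total are total bit widths (1 sign + 8 exponent + mantissa),
    # e.g. 20 and 16 for FP32 -> BF20 -> BF16. NaN/Inf and exponent overflow are ignored.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    a, b, c = out_total - 9, inter_total - 9, 23         # mantissa widths of output, intermediate, FP32
    mant = bits & 0x7FFFFF
    sticky_mask = (1 << (c - (a + 1))) - 1               # mantissa bits a+2..c
    # Software adjustment according to mantissa bit a+1 (before the first conversion).
    if (mant >> (c - (a + 1))) & 1 == 0:
        mant &= ~(1 << (c - (b + 1)))                    # case 1: clear bit b+1
    elif mant & sticky_mask:
        mant &= ~(1 << (c - (b + 1)))                    # case 2: clear bit b+1 ...
        mant |= 1 << (c - (a + 2))                       # ... and set one of bits a+2..b to 1
    adjusted = (bits & ~0x7FFFFF) | mant
    inter = adjusted >> (32 - inter_total)               # first conversion now truncates to the intermediate format
    keep = inter >> (inter_total - out_total)            # second conversion: second rounding rule
    guard = (inter >> (inter_total - out_total - 1)) & 1
    sticky = inter & ((1 << (inter_total - out_total - 1)) - 1)
    if guard and (sticky or (keep & 1)):
        keep += 1
    return keep & ((1 << out_total) - 1)

x = 2.7182818                                            # arbitrary sample value
assert fp32_two_step(x, 20, 16) == fp32_to_bf16_direct(x)  # same result as the one-step sketch above
```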
For example, in another embodiment, the precision type of the input parameter obtained is FP32, whose mantissa number is 23, as shown in table 1. In the first precision conversion, FP32 is converted to BF24, whose mantissa number is 15, as shown in table 1. In the second precision conversion, BF24 is converted to BF19, whose mantissa number is 10, as shown in table 1.
Table 4 shows a third rounding rule for directly converting FP32 to BF19.
Table 4 Third rounding rule
As shown in Table 4, the third rounding rule specifies that: if the 11th bit in the mantissa portion of FP32 is 0, the first 10 bits of the mantissa portion are directly kept as the mantissa portion of the BF19 value after precision conversion; if the 11th bit in the mantissa portion of FP32 is 1 and the 12th through 23rd bits are not all 0, the first 10 bits of the mantissa portion are kept and 1 is added to the 10th bit to form the mantissa portion of the BF19 value after precision conversion; and if the 11th bit in the mantissa portion of FP32 is 1 and the 12th through 23rd bits are all 0, then when the 10th bit is 0, the 1st through 10th bits are directly kept as the mantissa portion of the BF19 value after precision conversion, and when the 10th bit is 1, the first 10 bits are kept and 1 is added to the 10th bit to form the mantissa portion of the BF19 value after precision conversion.
Table 5 illustrates specific operations in a multiple precision conversion process provided by at least one embodiment of the present disclosure.
Table 5 Conversion operations of the multiple precision conversions
For example, as shown in Table 5, in the first precision conversion, if the 11th bit in the mantissa portion of the input parameter is 0, the 16th bit in the mantissa portion of the input parameter is set to 0 to obtain the first intermediate parameter; then, in combination with the first rounding rule, since the 16th bit in the mantissa portion of the first intermediate parameter is 0, the 16th through 23rd bits in the mantissa portion of the first intermediate parameter are truncated, and the first 15 bits in the mantissa portion of the first intermediate parameter are used as the intermediate parameter in the BF24 precision format.
In the second precision conversion, since the 11th bit in the mantissa portion of the intermediate parameter is 0, the first 10 bits of the mantissa portion of the intermediate parameter are taken as the mantissa portion of the output parameter.
Referring to Table 4, the above two-step precision conversion is identical to the precision conversion operation of case 1 in Table 4, and thus the output parameter and the reference output parameter are completely identical.
For example, as shown in Table 5, in the first precision conversion, if the 11th bit in the mantissa portion of the input parameter is 1 and the 12th through 23rd bits of the input parameter are not all 0, the 16th bit in the mantissa portion of the input parameter is set to 0 and any one of the 12th through 15th bits in the mantissa portion of the input parameter is set to 1 (for example, the 12th bit is set to 1), obtaining the second intermediate parameter; then, in combination with the first rounding rule, since the 16th bit in the mantissa portion of the second intermediate parameter is 0, the 16th through 23rd bits in the mantissa portion are truncated, and the first 15 bits in the mantissa portion of the second intermediate parameter are used as the intermediate parameter in the BF24 precision format.
In the second precision conversion, since the 11th bit in the mantissa portion of the intermediate parameter is 1 and the 12th through 15th bits are not all 0, the first 10 bits of the mantissa portion of the intermediate parameter are kept and a carry operation is performed (1 is added to the 10th bit), resulting in the mantissa portion of the output parameter.
Referring to Table 4, the above two-step precision conversion is identical to the precision conversion operation of case 2 in Table 4, and thus the output parameter and the reference output parameter are completely identical.
For example, as shown in Table 5, in the first precision conversion, if the 11th bit in the mantissa portion of the input parameter is 1 and the 12th through 23rd bits of the input parameter are all 0, then, in combination with the first rounding rule, since the 16th bit in the mantissa portion of the input parameter is 0, the 16th through 23rd bits in the mantissa portion are truncated, and the first 15 bits in the mantissa portion are used as the intermediate parameter in the BF24 precision format.
In the second precision conversion, since the 12th through 15th bits are all 0, the specific rounding operation is determined according to the 10th bit.
For example, if the 10th bit is 0, the first 10 bits of the mantissa portion of the intermediate parameter are used as the mantissa portion of the output parameter. Referring to Table 4, the above two-step precision conversion is identical to the precision conversion operation of case 3 in Table 4, and thus the output parameter and the reference output parameter are completely identical.
For example, if the 10th bit is 1, the first 10 bits of the mantissa portion of the intermediate parameter are kept and a carry operation is performed (1 is added to the 10th bit), resulting in the mantissa portion of the output parameter. Referring to Table 4, the above two-step precision conversion is identical to the precision conversion operation of case 4 in Table 4, and thus the output parameter and the reference output parameter are completely identical.
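Under the same assumptions, the hypothetical fp32_two_step sketch given after Table 3 covers this second example by simply changing the total widths to 24 and 19:

```python
x = 2.7182818                          # arbitrary sample value
bf19 = fp32_two_step(x, 24, 19)        # FP32 -> BF24 -> BF19, total widths 24 and 19
print(f"{bf19:019b}")
```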
In the above embodiments, individual data bits in the mantissa portion of the original high-precision data are adjusted according to the (a+1)-th bit in the mantissa portion of the input parameter, and the original precision conversion operators can then still be used to perform the multiple precision conversions. The original precision conversion operators used for the multiple precision conversions are thus reused, and the result of the multiple precision conversions is the same as the result of one precision conversion, so that errors are eliminated at the lowest cost, no new precision conversion operator free of error relative to the result of one precision conversion needs to be provided, and research and development cost is reduced.
At least one embodiment of the present disclosure further provides an accuracy conversion device. Fig. 2 is a schematic block diagram of an accuracy conversion device according to at least one embodiment of the present disclosure.
As shown in fig. 2, the precision conversion apparatus 100 includes an acquisition module 101 and a precision conversion module 102.
For example, the acquisition module 101 is configured to acquire input parameters. For example, the input parameter is a floating point number.
For example, the precision conversion module 102 is configured to perform precision conversion on the input parameter at least twice according to the (a+1) th bit in the mantissa portion of the input parameter, to obtain an output parameter, where the mantissa bit number of the output parameter is a, and a is a positive integer.
For example, in each precision conversion, the mantissa number of the parameter before the precision conversion is larger than the mantissa number of the parameter after the precision conversion.
For example, the output parameters may be output directly from the accuracy conversion device 100, and transmitted to other components that need to use the output parameters, such as a storage device or other computing device.
For example, the acquisition module 101 and the precision conversion module 102 may include code and programs stored in a memory and may be implemented, for example, by a central processing unit (CPU) or another form of processing unit having data processing and/or instruction execution capabilities, such as a general purpose processor, a single-chip microcomputer, a microprocessor, a digital signal processor, a dedicated image processing chip, or a field programmable logic array, which executes the code and programs to implement some or all of the functions of the acquisition module 101 and the precision conversion module 102 described above. For example, the acquisition module 101 and the precision conversion module 102 may be one circuit board or a combination of circuit boards for realizing the functions described above. In an embodiment of the present disclosure, the circuit board or the combination of circuit boards may include: (1) one or more processors; (2) one or more non-transitory memories coupled to the processors; and (3) firmware stored in the memories and executable by the processors.
It should be noted that the acquisition module 101 may be used to implement step S10 shown in fig. 1, and the precision conversion module 102 may be used to implement step S20 shown in fig. 1. Thus, for a specific description of the functions that can be implemented by the acquisition module 101 and the precision conversion module 102, reference may be made to the descriptions of steps S10 to S20 in the above embodiment of the precision conversion method, and repeated description is omitted. In addition, the precision conversion apparatus 100 can achieve technical effects similar to those of the precision conversion method described above, which are not described in detail here.
It should be noted that, in at least one embodiment of the present disclosure, the precision conversion apparatus 100 may include more or less circuits or units, and the connection relationship between the respective circuits or units is not limited, and may be determined according to actual requirements. The specific configuration of each circuit or unit is not limited, and may be constituted by an analog device according to the circuit principle, a digital chip, or other applicable means.
For example, in some embodiments, the precision conversion apparatus 100 may be, for example, a precision conversion operator, which is an operation or function that changes the numerical precision during data processing or computation. The precision conversion operator may be used to convert a value from one precision to another to meet different computing or presentation requirements. For example, the accuracy conversion apparatus 100 may be implemented in hardware, software, or a combination of hardware and software, which is not particularly limited by the present disclosure.
At least one embodiment of the present disclosure also provides a data processing method. Fig. 3 is a schematic flow chart of a data processing method according to at least one embodiment of the present disclosure.
As shown in fig. 3, the data processing method at least includes steps S30 to S40.
In step S30, a precision conversion instruction is received. For example, the precision conversion instruction includes an input parameter.
In step S40, the precision conversion instruction is executed using the precision conversion unit after the precision conversion instruction is parsed.
For example, the precision conversion unit is configured to perform precision conversion at least twice.
For example, executing the precision conversion instruction using the precision conversion unit in step S40 may include: and performing precision conversion on the input parameter at least twice according to the (a+1) th bit in the mantissa part of the input parameter to obtain an output parameter, wherein the mantissa number of the output parameter is a, a is a positive integer, and in each precision conversion, the mantissa number of the parameter before the precision conversion is larger than the mantissa number of the parameter after the precision conversion.
For example, the data processing method provided by at least one embodiment of the present disclosure may be applied to the processor shown in fig. 4.
For example, in a data processing method provided in at least one embodiment of the present disclosure, a precision conversion instruction is provided, and the precision conversion instruction includes the input parameter. For example, after receiving the precision conversion instruction, the processor parses the precision conversion instruction, for example, decodes the precision conversion instruction, generates a microinstruction, and sends the microinstruction to an instruction distribution unit; the instruction distribution unit sends the microinstruction to a corresponding dispatch queue according to the type of the microinstruction; and in response to the microinstruction, after the input parameter is ready, the precision conversion unit reads the input parameter and performs the operations associated with the precision conversion instruction.
As for a specific procedure of executing the precision conversion instruction using the precision conversion unit, reference may be made to steps S10 to S20 in the precision conversion method as described above, and the repetition is omitted.
The data processing method provided in at least one embodiment of the present disclosure may achieve similar technical effects as those of the precision conversion method described above, and will not be described herein.
Fig. 4 is a schematic block diagram of a processor provided in at least one embodiment of the present disclosure. As shown in fig. 4, the processor 200 includes an instruction parsing unit 201 and an accuracy conversion unit 202.
For example, the instruction parsing unit 201 is configured to receive and parse a precision conversion instruction, where the precision conversion instruction includes an input parameter.
For example, the precision conversion unit 202 executes the precision conversion method according to any of the embodiments of the present disclosure after the instruction analysis unit analyzes the precision conversion instruction.
Specifically, upper-layer software based on the processor (such as an AI application, an HPC application, or a scientific computing application) may send a precision conversion instruction for computation processing to the processor (such as a CPU or a GPU) through a uniformly packaged function library, and the precision conversion instruction may carry the input parameter. When the processor receives the precision conversion instruction, the instruction parsing unit 201 parses the precision conversion instruction to obtain the input parameter, and the processor schedules the precision conversion unit to execute the precision conversion task on the input parameter. For example, after parsing the precision conversion instruction, the processor may store the input parameter carried in the precision conversion instruction in a register or a memory, so that the input parameter can be obtained from the register or the memory when the precision conversion unit 202 performs the computation processing.
Regarding the specific procedure of executing the precision conversion instruction using the precision conversion unit 202, reference may be made to steps S10 to S20 in the precision conversion method as described above, and the repetition is omitted.
The precision conversion method, the precision conversion apparatus, or the data processing method provided in at least one embodiment of the present disclosure may be applied to different systems or devices, such as the electronic device 300 shown in fig. 5. The electronic device 300 may be a terminal, such as a mobile phone terminal, a tablet computer, a notebook computer, an AR device, a VR device, or a vehicle-mounted terminal, or may be a server or the like. The precision conversion method provided in at least one embodiment of the present disclosure may be applied to scenarios involving precision conversion in the electronic device 300, such as CPU, high performance computing (HPC), and artificial intelligence (AI) scenarios, for example in a scalar computing unit, a vector computing unit, a matrix computing unit, or a tensor computing unit. Of course, the disclosure is not limited thereto, and any scenario, device, or apparatus involving floating point precision conversion may employ the precision conversion method or apparatus provided by at least one embodiment of the disclosure.
In some embodiments, the precision conversion apparatus provided in at least one embodiment of the present disclosure may be a chip, for example, a System-on-a-Chip (SoC). The system-on-chip includes a processor, which may be a single-core processor or a multi-core processor, a memory, I/O interfaces, and the like. The processor may load data and application programs in the memory and then process the data, for example, performing computation processing involving precision conversion.
For example, referring to the electronic device 300 shown in fig. 5, when implementing the accuracy conversion method provided by at least one embodiment of the present disclosure, the processing device 301 performs various suitable actions and processes according to non-transitory computer readable instructions stored in a memory to implement the accuracy conversion method. For example, the input parameters are stored in a memory, such as a register, a cache or a memory, and when the accuracy conversion is required, the processing device 301 performs at least two accuracy conversions on the input parameters according to step S20 in the accuracy conversion method according to at least one embodiment of the present disclosure, to obtain the output parameters. The output parameters may be transferred again to the corresponding operators, etc. for use, or transferred to a memory (e.g., high bandwidth memory) for storage.
In addition, it should be noted that, in the precision conversion method or the precision conversion device provided in at least one embodiment of the present disclosure, the parameter type is a floating point number, and the specific physical meaning of the parameter type may be different according to the application scenario. For example, the accuracy conversion method provided in at least one embodiment of the present disclosure may be applied to the fields of speech processing, image processing, text processing, video processing, and the like.
For example, in the field of speech processing, the parameters may be any parameters used, input, generated in tasks such as feature extraction, speech enhancement, speech recognition, etc., such as speech feature vectors, filtering parameters, etc.
For example, in the field of image processing, the parameters may be any parameters used, input, and generated in tasks such as image preprocessing, feature extraction, image segmentation, and object detection, such as image feature vectors, parameters used, input, and generated by various edge detection operators (such as Sobel operator, canny operator, prewitt operator, and the like), image filtering operators (such as gaussian filtering, median filtering, bilateral filtering, and the like), and morphological operators (such as erosion, dilation, open operation, and closed operation, and the like).
For example, in the field of text processing, the parameters may be any parameters used, input, or generated in tasks such as text classification, sentiment analysis, and text generation, for example, semantic feature vectors of text, and the like.
For example, in the field of video processing, the parameters may be parameters in the field of image processing as described above, or parameters used, input, or generated that are specific to video processing, such as those of optical flow operators (for estimating motion between video frames), object tracking operators (for tracking specific objects in a video), and the like.
Of course, the disclosure is not limited thereto; any other application scenario or field that requires precision conversion of floating point numbers may employ the precision conversion method described in at least one embodiment of the disclosure, which will not be described in detail herein.
Fig. 5 is a schematic block diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 5, the electronic device 300 is suitable for implementing, for example, a data processing method or an accuracy conversion method provided by an embodiment of the present disclosure. It should be noted that the components of the electronic device 300 shown in fig. 5 are exemplary only and not limiting, and that the electronic device 300 may have other components as desired for practical applications.
As shown in fig. 5, the electronic device 300 may include a processing apparatus (e.g., central processing unit, graphics processor, etc.) 301 that may perform various suitable actions and processes in accordance with non-transitory computer readable instructions stored in memory to achieve various functions.
For example, the computer readable instructions, when executed by the processing device 301, may perform one or more steps of a data processing method or an accuracy conversion method according to any of the embodiments described above. For a detailed description of the processing procedure of the precision conversion method or the data processing method, reference may be made to the related descriptions in the corresponding method embodiments, which are not repeated here.
For example, the memory may comprise any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, a Random Access Memory (RAM) 303 and/or a cache memory (cache), and the computer readable instructions may be loaded from the storage device 308 into the RAM 303 for execution. The non-volatile memory may include, for example, a Read-Only Memory (ROM) 302, a hard disk, an Erasable Programmable Read-Only Memory (EPROM), a portable Compact Disc Read-Only Memory (CD-ROM), a USB memory, a flash memory, and the like. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
For example, the processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
In general, the following devices may be connected to the I/O interface 305: an input device 306 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 308 including, for example, a magnetic tape, a hard disk, a flash memory, etc.; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate wirelessly or by wire with other electronic devices to exchange data. While fig. 5 shows the electronic device 300 with various means, it is to be understood that not all of the illustrated means are required to be implemented or provided, and the electronic device 300 may alternatively be implemented with or provided with more or fewer means. For example, the processing device 301 may control other components in the electronic device 300 to perform desired functions. The processing device 301 may be a Central Processing Unit (CPU), a Tensor Processing Unit (TPU), a Graphics Processing Unit (GPU), or the like, having data processing capabilities and/or program execution capabilities. The Central Processing Unit (CPU) may be of an X86, ARM, or RISC-V architecture, or the like. The GPU may be integrated directly into the SoC, directly onto the motherboard, or built into the north bridge chip of the motherboard.
Fig. 6 is a schematic diagram of a non-transitory computer readable storage medium according to at least one embodiment of the present disclosure. For example, as shown in fig. 6, the storage medium 400 may be a non-transitory computer-readable storage medium, and one or more computer-readable instructions 401 may be stored non-transitorily on the storage medium 400. For example, the computer readable instructions 401, when executed by a processor, may perform one or more steps of the precision conversion method or the data processing method described above.
For example, the storage medium 400 may be applied to the electronic device 300 described above, and for example, the storage medium 400 may include the storage device 308 in the electronic device 300.
For example, the storage device may comprise any combination of one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, a Random Access Memory (RAM) and/or a cache memory (cache), and the like. The non-volatile memory may include, for example, a Read-Only Memory (ROM), a hard disk, an Erasable Programmable Read-Only Memory (EPROM), a portable Compact Disc Read-Only Memory (CD-ROM), a USB memory, a flash memory, and the like. One or more computer readable instructions that can be executed by a processor to implement various functions may be stored on the computer-readable storage medium. Various applications and various data, etc. may also be stored in the storage medium.
For example, the storage medium may include a memory card of a smart phone, a memory component of a tablet computer, a hard disk of a personal computer, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a portable Compact Disc Read-Only Memory (CD-ROM), a flash memory, or any combination of the foregoing, as well as other suitable storage media.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The names of the units do not, in some cases, constitute a limitation of the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combinations of the features described above, but also covers other technical solutions formed by any combination of the features described above or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by substituting the features described above with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
For the purposes of this disclosure, the following points are also noted:
(1) The drawings of the embodiments of the present disclosure relate only to the structures related to the embodiments of the present disclosure, and other structures may refer to the general design.
(2) The embodiments of the present disclosure and features in the embodiments may be combined with each other to arrive at a new embodiment without conflict.
The foregoing is merely a specific embodiment of the disclosure, but the scope of the disclosure is not limited thereto and should be determined by the scope of the claims.