Disclosure of Invention
One or more embodiments of the present specification provide a tag circling method, apparatus, and computer device based on a large language model. The method can implement automatic tag circling and improve the efficiency of tag circling work.
In one aspect, a tag circling method based on a large language model is provided, the method comprising:
acquiring tag circling information, wherein the tag circling information is used for indicating description information of an object to be circled;
Processing the tag circling information through a target large language model to generate a target circling strategy matched with the tag circling information, wherein the target circling strategy comprises at least one circling tag of the object to be circled and a tag value corresponding to the circling tag;
and performing tag circling based on the circling tag and the tag value to obtain a target group, wherein the target group comprises at least one target circled object indicated by the tag circling information.
In the embodiments of the present specification, an automated tag circling approach is provided: the user only needs to provide description information about the object to be circled, namely the tag circling information; the large language model can understand the circling intention in the tag circling information and automatically generate the target circling strategy, that is, automatically generate the circling tag and the tag value used for subsequently circling the target objects, thereby achieving the goal of circling target objects with a single sentence. The user does not need to manually select tags and tag values during circling, which reduces the complexity and difficulty of circling objects and improves the efficiency of tag circling work. Moreover, because the target circling strategy for performing circling is generated by the large language model, whose data processing capability is utilized to match circling tags, the user does not need to master the huge number of tags in the tag library, which further reduces the difficulty of circling.
With reference to the first aspect, in some implementations of the first aspect, the processing, by the target large language model, the tag circling information to generate a target circling strategy that matches the tag circling information includes: inputting the tag circling information into a first large language model for circling intention recognition, and determining a target object category of the object to be circled indicated by the tag circling information; performing knowledge recall based on the target object category to obtain at least one target circling knowledge, wherein the target circling knowledge is used for indicating tag information and circling strategy information related to the object to be circled; and inputting the target circling knowledge and the tag circling information into a second large language model to generate the target circling strategy matched with the tag circling information.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the inputting the target circling knowledge and the tag circling information into a second large language model to generate the target circling strategy matched with the tag circling information includes: splicing the target circling knowledge and the tag circling information to obtain target prompt information; and inputting the target prompt information into the second large language model to obtain the target circling strategy output by the second large language model.
In the embodiments of the present specification, two large language models are provided: the first large language model is used for circling intention recognition, and the second large language model is used for circling strategy generation. The user's circling intention is thus deeply understood from the tag circling information, which improves the generation accuracy of the subsequent circling strategy; moreover, recognizing the circling intention first narrows the generation range of the circling strategy, which further improves the generation efficiency of the circling strategy.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the performing knowledge recall based on the target object category to obtain at least one target circling knowledge includes: extracting keywords from the tag circling information to obtain target keywords, wherein the target keywords are words related to the object to be circled; and performing knowledge recall based on the target keywords and the target object category to obtain at least one target circling knowledge.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the performing knowledge recall based on the target keywords and the target object category to obtain at least one target circling knowledge includes: selecting candidate tag knowledge under the target object category from a tag knowledge base, wherein tag information under each candidate object category is stored in the tag knowledge base; performing matching from the candidate tag knowledge based on the target keywords to obtain first circling knowledge; performing matching from a sample knowledge base based on the target keywords to obtain second circling knowledge, wherein correspondences between historical circling information and historical circling strategies are stored in the sample knowledge base; and determining the first circling knowledge and the second circling knowledge as the recalled target circling knowledge.
In the embodiments of the present specification, in order to further improve the generation accuracy of the circling strategy, a tag knowledge base and a sample knowledge base are pre-constructed, so that similar tag knowledge and historical samples can be recalled from them based on the target keywords and the circling intention extracted from the tag circling information, providing richer prompt information for circling strategy generation and thereby further improving its accuracy.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the processing, by the target large language model, the tag circling information to generate a target circling strategy that matches the tag circling information includes: processing the tag circling information through the target large language model to generate at least two candidate circling strategies; determining the usage popularity of each candidate circling strategy in a target circling scenario; and determining the candidate circling strategy with the highest usage popularity as the target circling strategy.
In the embodiments of the present specification, considering that multiple candidate circling strategies may be generated at the same time, the candidate circling strategies can be screened based on their usage popularity, so that the finally determined target circling strategy not only conforms to the tag circling information but also meets the requirements of the target circling scenario; this avoids outputting circling strategies that cannot be adopted in a real circling process and improves the determination accuracy of the target circling strategy.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the performing tag circling based on the circling tag and the tag value to obtain a target group includes: displaying the generated target circling strategy; and if a confirmation operation on the target circling strategy is received, performing tag circling based on the target circling strategy to obtain the target group. The method further comprises: if a modification operation on the target circling strategy is received, acquiring the modified target circling strategy, and performing tag circling based on the modified target circling strategy to obtain the target group.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the method further includes: determining a target prediction loss of the target large language model based on a difference between the target circling strategy and the modified target circling strategy; and updating model parameters of the target large language model based on the target prediction loss.
In the embodiments of the present specification, in order to avoid an invalid circling process, the target circling strategy may be displayed after it is generated, so that the user can finally decide whether the subsequent circling process should proceed. If the target circling strategy differs from the user's expectation, the user can directly modify it, and the subsequent circling process is then performed according to the modified circling strategy; a simple confirmation operation thus improves the accuracy of subsequent circling. In addition, after the user modifies the circling strategy, the target large language model can be re-optimized based on the difference between the modified target circling strategy and the strategy generated by the model, which improves the accuracy of the circling strategies generated by the large language model thereafter.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the method further includes: acquiring sample circling information and a sample circling category indicated by the sample circling information; inputting the sample circling information into the first large language model for circling intention recognition to obtain a predicted circling category output by the first large language model; and training the first large language model based on a difference between the predicted circling category and the sample circling category.
In the embodiments of the present specification, in order to equip the first large language model with the circling intention recognition function, the model is trained in advance with sample circling information and the corresponding sample circling categories, so that the first large language model learns the capability of extracting the circling category.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the method further includes: acquiring sample circling information, sample circling knowledge, and a sample circling strategy, wherein the sample circling strategy is the circling strategy corresponding to the sample circling information and the sample circling knowledge; splicing the sample circling information and the sample circling knowledge into sample prompt information; inputting the sample prompt information into the second large language model to obtain a predicted circling strategy output by the second large language model; and training the second large language model based on a difference between the predicted circling strategy and the sample circling strategy.
In the embodiments of the present specification, in order to equip the second large language model with the circling strategy generation capability, the model is trained in advance with sample circling information, sample circling knowledge, and sample circling strategies, so that the second large language model learns the capability of generating circling strategies according to prompt information.
In a second aspect, there is provided a tag circling apparatus based on a large language model, the apparatus comprising:
a first acquisition module, configured to acquire tag circling information, wherein the tag circling information is used for indicating description information of an object to be circled;
a first generation module, configured to process the tag circling information through a target large language model to generate a target circling strategy matched with the tag circling information, wherein the target circling strategy comprises at least one circling tag of the object to be circled and a tag value corresponding to the circling tag;
and a first circling module, configured to perform tag circling based on the circling tag and the tag value to obtain a target group, wherein the target group comprises at least one target circled object indicated by the tag circling information.
With reference to the second aspect, in certain implementations of the second aspect, the first generation module is further configured to: input the tag circling information into a first large language model for circling intention recognition, and determine a target object category of the object to be circled indicated by the tag circling information; perform knowledge recall based on the target object category to obtain at least one target circling knowledge, wherein the target circling knowledge is used for indicating tag information and circling strategy information related to the object to be circled; and input the target circling knowledge and the tag circling information into a second large language model to generate the target circling strategy matched with the tag circling information.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the first generation module is further configured to: splice the target circling knowledge and the tag circling information to obtain target prompt information; and input the target prompt information into the second large language model to obtain the target circling strategy output by the second large language model.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the first generation module is further configured to: extract keywords from the tag circling information to obtain target keywords, wherein the target keywords are words related to the object to be circled; and perform knowledge recall based on the target keywords and the target object category to obtain at least one target circling knowledge.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the first generation module is further configured to: select candidate tag knowledge under the target object category from a tag knowledge base, wherein tag information under each candidate object category is stored in the tag knowledge base; perform matching from the candidate tag knowledge based on the target keywords to obtain first circling knowledge; perform matching from a sample knowledge base based on the target keywords to obtain second circling knowledge, wherein correspondences between historical circling information and historical circling strategies are stored in the sample knowledge base; and determine the first circling knowledge and the second circling knowledge as the recalled target circling knowledge.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the first generation module is further configured to: process the tag circling information through the target large language model to generate at least two candidate circling strategies; determine the usage popularity of each candidate circling strategy in a target circling scenario; and determine the candidate circling strategy with the highest usage popularity as the target circling strategy.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the first circling module is configured to: display the generated target circling strategy; and if a confirmation operation on the target circling strategy is received, perform tag circling based on the target circling strategy to obtain the target group. The apparatus further comprises: a second circling module, configured to, if a modification operation on the target circling strategy is received, acquire the modified target circling strategy and perform tag circling based on the modified target circling strategy to obtain the target group.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the apparatus further includes: a first determining module, configured to determine a target prediction loss of the target large language model based on a difference between the target circling strategy and the modified target circling strategy; and an updating module, configured to update model parameters of the target large language model based on the target prediction loss.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the apparatus further includes: a second acquisition module, configured to acquire sample circling information and the sample circling category indicated by the sample circling information; a second determining module, configured to input the sample circling information into the first large language model for circling intention recognition to obtain a predicted circling category output by the first large language model; and a first training module, configured to train the first large language model based on a difference between the predicted circling category and the sample circling category.
With reference to the second aspect and the foregoing implementations, in some implementations of the second aspect, the apparatus further includes: a third acquisition module, configured to acquire sample circling information, sample circling knowledge, and a sample circling strategy, wherein the sample circling strategy is the circling strategy corresponding to the sample circling information and the sample circling knowledge; a splicing module, configured to splice the sample circling information and the sample circling knowledge into sample prompt information; a third determining module, configured to input the sample prompt information into the second large language model to obtain a predicted circling strategy output by the second large language model; and a second training module, configured to train the second large language model based on a difference between the predicted circling strategy and the sample circling strategy.
In another aspect, a computer device is provided, the computer device comprising a processor and a memory, wherein at least one program is stored in the memory, and the at least one program is loaded and executed by the processor to implement the tag circling method based on a large language model as described in the above aspect.
In another aspect, a computer-readable storage medium is provided, in which at least one program is stored, the at least one program being loaded and executed by a processor to implement the tag circling method based on a large language model as described in the above aspect.
In another aspect, a computer program product is provided, comprising instructions which, when run on a computer or processor, cause the computer or processor to perform the tag circling method based on a large language model in the first aspect or any one of the possible implementations of the first aspect.
Detailed Description
Technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the accompanying drawings. In the description of one or more embodiments of the present specification, "a plurality" means two or more. The terms "first", "second", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more such features. The term "and/or" merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that: A exists alone, A and B exist together, or B exists alone.
In a tag circling scenario, a user often needs to manually select the specific tag values corresponding to the objects to be circled. In a huge tag library, manually searching for and selecting the required target tags is time-consuming and labor-intensive, and it also requires the user to be proficient with the various tags in the tag library; as the number of tags gradually increases, the complexity and difficulty of tag circling work rise accordingly.
To address the difficulty of manually selecting tags in the existing tag circling process, the embodiments of the present specification provide an automated tag circling process that can circle target objects with a single sentence. FIG. 1 is a schematic flow diagram of a tag circling method based on a large language model provided in one or more embodiments of the present specification. The tag circling method based on a large language model provided in the embodiments of the present specification can be applied to a computer device.
Illustratively, as shown in FIG. 1, the method 100 includes:
Step 102, acquiring tag circling information, wherein the tag circling information is used for indicating description information of an object to be circled.
In the related art, when performing tag circling, a user needs to manually select specific tags and tag values; when the user does not know the tags and tag values corresponding to the objects to be circled, circling errors often occur and the circling difficulty increases. In order to reduce the circling difficulty and improve the circling accuracy, the embodiments of the present specification utilize the strong natural language understanding capability of large language models: the user only needs to provide description information about the object to be circled, and the large language model automatically generates the target circling strategy, that is, automatically generates the circling tag and the tag value used for subsequently circling the target objects. The user does not need to manually select tags and tag values, which reduces the complexity and difficulty of circling objects.
The improved tag circling process is as follows: the user inputs a piece of description information related to the object to be circled, where the description information may include object features of the object to be circled and may also include requirements on the circling mode, for example, taking the intersection of two types of objects to be circled; the computer device then acquires this tag circling information and generates a matched target circling strategy based on it.
Optionally, the tag circling information may be obtained by acquiring text information input by the user, or by acquiring a voice circling instruction from the user, in which case the tag circling information is the text converted from the voice circling instruction.
Step 104, processing the tag circling information through the target large language model to generate a target circling strategy matched with the tag circling information, wherein the target circling strategy comprises at least one circling tag of the object to be circled and a tag value corresponding to the circling tag.
Because the tag circling information does not directly include the tag values corresponding to the object to be circled, while the subsequent circling process still needs to match circled objects according to circling tags and tag values, a target large language model is deployed in the computer device to understand the circling strategy indicated by the tag circling information input by the user. The target large language model performs natural language understanding and processing on the tag circling information to generate a target circling strategy matched with the tag circling information, and the target circling strategy comprises at least one circling tag of the object to be circled and the tag value corresponding to the circling tag.
For example, if the input tag circling information is "I want to know the merchandise of barbecue stores", the target circling strategy generated based on the tag circling information may be "store-barbecue + merchandise-all". From this target circling strategy, it can be seen that the circling tags are "store" and "merchandise", the tag value corresponding to the "store" tag is "barbecue", and the tag value corresponding to the "merchandise" tag is "all".
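By way of a non-limiting illustration, the strategy text in the above example can be parsed into circling tags and tag values as sketched below; the "tag-value" syntax joined by "+" is taken from the example only and is not a prescribed format of the method.

```python
# Sketch: parse the example strategy text "store-barbecue + merchandise-all"
# into a mapping of circling tag -> tag value. The "+"/"-" syntax is taken
# from the example only and is not a fixed format of the method.
def parse_strategy(text: str) -> dict[str, str]:
    strategy = {}
    for part in text.split("+"):
        tag, _, value = part.strip().partition("-")
        strategy[tag.strip()] = value.strip()
    return strategy

print(parse_strategy("store-barbecue + merchandise-all"))
# {'store': 'barbecue', 'merchandise': 'all'}
```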
Step 106, performing tag circling based on the circling tags and tag values to obtain a target group, wherein the target group comprises at least one target circled object indicated by the tag circling information.
After generating the target circling strategy, the computer device may directly perform tag circling on the object database based on the circling tags and tag values, so as to obtain the target circled objects indicated by the tag circling information and combine them into a target group.
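A minimal sketch of this circling step is given below, assuming the object database can be queried as an in-memory list of tagged records; the record fields and the strategy representation are illustrative assumptions only.

```python
# Sketch: select the objects whose tag values satisfy the circling strategy.
# The in-memory "object database" and its fields are assumptions for illustration only.
strategy = {"store": "barbecue", "merchandise": "all"}  # circling tag -> tag value

object_db = [
    {"id": 1, "store": "barbecue", "merchandise": "skewers"},
    {"id": 2, "store": "dessert", "merchandise": "cake"},
]

def matches(obj: dict, strategy: dict) -> bool:
    # "all" places no restriction on that tag
    return all(value == "all" or obj.get(tag) == value for tag, value in strategy.items())

target_group = [obj for obj in object_db if matches(obj, strategy)]
# target_group contains only the barbecue store record
```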
Optionally, after the target group is generated, a subsequent delivery task may be performed on it; for example, after a group of specific APP users is circled, coupons may be issued to the APP users in the target group. Alternatively, an insight report corresponding to the target group may be generated according to the target circled objects in the target group.
In summary, the embodiments of the present specification provide an automated tag circling method: the user only needs to provide description information about the object to be circled, namely the tag circling information; the large language model can understand the circling intention in the tag circling information and automatically generate the target circling strategy, that is, automatically generate the circling tag and the tag value used for subsequently circling the target objects, thereby achieving the goal of circling target objects with a single sentence. The user does not need to manually select tags and tag values during circling, which reduces the complexity and difficulty of circling objects. Moreover, because the target circling strategy for performing circling is generated by the large language model, whose data processing capability is utilized to match circling tags, the user does not need to master the huge number of tags in the tag library, which further reduces the difficulty of circling.
In order to improve the generation accuracy of the circling strategy, two large language models are provided to perform circling intention recognition and circling strategy generation respectively, and tag knowledge recall is introduced after circling intention recognition, so that the information required for circling strategy generation is further enriched and the generation accuracy of the circling strategy is improved.
FIG. 2 is a schematic flow diagram of another tag circling method based on a large language model provided in one or more embodiments of the present specification. The tag circling method based on a large language model provided in the embodiments of the present specification can be applied to a computer device.
Illustratively, as shown in FIG. 2, the method 200 includes:
Step 202, acquiring tag circling information, wherein the tag circling information is used for indicating description information of an object to be circled.
For the implementation of step 202, reference may be made to step 102; details are not repeated in this embodiment.
Step 204, inputting the tag circling information into the first large language model for circling intention recognition, and determining the target object category of the object to be circled indicated by the tag circling information.
The target large language model comprises a first large language model and a second large language model, wherein the first large language model is used for circling intention recognition and the second large language model is used for circling strategy generation. Illustratively, the first large language model may be a general-purpose LLM, and the second large language model may be ChatGPT 3.5, another LLM, or the like.
Since different types of objects may have similar tags, for example, a barbecue merchant and a barbecue commodity both correspond to a "barbecue" tag, the first large language model is provided to perform natural language understanding on the input tag circling information and recognize the user's circling intention, in order to improve the generation accuracy of the subsequent circling strategy and avoid circling the wrong objects. Accordingly, the tag circling information is input into the first large language model for circling intention recognition, and the model outputs the target object category of the object to be circled indicated by the tag circling information.
Specifically, a plurality of candidate object categories are preset; the first large language model identifies, for each candidate object category, the candidate probability that the object category indicated in the tag circling information is that category, and the target object category of the object to be circled is then determined based on these candidate probabilities. Considering circling scenarios in which multiple classes of objects exist, one or more target object categories may be determined.
By way of example, the candidate object categories may include commodities, merchants, users, and so on; the candidate object categories may differ between application scenarios.
Optionally, when determining the target object category based on the candidate probabilities, a probability threshold may be set, and the candidate object categories whose candidate probability is greater than the probability threshold are determined as target object categories; alternatively, the candidate object category with the largest candidate probability is directly determined as the target object category.
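The two selection rules can be sketched as follows, assuming the circling intention recognition output has already been converted into per-category candidate probabilities; the probability values and the threshold below are illustrative assumptions.

```python
# Sketch: choose target object categories from candidate probabilities.
# The probability values are placeholders; in practice they would come from
# the first large language model's circling intention recognition output.
candidate_probs = {"merchant": 0.08, "commodity": 0.87, "user": 0.05}

# Rule 1: keep every candidate category above a probability threshold.
THRESHOLD = 0.5
target_categories = [c for c, p in candidate_probs.items() if p > THRESHOLD]

# Rule 2: keep only the single most probable candidate category.
top_category = max(candidate_probs, key=candidate_probs.get)

print(target_categories, top_category)  # ['commodity'] commodity
```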
Step 206, performing knowledge recall based on the target object category to obtain at least one target circling knowledge, wherein the target circling knowledge is used for indicating tag information and circling strategy information related to the object to be circled.
In order to improve the accuracy of strategy generation, a tag knowledge base and a sample knowledge base are constructed in advance. The tag knowledge base stores tag information of various candidate objects, and the sample knowledge base stores historical circling information and the tag combinations used for it (that is, historical circling strategies). By performing knowledge recall from the tag knowledge base and the sample knowledge base, relevant tag information and relevant circling strategy information can be supplemented for subsequent strategy generation.
Specifically, the tag knowledge base may be constructed as follows: tag information of the users, merchants, and commodities on the platform is collected, where the tag information includes tag names, tag descriptions, and tag values; the tag descriptions are summarized by an LLM to extract the effective information; and a structured tag knowledge base, for example a table-form tag knowledge base, is finally formed.
The sample knowledge base may be constructed as follows: some historical circled groups and the corresponding circling requirement information are collected, the circling requirement information is taken as historical circling information, the group tags of the historical circled groups are taken as the tag combinations, and the two are stored in the sample knowledge base in association.
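By way of a non-limiting illustration, entries in the two knowledge bases could take the following form; the field names and values are assumptions for illustration and do not prescribe an actual schema.

```python
# Sketch: illustrative entries for the two pre-built knowledge bases.
# Field names and values are assumptions for illustration, not a prescribed schema.

# Tag knowledge base: structured tag information per candidate object category.
tag_knowledge_base = [
    {"category": "merchant", "tag_name": "store", "tag_description": "cuisine type of the store",
     "tag_values": ["barbecue", "dessert", "hotpot"]},
    {"category": "commodity", "tag_name": "merchandise", "tag_description": "commodity type",
     "tag_values": ["skewers", "cake", "all"]},
]

# Sample knowledge base: historical circling information -> historical circling strategy.
sample_knowledge_base = [
    {"history_circling_info": "circle all barbecue stores in the city",
     "history_circling_strategy": {"store": "barbecue"}},
]
```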
Further, during the tag circling application, knowledge recall is performed from the pre-constructed knowledge bases based on the target object category to obtain at least one target circling knowledge related to target circling strategy generation, wherein the target circling knowledge is used for indicating tag information and circling strategy information related to the object to be circled.
In order to improve the accuracy of the recalled knowledge and reduce the recall of weakly related circling knowledge, keyword extraction is further performed on the tag circling information to obtain target keywords related to the object to be circled, so that knowledge recall is performed jointly based on the target keywords and the target object category to obtain at least one target circling knowledge.
Considering that there are two knowledge bases storing different information, the specific recall manners also differ. Accordingly, in an illustrative example, obtaining at least one target circling knowledge based on the target keywords and the target object category may further include steps 206A to 206D.
Step 206A, selecting candidate tag knowledge under the target object category from a tag knowledge base, wherein tag information under each candidate object category is stored in the tag knowledge base.
Because the tag knowledge base stores tag information of various candidate objects, in order to recall target circling knowledge related to the tag circling information, candidate tag knowledge under the same category is first recalled from the tag knowledge base based on the target object category, and the target circling knowledge is then screened from it.
For example, if the tag knowledge base stores tag information corresponding to three candidate object categories, namely users, commodities, and merchants, and the target object category is commodity, the tag information under the commodity category is first recalled based on the target object category and used as the candidate tag knowledge.
Step 206B, performing matching from the candidate tag knowledge based on the target keywords to obtain first circling knowledge.
The recalled candidate tag knowledge is only tag knowledge under the same category and still contains much tag information unrelated to the tag circling information; in order to remove the unrelated tag information, matching is performed from the candidate tag knowledge based on the target keywords, and the first circling knowledge is obtained by screening. Specifically, during matching, a vector model may be used to encode the target keywords and each piece of candidate tag knowledge; by comparing the vector similarity between the vector of the target keywords and the vector of each piece of candidate tag knowledge, the candidate tag knowledge with higher vector similarity to the target keywords, for example higher than a similarity threshold, is recalled as the first circling knowledge.
Step 206C, performing matching from a sample knowledge base based on the target keywords to obtain second circling knowledge, wherein correspondences between historical circling information and historical circling strategies are stored in the sample knowledge base.
In addition to matching from the tag knowledge base with the target keywords, matching from the sample knowledge base with the target keywords is also required to obtain second circling knowledge similar to the target keywords, where the second circling knowledge is circling strategy information related to the tag circling information. Specifically, during matching, a vector model may be used to encode the target keywords and each historical sample in the sample knowledge base; by comparing the vector similarity between the vector of the target keywords and the vector of each historical sample, the historical samples with higher vector similarity to the target keywords, for example higher than a similarity threshold, are recalled as the second circling knowledge.
The historical samples are the historical circling information and historical circling strategies stored in association in the sample knowledge base.
Step 206D, determining the first circling knowledge and the second circling knowledge as the recalled target circling knowledge.
After the first circling knowledge (tag information) is recalled from the tag knowledge base and the second circling knowledge (historical samples embodying circling strategies) is recalled from the sample knowledge base, the two are jointly determined as the recalled target circling knowledge, which supplements knowledge information for the subsequent generation of the circling strategy.
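A minimal sketch of this two-way recall is given below; the `embed` function is a hypothetical stand-in for the vector model actually used (here a toy character-count embedding so that the example runs), and the knowledge entries and threshold are illustrative assumptions.

```python
# Sketch: recall circling knowledge by vector similarity against the target keywords.
# `embed` is a placeholder for a real vector model; here it is a toy
# character-count embedding just so the example runs end to end.
import math

def embed(text: str) -> dict[str, float]:
    vec: dict[str, float] = {}
    for ch in text.lower():
        vec[ch] = vec.get(ch, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(query_keywords: str, entries: list[dict], text_key: str, threshold: float) -> list[dict]:
    q = embed(query_keywords)
    return [e for e in entries if cosine(q, embed(e[text_key])) > threshold]

# Candidate tag knowledge under the target category and historical samples (toy data).
candidate_tag_knowledge = [{"tag_name": "store", "tag_description": "cuisine type of the store, e.g. barbecue"}]
history_samples = [{"history_circling_info": "circle all barbecue stores",
                    "history_circling_strategy": "store-barbecue"}]

first_knowledge = recall("barbecue store", candidate_tag_knowledge, "tag_description", threshold=0.3)
second_knowledge = recall("barbecue store", history_samples, "history_circling_info", threshold=0.3)
target_circling_knowledge = first_knowledge + second_knowledge
```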
Step 208, inputting the target circling knowledge and the tag circling information into the second large language model to generate a target circling strategy matched with the tag circling information.
The second large language model deployed in the computer device is used for circling strategy generation. Accordingly, the tag circling information and the recalled target circling knowledge can be input into the second large language model, which analyzes the circling intention in the tag circling information and the circling tag information provided by the target circling knowledge to generate a target circling strategy matched with the tag circling information. The target circling strategy may be a tag combination including at least one circling tag and its corresponding tag value.
In an illustrative example, step 208 may also include step 208A and step 208B.
Step 208A, splicing the target circling knowledge and the tag circling information to obtain target prompt information.
Step 208B, inputting the target prompt information into the second large language model to obtain the target circling strategy output by the second large language model.
Because the second large language model is a prompt-learning model, when generating the target circling strategy, the target circling knowledge and the tag circling information are first spliced into target prompt information, and the target prompt information is then input into the second large language model to prompt it to generate the target circling strategy related to the tag circling information.
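A sketch of this splicing step is given below; the prompt template wording and the `call_second_llm` helper are hypothetical placeholders, since the embodiments do not fix a concrete prompt format or model interface.

```python
# Sketch: splice recalled circling knowledge and the tag circling information into
# a prompt for the second large language model. The template wording and the
# call_second_llm() helper are illustrative assumptions.
def build_prompt(circling_info: str, circling_knowledge: list[str]) -> str:
    knowledge_block = "\n".join(f"- {k}" for k in circling_knowledge)
    return (
        "Known tag knowledge and historical circling samples:\n"
        f"{knowledge_block}\n"
        f"Circling requirement: {circling_info}\n"
        "Output a circling strategy as 'tag-value' pairs joined by '+'."
    )

def call_second_llm(prompt: str) -> str:
    # Placeholder for the actual second large language model invocation.
    return "store-barbecue + merchandise-all"

prompt = build_prompt(
    "I want to know the merchandise of barbecue stores",
    ["tag 'store': cuisine type of a store", "history: 'circle barbecue stores' -> store-barbecue"],
)
target_strategy = call_second_llm(prompt)
```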
Step 210, performing tag circling based on the circling tags and tag values to obtain a target group, wherein the target group comprises at least one target circled object indicated by the tag circling information.
In order to avoid circling errors, the target circling strategy may be displayed after it is generated, so that the user can confirm whether it is correct. Accordingly, if a confirmation operation on the target circling strategy is received, tag circling is performed based on the circling tags and tag values in the target circling strategy to obtain the target group; otherwise, if a modification operation on the target circling strategy is received, the modified target circling strategy is acquired, and tag circling is performed based on the modified target circling strategy to obtain the target group.
FIG. 3 is a schematic diagram of a tag circling process provided in one or more embodiments of the present specification. As shown in FIG. 3, after the computer device acquires the tag circling information 301, the tag circling information 301 is input into the first large language model 302 for circling intention recognition to obtain the target object category 303 of the object to be circled indicated by the tag circling information 301; meanwhile, target keywords 304 related to the object to be circled are extracted from the tag circling information 301, and tag knowledge recall is performed from the tag knowledge base and the sample knowledge base based on the target object category 303 and the target keywords 304 to obtain target circling knowledge 305. Further, the target circling knowledge 305 and the tag circling information 301 are spliced into target prompt information 306, the target prompt information 306 is input into the second large language model 307, and a target circling strategy 308 is generated under the prompt of the target prompt information 306; object circling is then performed according to the target circling strategy 308 to generate the target group 309.
In the embodiments of the present specification, two large language models are provided: the first large language model is used for circling intention recognition, and the second large language model is used for circling strategy generation. The user's circling intention is thus deeply understood from the tag circling information, which improves the generation accuracy of the subsequent circling strategy; moreover, recognizing the circling intention first narrows the generation range of the circling strategy, which further improves the generation efficiency of the circling strategy.
In order to further improve the generation accuracy of the circling strategy, a tag knowledge base and a sample knowledge base are also pre-constructed, so that similar tag knowledge and historical samples can be recalled from them based on the target keywords and the circling intention extracted from the tag circling information, providing richer prompt information for circling strategy generation and thereby further improving its accuracy.
In order to equip the first large language model with an accurate intention recognition function in the tag circling application, the first large language model needs to be trained in advance; this embodiment illustrates the training process of the first large language model.
FIG. 4 is a schematic flow diagram of training a first large language model provided in one or more embodiments of the present specification. The method may be applied to a computer device.
Illustratively, as shown in FIG. 4, the method 400 includes:
Step 402, acquiring sample circling information and the sample circling category indicated by the sample circling information.
Before training the first large language model, its training samples need to be prepared. In order to enable the first large language model to accurately identify the circling category indicated by the circling intention from the input text, a plurality of pieces of sample circling information can be collected in advance, and the sample circling category indicated by each piece of sample circling information is manually labeled, where the sample circling category is the sample object category of the object to be circled. That is, the training samples of the first large language model include a number of first sample pairs, each first sample pair including sample circling information and its associated sample circling category.
Step 404, inputting the sample circling information into the first large language model for circling intention recognition to obtain the predicted circling category output by the first large language model.
In each training round, the sample circling information is input into the initial first large language model for circling intention recognition, and the first large language model outputs the predicted circling category (that is, the predicted object category) indicated by the sample circling information.
Step 406, training the first large language model based on the difference between the predicted circling category and the sample circling category.
If the difference between the predicted circling category and the manually labeled sample circling category is large, the first large language model has not accurately extracted the circling intention; if the difference is small, the first large language model can extract the circling intention accurately. In order for the first large language model to learn to accurately extract the object category of the object to be circled from the sample circling information, the difference between the predicted circling category and the sample circling category is used as a loss function for iterative training of the first large language model; training is complete when the value of the loss function is smaller than a preset value.
FIG. 5 is a schematic diagram of a training process for the first large language model provided in one or more embodiments of the present specification. As shown in FIG. 5, when training the first large language model, a training data set of the first large language model is first acquired, where the training data set comprises a plurality of training data pairs, each including sample circling information 501 and a corresponding sample circling category 504. In each training round, the sample circling information 501 is input into the first large language model 502 to obtain the predicted circling category 503 output by the model, and the first large language model 502 is further trained based on the loss value between the predicted circling category 503 and the sample circling category 504, until the loss value between the predicted circling category 503 and the sample circling category 504 is smaller than the preset value and the training of the first large language model 502 is completed.
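The training loop of FIG. 4 and FIG. 5 can be sketched as follows, with a toy bag-of-words classifier standing in for the first large language model; an actual LLM fine-tune would follow the same pattern of predicting the circling category, computing a loss against the labeled sample circling category, and updating parameters until the loss falls below the preset value. The sample texts, categories, and threshold below are illustrative assumptions.

```python
# Sketch: the training loop of FIG. 4/5 with a toy classifier standing in for the
# first large language model. A real LLM fine-tune would follow the same pattern:
# predict the circling category, compute a loss against the labeled sample
# category, and update parameters until the loss is below a preset value.
import torch
import torch.nn as nn

samples = [
    ("I want to know the merchandise of barbecue stores", "commodity"),
    ("circle all barbecue merchants in the city", "merchant"),
]
categories = ["commodity", "merchant", "user"]
vocab = sorted({w for text, _ in samples for w in text.lower().split()})

def featurize(text: str) -> torch.Tensor:
    words = text.lower().split()
    return torch.tensor([[float(w in words) for w in vocab]])

model = nn.Linear(len(vocab), len(categories))      # stand-in for the first large language model
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.CrossEntropyLoss()
PRESET_LOSS = 0.05                                   # illustrative "preset value"

for _ in range(200):                                 # iterative training rounds
    total = 0.0
    for text, category in samples:
        logits = model(featurize(text))              # predicted circling category scores
        loss = loss_fn(logits, torch.tensor([categories.index(category)]))
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        total += loss.item()
    if total / len(samples) < PRESET_LOSS:           # stop once below the preset value
        break
```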
In the embodiments of the present specification, in order to equip the first large language model with the circling intention recognition function, the model is trained in advance with sample circling information and the corresponding sample circling categories, so that it learns the capability of extracting the circling category.
In order to enable the second large language model to generate matched circling strategies in the tag circling application, the second large language model needs to be trained in advance; this embodiment illustrates the training process of the second large language model.
FIG. 6 is a schematic flow diagram of training a second large language model provided in one or more embodiments of the present specification. The method may be applied to a computer device.
Illustratively, as shown in FIG. 6, the method 600 includes:
Step 602, acquiring sample circling information, sample circling knowledge, and a sample circling strategy, wherein the sample circling strategy is the circling strategy corresponding to the sample circling information and the sample circling knowledge.
Before training the second large language model, its training samples need to be prepared first. Since the second large language model generates circling strategies based on the input tag circling information and target circling knowledge, a number of pieces of sample circling information and sample circling knowledge can be collected in advance, and the sample circling strategy corresponding to each pair is manually labeled. That is, the training samples of the second large language model include a number of second sample pairs, each second sample pair including sample circling information plus sample circling knowledge and the associated sample circling strategy.
Step 604, splicing the sample circling information and the sample circling knowledge into sample prompt information.
Because the second large language model is a prompt-learning model, before training it, the sample circling information and sample circling knowledge in each second sample pair are spliced into sample prompt information, and the second large language model is then trained based on the sample prompt information and the sample circling strategy.
Step 606, inputting the sample prompt information into the second large language model to obtain the predicted circling strategy output by the second large language model.
In each training round, the sample prompt information is input into the initial second large language model, which generates a matched predicted circling strategy under the prompt of the sample prompt information, where the predicted circling strategy comprises at least one predicted circling tag and the corresponding predicted tag value.
Step 608, training the second large language model based on the difference between the predicted circling strategy and the sample circling strategy.
If the difference between the predicted circling strategy and the manually labeled sample circling strategy is large, the circling strategy generated by the second large language model has low accuracy; if the difference is small, the generated circling strategy has high accuracy and can be used in the actual circling application. In order for the second large language model to learn to accurately generate circling strategies under the prompt of the sample prompt information, the difference between the predicted circling strategy and the sample circling strategy is used as a loss function for iterative training of the second large language model; training is complete when the value of the loss function is smaller than a preset value.
FIG. 7 is a schematic diagram of a training process for the second large language model provided in one or more embodiments of the present specification. As shown in FIG. 7, before training the second large language model, a training data set is first acquired, where the training data set includes a plurality of training sample pairs, each including sample circling information 701, sample circling knowledge 702, and a corresponding sample circling strategy 706. In each training round, the sample circling information 701 and the sample circling knowledge 702 are spliced into sample prompt information 703, and the sample prompt information 703 is input into the second large language model 704 to obtain the predicted circling strategy 705 output by the model; the second large language model 704 is further trained based on the loss value between the predicted circling strategy 705 and the sample circling strategy 706, until the loss value between the predicted circling strategy 705 and the sample circling strategy 706 is smaller than the preset value and the training of the second large language model 704 is completed.
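A minimal sketch of how the second sample pairs could be assembled into (sample prompt, sample circling strategy) training examples is given below; the template wording is an illustrative assumption, and the supervised fine-tuning step itself is only indicated by a comment since it depends on the model framework used.

```python
# Sketch: assemble second sample pairs into (sample prompt, sample circling strategy)
# fine-tuning examples. The prompt template wording is an illustrative assumption.
second_sample_pairs = [
    {
        "sample_circling_info": "I want to know the merchandise of barbecue stores",
        "sample_circling_knowledge": ["tag 'store': cuisine type of a store",
                                      "history: 'circle barbecue stores' -> store-barbecue"],
        "sample_circling_strategy": "store-barbecue + merchandise-all",
    },
]

def to_training_example(pair: dict) -> tuple[str, str]:
    knowledge_block = "\n".join(f"- {k}" for k in pair["sample_circling_knowledge"])
    sample_prompt = (
        "Known tag knowledge and historical circling samples:\n"
        f"{knowledge_block}\n"
        f"Circling requirement: {pair['sample_circling_info']}\n"
        "Output a circling strategy as 'tag-value' pairs joined by '+'."
    )
    return sample_prompt, pair["sample_circling_strategy"]

train_set = [to_training_example(p) for p in second_sample_pairs]
# Each (prompt, target) pair would then be fed to the second large language model's
# supervised fine-tuning routine, with the loss computed between the predicted
# circling strategy and the sample circling strategy.
```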
In other possible embodiments, the first large language model and the second large language model may be trained jointly: sample circling information is input into the first large language model to obtain the predicted circling category output by the first large language model; sample keywords are extracted from the sample circling information, and sample circling knowledge is recalled based on the predicted circling category and the sample keywords; the sample circling knowledge and the sample circling information are spliced into sample prompt information and input into the second large language model to obtain the output predicted circling strategy; and the first and second large language models are then trained according to the loss value between the predicted circling strategy and the sample circling strategy. That is, the training samples for joint training are sample circling information and the corresponding sample circling strategies.
In the embodiments of the present specification, in order to equip the second large language model with the circling strategy generation capability, the model is trained in advance with sample circling information, sample circling knowledge, and sample circling strategies, so that it learns the capability of generating circling strategies according to prompt information.
The first large language model and the second large language model are obtained by training on limited samples. Although both converge during the preliminary training, certain special samples may occur in actual application for which they cannot output an accurate target circling strategy. In order to continuously improve the model performance of the target large language model, the models can be updated in time based on user feedback during the application process.
FIG. 8 is a schematic flow diagram of another tag circling method based on a large language model provided in one or more embodiments of the present specification. The tag circling method based on a large language model provided in the embodiments of the present specification can be applied to a computer device.
Illustratively, as shown in FIG. 8, the method 800 includes:
Step 802, acquiring tag circling information, wherein the tag circling information is used for indicating description information of an object to be circled.
Step 804, processing the tag circling information through the target large language model to generate a target circling strategy matched with the tag circling information.
Step 806, displaying the generated target circling strategy.
Step 808, if a modification operation on the target circling strategy is received, acquiring the modified target circling strategy, and performing label circling based on the modified target circling strategy to obtain a target group.
The implementation of steps 802 to 808 may refer to the above embodiments, and the description of this embodiment is omitted here.
Step 810, determining a target prediction loss of the target large language model based on the difference between the target circling strategy and the modified target circling strategy.
Step 812, updating model parameters of the target large language model based on the target prediction loss.
When the target circling strategy generated by the target large language model differs from the circling strategy required by the user, the user can modify the target circling strategy. The computer device then acquires the modified target circling strategy, which can be used as the labeled circling strategy corresponding to the label circling information for iterative optimization of the target large language model. Specifically, the optimization method may be: determining the target prediction loss of the target large language model based on the difference between the generated target circling strategy and the modified target circling strategy, and optimizing the target large language model based on the target prediction loss so as to update the model parameters of the target large language model.
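As a hedged illustration of steps 810 and 812, the target prediction loss can be realized with the same token-level cross-entropy pattern used during fine-tuning, treating the user-modified strategy as the label. The model and tokenizer handles and the optimizer are assumptions of this sketch; the disclosure does not prescribe a specific loss function.

```python
import torch

def target_prediction_loss(target_llm, tokenizer, prompt: str,
                           modified_strategy: str) -> torch.Tensor:
    # Step 810: the modified target circling strategy is used as the label
    # for the prompt that produced the original (unsatisfactory) strategy.
    prompt_ids = tokenizer(prompt).input_ids
    label_ids = tokenizer(modified_strategy).input_ids
    input_ids = torch.tensor([prompt_ids + label_ids])
    labels = torch.tensor([[-100] * len(prompt_ids) + label_ids])  # ignore prompt tokens
    return target_llm(input_ids=input_ids, labels=labels).loss

def apply_feedback_update(target_llm, optimizer, loss: torch.Tensor) -> None:
    # Step 812: one gradient step on the target prediction loss.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```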
Because the target large language model comprises a first large language model and a second large language model, in one possible implementation only the second large language model needs to be optimized; in this case, the target prediction loss between the target circling strategy and the modified target circling strategy is used to optimize the second large language model so as to update its model parameters. In another possible implementation, the first large language model and the second large language model are optimized together; in this case, the target prediction loss between the target circling strategy and the modified target circling strategy is used to optimize both models so as to update the model parameters of the first large language model and the second large language model.
Optionally, in order to further improve the optimization accuracy, in the case of receiving a modification operation on the target circling strategy, the circling intention output by the first large language model, that is, the target object category indicated by the label circling information, may also be displayed, and the user determines whether the circling intention was recognized accurately. If a confirmation operation of the user on the target object category is received, it is determined that the first large language model does not need to be optimized, and only the second large language model is optimized based on the target prediction loss between the target circling strategy and the modified target circling strategy to update its model parameters. If a modification operation of the user on the target object category is received, it is determined that the circling intention recognition of the first large language model is erroneous, so that errors exist in both the first large language model and the second large language model. In this case, the modified target object category may also be acquired, the first large language model may be trained and optimized based on a first prediction loss between the target object category and the modified target object category, and the second large language model may be trained and optimized based on a second prediction loss between the target circling strategy and the modified target circling strategy, so as to update the model parameters of the first large language model and the second large language model.
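The selective update logic above can be summarized by the following sketch. The loss helpers and optimizer callbacks are hypothetical placeholders; the disclosure specifies only which model(s) are optimized under which feedback, not how the losses are implemented.

```python
def update_models_from_feedback(predicted_category, user_category,
                                generated_strategy, modified_strategy,
                                category_loss_fn, strategy_loss_fn,
                                optimize_first_llm, optimize_second_llm):
    """Decide which large language model(s) to optimize based on user feedback."""
    if modified_strategy is None or modified_strategy == generated_strategy:
        return  # no modification operation received; nothing to optimize
    if user_category == predicted_category:
        # Circling intention confirmed: only the second model needs updating.
        optimize_second_llm(strategy_loss_fn(generated_strategy, modified_strategy))
    else:
        # Circling intention modified: both models carry errors.
        optimize_first_llm(category_loss_fn(predicted_category, user_category))       # first prediction loss
        optimize_second_llm(strategy_loss_fn(generated_strategy, modified_strategy))  # second prediction loss
```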
In the embodiment of the present disclosure, in order to avoid an ineffective circling process, after the target circling strategy is generated, it may also be displayed, so that the user can finally decide whether to continue with the subsequent circling process. If the target circling strategy differs from the user's expectation, the user can directly modify it, so that the subsequent circling process is performed according to the modified circling strategy; a simple confirmation operation thus improves the accuracy of subsequent circling. In addition, after the user modifies the circling strategy, the target large language model can be re-optimized based on the difference between the modified target circling strategy and the target circling strategy generated by the model, thereby improving the accuracy with which the large language model subsequently generates circling strategies.
In other possible application scenarios, if multiple circling strategies are generated at the same time, some of them may not meet the actual use requirement; for example, a certain tag combination is rarely used in the current application scenario, or has no subsequent analysis value. In order to make the circling strategy finally recommended to the user better meet the requirements, after the circling strategies are generated, they also need to be screened according to a preset rule.
FIG. 9 is a schematic flow chart of another label circling method based on a large language model provided in one or more embodiments of the present disclosure. The label circling method based on the large language model provided by the embodiment of the specification can be applied to a computer device.
Illustratively, as shown in FIG. 9, the method 900 includes:
In step 902, tag circling information is obtained, where the tag circling information is used to indicate description information of an object to be circled.
The implementation of step 902 may refer to the above embodiments, which are not described herein.
Step 904, processing the label circling information through the target large language model to generate at least two candidate circling strategies.
In a possible implementation manner, after the computer device acquires the tag circle selection information, the first large language model in the target large language model is used for carrying out intention extraction on the tag circle selection information to obtain a target object category to which the object to be circled belongs; extracting keywords from the tag circle selection information to obtain target keywords so as to recall target circle selection knowledge (tag knowledge and circle selection strategy knowledge) from a tag knowledge base and a sample knowledge base based on the target keywords and the target object categories respectively; and then splicing the label circling information and the target circling knowledge to obtain target prompt information, inputting the target prompt information into a second large language model in the target large language model, and generating a circling strategy to obtain at least two candidate circling strategies.
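The end-to-end flow described above can be sketched as follows. The wrapper methods `first_llm.classify_intent`, `tag_kb.recall`, `sample_kb.recall`, `extract_keywords`, and `second_llm.generate` are hypothetical interfaces introduced only for illustration; the prompt template is likewise an assumption.

```python
from typing import List

def generate_candidate_strategies(circling_info: str,
                                  first_llm, second_llm,
                                  tag_kb, sample_kb,
                                  extract_keywords,
                                  num_candidates: int = 3) -> List[str]:
    # 1. Intention extraction: target object category of the object to be circled.
    category = first_llm.classify_intent(circling_info)
    # 2. Keyword extraction from the tag circle selection information.
    keywords = extract_keywords(circling_info)
    # 3. Knowledge recall: tag knowledge from the tag knowledge base and
    #    circling-strategy knowledge from the sample knowledge base.
    tag_knowledge = tag_kb.recall(category=category, keywords=keywords)
    sample_knowledge = sample_kb.recall(keywords=keywords)
    # 4. Splice the circling information and the recalled knowledge into a prompt.
    prompt = (f"Circling information:\n{circling_info}\n\n"
              f"Tag knowledge:\n{tag_knowledge}\n\n"
              f"Historical samples:\n{sample_knowledge}\n\n"
              f"Generate {num_candidates} candidate circling strategies.")
    # 5. The second large language model generates the candidate strategies.
    return second_llm.generate(prompt, num_return=num_candidates)
```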
Step 906, determining candidate use heat of each candidate circle selection strategy in the target circle selection scene.
In the case where multiple similar candidate circle selection strategies are generated, some strategies may satisfy the tag circle selection information but be used infrequently, or not normally be employed, in the current circle selection scenario; however, the target circle selection strategy to be finally executed should be a strategy that would actually be adopted in a real circle selection process. Therefore, in a possible implementation manner, the candidate use heat of each candidate circle selection strategy in the target circle selection scene is determined, and the target circle selection strategy is then selected according to the candidate use heat.
Alternatively, the candidate use heat may be determined as follows: a heat prediction model is pre-established for predicting the use heat of each input candidate circle selection strategy, so as to obtain the candidate use heat of each candidate circle selection strategy. The heat prediction model is constructed based on a plurality of historical circle selection strategies and can be used to analyze how frequently a candidate circle selection strategy is used among the historical circle selection strategies.
Step 908, determining the candidate circle selection strategy with the highest candidate use heat as the target circle selection strategy.
The higher the candidate use heat, the more frequently the candidate circle selection strategy is used in the target circle selection scene, and the more likely it is to be adopted in an actual circle selection process; therefore, the candidate circle selection strategy with the highest candidate use heat is determined as the target circle selection strategy.
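One way to realize steps 906 and 908 is sketched below, assuming the "use heat" of a candidate strategy is estimated from how often similar historical strategies were used. The embedding function, the history format, and the similarity weighting are assumptions of this sketch, not requirements of the disclosure.

```python
import numpy as np

def predict_usage_heat(candidate_strategy: str,
                       history: list,          # [(strategy_text, use_count), ...]
                       embed) -> float:
    """Similarity-weighted usage frequency over historical circling strategies."""
    cand_vec = embed(candidate_strategy)
    heat = 0.0
    for hist_text, use_count in history:
        hist_vec = embed(hist_text)
        sim = float(np.dot(cand_vec, hist_vec) /
                    (np.linalg.norm(cand_vec) * np.linalg.norm(hist_vec) + 1e-8))
        heat += max(sim, 0.0) * use_count   # more weight to similar, frequently used strategies
    return heat

def pick_target_strategy(candidates, history, embed):
    # Step 908: the candidate with the highest use heat becomes the target strategy.
    return max(candidates, key=lambda c: predict_usage_heat(c, history, embed))
```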
Alternatively, in other possible embodiments, a policy rule may be preset, where the policy rule defines the condition features of an acceptable circle selection strategy; each candidate circle selection strategy is filtered based on the policy rule, and a candidate circle selection strategy that passes the policy rule is determined as the target circle selection strategy.
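A hedged sketch of such a preset policy-rule filter follows. The rule representation (a set of allowed tags plus a cap on the number of conditions) is purely illustrative; the disclosure only states that the policy rule defines the condition features of an acceptable strategy.

```python
from typing import Iterable, List, Tuple

Strategy = List[Tuple[str, str]]  # a circling strategy as (tag, tag value) pairs

def filter_by_policy_rule(candidates: Iterable[Strategy],
                          allowed_tags: set,
                          max_conditions: int = 5) -> List[Strategy]:
    passed = []
    for strategy in candidates:
        tags = [tag for tag, _ in strategy]
        if all(t in allowed_tags for t in tags) and len(tags) <= max_conditions:
            passed.append(strategy)  # strategy satisfies the preset policy rule
    return passed
```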
Optionally, if the tag circling information specifies the number of circle selection strategies to be generated, the corresponding number of candidate circle selection strategies is directly determined as the target circle selection strategies.
In step 910, label circling is performed based on the circled label and the label value, so as to obtain a target group, where the target group includes at least one target circled object indicated by label circling information.
FIG. 10 is a schematic diagram of a process for generating a circling strategy using a large language model according to one or more embodiments of the present disclosure. As shown in FIG. 10, in the intention recognition stage, after the input description information (tag circling information) of the object to be circled is obtained, circling intention recognition is performed based on an LLM (the first large language model); if the intention is clear, the circling intention (target object category) can be recognized as at least one of crowd circling, commodity circling, merchant circling, and analysis insight, and if the intention is not clear, prompt information is displayed to ask the user about the circling intention. In the label retrieval stage, target keywords are extracted from the description information, and similar labels and historical samples are recalled from a pre-constructed knowledge base by using M3E based on the target keywords and the circling intention, so as to obtain target circling knowledge; if similar labels can be recalled, the subsequent label combination generation stage is executed, and if no similar label is recalled, the label requirement (namely the label circling information) is collected again. In the label combination generation stage, the description information, the recalled similar labels, and the historical samples are spliced to construct target prompt information, and the target prompt information is input into an LLM (the second large language model) to generate a label combination (namely the target circling strategy). In the group generation stage, it is first judged whether the target circling strategy meets a preset rule; if so, a group is generated based on the target circling strategy, and if the rule requirement is not met, the target circling strategy is tuned (or modified) according to the rule, and the group is generated based on the tuned or modified circling strategy. In the automated insight stage, a group report is generated by analyzing individual objects in the group; alternatively, the group report may be presented as a visual chart. Specifically, in the knowledge base construction stage, original tag data (or information) of users, merchants, and commodities, such as tag names, tag descriptions, and tag values, can be acquired and processed into a structured tag library; and historical circling strategies and the corresponding label combinations can be acquired and associated as historical samples to obtain a structured sample library.
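As a hedged illustration of the knowledge base construction and label retrieval stages, the sketch below builds a structured tag library and recalls similar entries with a text-embedding model in the spirit of M3E. The `sentence_transformers` usage and the `moka-ai/m3e-base` checkpoint name are assumptions of this sketch; any text-embedding model and vector index could fill the same role, and the entry fields shown (name, description, values) merely mirror the examples in the paragraph above.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class KnowledgeBase:
    """Structured tag/sample library with embedding-based recall."""

    def __init__(self, model_name: str = "moka-ai/m3e-base"):  # assumed M3E checkpoint id
        self.encoder = SentenceTransformer(model_name)
        self.entries, self.vectors = [], None

    def build(self, entries):
        # entries: e.g. [{"name": ..., "description": ..., "values": [...]}, ...]
        self.entries = entries
        texts = [f'{e["name"]} {e["description"]}' for e in entries]
        self.vectors = self.encoder.encode(texts, normalize_embeddings=True)

    def recall(self, query: str, top_k: int = 5):
        # Embed the query (target keywords plus circling intention) and return
        # the most similar entries by cosine similarity.
        q = self.encoder.encode([query], normalize_embeddings=True)[0]
        scores = self.vectors @ q
        top = np.argsort(-scores)[:top_k]
        return [self.entries[i] for i in top]
```

The same class could back both the tag library and the sample library, with sample entries holding historical circling information and the associated label combinations instead of tag metadata.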
The implementation of step 910 may refer to the above embodiments, which are not described herein.
In the embodiment of the specification, considering that a plurality of candidate circle selection strategies can be generated at the same time, the candidate circle selection strategies can be screened based on their candidate use heat, so that the finally determined target circle selection strategy not only conforms to the label circle selection information but also meets the requirements of the target circle selection scene; this prevents circle selection strategies that would not be adopted in a real circle selection process from being output, and improves the accuracy with which the target circle selection strategy is determined.
FIG. 11 is a schematic structural diagram of a label circling apparatus based on a large language model according to one or more embodiments of the present disclosure. The apparatus is applied to a computer device.
Illustratively, as shown in FIG. 11, the apparatus 1100 includes:
the first obtaining module 1102 is configured to obtain tag circling information, where the tag circling information is used to indicate description information of an object to be circled;
A first generating module 1104, configured to process the tag circling information through a target large language model, and generate a target circling policy that is matched with the tag circling information, where the target circling policy includes at least one circling tag of the object to be circled and a tag value corresponding to the circling tag;
The first circling module 1106 is configured to perform label circling based on the circling label and the label value, so as to obtain a target group, where the target group includes at least one target circling object indicated by the label circling information.
Optionally, the first generating module 1104 is further configured to:
Inputting the label circling information into a first large language model to identify the circling intention, and determining the target object category of the object to be circled indicated by the label circling information;
carrying out knowledge recall based on the target object category to obtain at least one target circling knowledge, wherein the target circling knowledge is used for indicating label information and circling strategy information related to the object to be circled;
and inputting the target circling knowledge and the tag circling information into a second large language model, and generating the target circling strategy matched with the tag circling information.
Optionally, the first generating module 1104 is further configured to:
splicing the target circling knowledge and the label circling information to obtain target prompt information;
And inputting the target prompt information into the second large language model to obtain the target circling strategy output by the second large language model.
Optionally, the first generating module 1104 is further configured to:
Extracting keywords from the label circling information to obtain target keywords, wherein the target keywords are words related to the object to be circled;
And carrying out knowledge recall based on the target keywords and the target object categories to obtain at least one target circling knowledge.
Optionally, the first generating module 1104 is further configured to:
Selecting candidate tag knowledge under the target object category from a tag knowledge base, wherein tag information under each candidate object category is stored in the tag knowledge base;
Matching is carried out from the candidate tag knowledge based on the target keyword, so that first circle selection knowledge is obtained;
Matching is carried out from a sample knowledge base based on the target keywords, so as to obtain second circle selection knowledge, wherein the corresponding relation between history circle selection information and history circle selection strategies is stored in the sample knowledge base;
and determining the first circle selection knowledge and the second circle selection knowledge as the recalled target circling knowledge.
Optionally, the first generating module 1104 is further configured to:
processing the tag circling information through the target large language model to generate at least two candidate circling strategies;
Determining candidate use heat of each candidate circle selection strategy in a target circle selection scene;
And determining the candidate circle selection strategy with the highest candidate use heat as the target circle selection strategy.
Optionally, the first circling module 1106 is configured to:
Displaying the generated target circling strategy;
If a confirmation operation of the target circling strategy is received, performing label circling based on the target circling strategy to obtain the target group;
the apparatus further comprises:
And the second circling module is used for acquiring the modified target circling strategy if receiving the modification operation of the target circling strategy, and performing label circling based on the modified target circling strategy to obtain the target group.
Optionally, the apparatus further comprises:
a first determining module configured to determine a target prediction loss of the target large language model based on a difference between the target circling strategy and the modified target circling strategy;
and the updating module is used for updating the model parameters of the target large language model based on the target prediction loss.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring sample circling information and sample circling categories indicated by the sample circling information;
The second determining module is used for inputting the sample circling information into the first large language model to identify the circling intention, so as to obtain a predicted circling category output by the first large language model;
And the first training module is used for training the first large language model based on the difference between the predicted circling category and the sample circling category.
Optionally, the apparatus further comprises:
The third acquisition module is used for acquiring sample circling information, sample circling knowledge and a sample circling strategy, wherein the sample circling strategy is a circling strategy corresponding to the sample circling information and the sample circling knowledge;
The splicing module is used for splicing the sample circle selection information and the sample circle selection knowledge into sample prompt information;
the third determining module is used for inputting the sample prompt information into the second large language model to obtain a prediction circle selection strategy output by the second large language model;
And the second training module is used for training the second large language model based on the difference between the prediction circle selection strategy and the sample circle selection strategy.
The embodiments of the present specification provide a computer device including a processor and a memory, the memory storing at least one program, the at least one program being loaded and executed by the processor to implement the label circling method based on a large language model as provided in the above alternative embodiments. Optionally, the computer device may be a label circling platform.
FIG. 12 is a schematic diagram of a computer device provided in one or more embodiments of the present disclosure.
Illustratively, as shown in FIG. 12, the computer device 1200 includes: a memory 1201, a processor 1202, and a computer program 1203 stored in the memory 1201 and executable on the processor 1202, wherein the processor 1202, when executing the computer program 1203, causes the computer device to perform any of the label circling methods based on a large language model described above.
One or more embodiments of the present disclosure may divide the functional modules of the computer device according to the above method examples; for example, each function may correspond to one functional module, or two or more functions may be integrated into one processing module, where the integrated module may be implemented in hardware. It should be noted that, in one or more embodiments of the present disclosure, the division of modules is merely a logical function division, and other division manners may be adopted in practice.
In the case where respective functional modules are divided with corresponding respective functions, the computer apparatus may include: the device comprises a first acquisition module, a first generation module, a first circle selection module and the like. It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein.
The computer device provided by one or more embodiments of the present disclosure is configured to perform the above label circling method based on a large language model, and therefore can achieve the same effects as the method implementations described above.
In the case of an integrated unit, the computer device may comprise a processing module and a memory module. The processing module can be used for controlling and managing the actions of the computer device. The memory module may be used to support the computer device in storing program code, data, and the like.
Wherein, the processing module may be a processor or a controller that may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with one or more embodiments disclosed herein. The processor may also be a combination implementing computing functions, for example, a combination including one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, etc.; and the memory module may be a memory.
The computer device provided in one or more embodiments of the present description may be a chip, a component, or a module, in particular, and may include a processor and a memory connected to each other; the memory is used for storing instructions, and when the computer device runs, the processor can call and execute the instructions so that the chip executes any tag circling method based on the large language model.
One or more embodiments of the present specification provide a computer-readable storage medium having instructions stored therein that, when executed on a computer or processor, cause the computer or processor to perform any of the label circling methods based on a large language model described above.
One or more embodiments of the present specification also provide a computer program product comprising instructions that, when executed on a computer or processor, cause the computer or processor to perform the above related steps to implement any of the label circling methods based on a large language model described above.
The computer device, the computer-readable storage medium, the computer program product, or the chip containing the instructions provided in one or more embodiments of the present disclosure are used to execute the corresponding label circling method based on a large language model; therefore, the benefits achieved can be referred to the benefits of the corresponding method provided above and are not repeated here.
It will be appreciated by those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional modules is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules to perform all or part of the functions described above.
In one or more embodiments of the present disclosure, it should be understood that the disclosed apparatus and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The foregoing is merely illustrative of one or more embodiments of the present disclosure, and the scope of the one or more embodiments of the present disclosure is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the one or more embodiments of the present disclosure. Therefore, the scope of one or more embodiments of the present disclosure shall be determined by the claims.