Disclosure of Invention
The disclosure provides a content recommendation method, a content recommendation apparatus, a content recommendation system, an electronic device, a computer-readable storage medium and a computer program product, which at least address the problems in the related art that a recommendation system depends excessively on a user's historical data, thereby increasing the user's cost of use, and that, because the user cannot intervene in the recommendation model, the accuracy of recommended content is low and user churn tends to increase. The technical solution of the present disclosure is as follows:
According to a first aspect of embodiments of the present disclosure, a content recommendation method is provided. The method includes: receiving a model selection operation of a first user and selecting at least one first recommendation model for presentation; in response to a model merging instruction of the first user, merging the at least one first recommendation model corresponding to the model merging instruction into a current recommendation model to generate a first merged model; and, in response to a merging completion instruction of the first user, determining target recommended content for the first user based on the obtained first merged model.
In an exemplary embodiment of the disclosure, the method further includes: in response to a model removal instruction of the first user, removing a merged recommendation model from the first merged model to generate a second merged model; and, in response to a removal completion instruction of the first user, determining target recommended content for the first user based on the obtained second merged model.
In an exemplary embodiment of the disclosure, the first recommendation model is a recommendation model corresponding to a second user, and the second user is a user who has enabled a model sharing option.
In an exemplary embodiment of the disclosure, receiving the model selection operation of the first user and selecting at least one first recommendation model for presentation further includes: receiving the model selection operation performed by the first user based on a model selection area, determining a model keyword corresponding to the model selection operation, and selecting at least one first recommendation model from a model library for presentation on the interactive interface according to the model keyword.
In an exemplary embodiment of the disclosure, determining the target recommended content for the first user based on the obtained first merged model includes: determining a model weight of each recommendation model in the first merged model, determining a content acquisition proportion of the first user for each recommendation model according to each model weight, and allocating a corresponding number of content requests to each recommendation model according to the content acquisition proportions, so as to determine the target recommended content according to the numbers of content requests.
In an exemplary embodiment of the disclosure, determining the model weight of each recommendation model in the first merged model includes: obtaining historical recommendation data of each recommendation model in the first merged model, the historical recommendation data including a plurality of recommendation influence factors; determining a factor weight of each recommendation influence factor based on the historical recommendation data; and performing a weighted calculation based on the plurality of factor weights to determine the model weight.
In an exemplary embodiment of the disclosure, determining the model weight of each recommendation model in the first merged model includes: determining a content tag of each recommendation model in the first merged model, determining a model priority of each recommendation model according to the content tag, and determining the model weight corresponding to each recommendation model according to the model priority.
In an exemplary embodiment of the present disclosure, the method further includes: determining an associated user associated with the first user; generating an associated user group based on the first user and the associated user; in response to a model training operation of any user in the associated user group, determining associated group recommended content corresponding to the associated user group; and presenting the associated group recommended content to each user in the associated user group.
According to a second aspect of embodiments of the present disclosure, a content recommendation system is provided, which includes: a user side configured to provide a model operation interface, display target recommended content, and receive a model operation performed by a user based on the model operation interface, the model operation including a model selection operation, a model removal operation and a model sharing operation; a model training side configured to merge a first recommendation model determined based on the model selection operation into a current recommendation model to generate a first merged model, and to remove at least one recommendation model from the first merged model according to the model removal operation to generate a second merged model; and a content recommendation platform configured to determine the target recommended content for the user according to the merged model and send the target recommended content to the user side.
According to a third aspect of embodiments of the present disclosure, there is provided a content recommendation apparatus including: a model presentation module configured to receive a model selection operation of a first user and select at least one first recommendation model for presentation; a model merging module configured to, in response to a model merging instruction of the first user, merge at least one first recommendation model corresponding to the model merging instruction into a current recommendation model to generate a first merged model; and a first content recommendation module configured to, in response to a merging completion instruction of the first user, determine target recommended content for the first user based on the obtained first merged model.
In an exemplary embodiment of the present disclosure, the content recommendation apparatus further includes a second content recommendation module configured to, in response to a model removal instruction of the first user, remove a merged recommendation model from the first merged model to generate a second merged model, and, in response to a removal completion instruction of the first user, determine target recommended content for the first user based on the obtained second merged model.
In an exemplary embodiment of the disclosure, the model presentation module further includes a model presentation unit configured to receive the model selection operation performed by the first user based on a model selection area, determine a model keyword corresponding to the model selection operation, and select at least one first recommendation model from a model library for presentation on the interactive interface according to the model keyword.
In an exemplary embodiment of the present disclosure, the first content recommendation module includes a first content recommendation unit configured to determine a model weight of each recommendation model in the first merged model, determine a content acquisition proportion of the first user for each recommendation model according to each model weight, and allocate a corresponding number of content requests to each recommendation model according to the content acquisition proportions, so as to determine the target recommended content according to the numbers of content requests.
In an exemplary embodiment of the present disclosure, the first content recommendation unit includes a first weight determination subunit configured to acquire historical recommendation data of each recommendation model in the first merged model, the historical recommendation data including a plurality of recommendation influence factors, determine a factor weight of each recommendation influence factor from the historical recommendation data, and perform a weighted calculation based on the plurality of factor weights to determine the model weight.
In an exemplary embodiment of the present disclosure, the first content recommendation unit includes a second weight determination subunit configured to determine a content tag of each recommendation model in the first merged model, determine a model priority of each recommendation model according to the content tag, and determine the model weight corresponding to each recommendation model according to the model priority.
In an exemplary embodiment of the present disclosure, the content recommendation apparatus further includes a third content recommendation module configured to determine an associated user associated with the first user, generate an associated user group based on the first user and the associated user, determine, in response to a model training operation of any user in the associated user group, associated group recommended content corresponding to the associated user group, and present the associated group recommended content to each user in the associated user group.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to execute the instructions to implement any one of the content recommendation methods described above.
According to a fifth aspect of the present disclosure, there is provided a computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform any one of the content recommendation methods described above.
According to a sixth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program or instructions which, when executed by a processor, implement any one of the content recommendation methods described above.
The technical solutions provided by the embodiments of the present disclosure bring at least the following beneficial effects:
According to the content recommendation method, on the one hand, a user can autonomously select a recommendation model for model training, and this autonomous, rapid learning of the recommendation model improves the accuracy of recommended content distribution. On the other hand, the user is no longer unaware of the recommendation model: the user is given the ability to autonomously control how the recommendation algorithm model learns, which greatly improves the user's sense of control. Finally, because the user can train the recommendation algorithm quickly and on their own initiative, the adaptation cost of a cold-start user is greatly reduced, and the churn rate of the related users can be greatly reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Existing algorithmic recommendation systems may include collaborative filtering systems, content-based recommendation systems, and hybrid recommendation systems, among others. The two main types are content-based recommendation systems and collaborative filtering systems. Collaborative filtering systems may build models based on historical user behavior, such as content a user has browsed, liked or commented on, in conjunction with similar decisions by other users; these models may be used to predict what content a user may be interested in or how interested a user is in particular content. Content-based recommendation systems may utilize discrete features of content to recommend similar content with similar properties.
However, current recommendation algorithms rely heavily on a user's historical data as the basis for judgment, so for "cold start" users and users whose preferences have changed, it is difficult to judge what their real needs are; the recommendation algorithm can only keep collecting the user's usage habits to train the algorithm model, which increases the user's cost of use and the likelihood of user churn. For the user, algorithm training is imperceptible, yet it strongly influences the content the user sees; many users do not know how the recommendation algorithm model is formed, cannot intervene in the model, and therefore have a weak sense of control over it.
Take two user groups as examples. For a new user of a given platform, the content the user sees is recommended entirely by a cold-start content model; this is probably not the content the user wants to see, yet the user cannot intervene. For a user who has switched from another platform, the cold-start content consists of what was popular in the past; with today's highly homogeneous content, this easily leads to user churn.
Based on this, according to an embodiment of the present disclosure, a content recommendation method, a content recommendation apparatus, a content recommendation system, an electronic device, a computer-readable storage medium, and a computer program product are proposed.
Fig. 1 is a flowchart illustrating a content recommendation method according to an exemplary embodiment. As shown in fig. 1, the content recommendation method may be used in a computer device, where the computer device described in the present disclosure may include a mobile terminal device such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer or a personal digital assistant (PDA), or a fixed terminal device such as a desktop computer. The present exemplary embodiment is illustrated with the method applied to a computer device; it is understood that the method may also be applied to a server, or to a system including a computer device and a server and implemented through interaction between them. The method specifically includes the following steps.
In step S110, a model selection operation of the first user is received, and at least one first recommendation model is selected for presentation.
In some exemplary embodiments of the present disclosure, the first user may be a user who is autonomously performing recommendation model training operations; for example, the first user may be a new user who has just begun to use a certain recommendation platform. The model selection operation may be a user operation by which the first user selects a recommendation model. The first recommendation model may be a recommendation model available for selection by the first user.
For a given recommendation platform, if a first user is not interested in the currently recommended content and wants to change the content the recommendation platform recommends, the first user can autonomously perform a recommendation model training operation; at this time, the first user can select one or more first recommendation models to be displayed in the user interaction interface. For example, the first user may be a new user who has just completed registration on the recommendation platform, or a user who wants to update the relevant recommended content in the recommendation platform. Specifically, the first user may acquire another user's recommendation model as the first recommendation model through that user's homepage, or may select a specific user from a personal user homepage and use that user's recommendation model as the first recommendation model. When the model selection operation of the first user is received, the user interaction interface can display the selected first recommendation model.
In step S120, in response to a model merging instruction of the first user, at least one first recommendation model corresponding to the model merging instruction is merged into the current recommendation model to generate a first merged model.
In some exemplary embodiments of the present disclosure, the model merging instruction may be a processing instruction corresponding to a model merging operation on a plurality of recommendation algorithm models. The current recommendation model may be the recommendation algorithm model initially corresponding to the first user. The first merged model may be the recommendation model obtained after merge training is performed on the first recommendation model and the current recommendation model.
After the first user performs the model selection operation, one or more first recommendation models are displayed in the user interaction interface. The first user can then select the recommendation models to be merged and trained with the current recommendation model; when the model merging operation of the first user is received, a corresponding model merging instruction is generated. In response to the model merging instruction of the first user, the recommendation platform merges the one or more first recommendation models corresponding to the instruction into the current recommendation model and performs merge training on the first recommendation models and the current recommendation model to generate the first merged model.
For example, when the current recommendation model corresponding to the first user is "recommendation algorithm 1" and the first user selects another user's recommendation algorithm model, such as "recommendation algorithm 2", as the first recommendation model, "recommendation algorithm 2" and "recommendation algorithm 1" may be merged and trained to obtain the merge-trained model, that is, the first merged model "recommendation algorithm 1 + recommendation algorithm 2".
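For illustration only, the following sketch shows one way such a merge could behave; the RecommendationModel/MergedModel classes and the averaged-score combination strategy are assumptions for this example, not the merge training procedure prescribed by the disclosure.

```python
# Minimal sketch: merging "recommendation algorithm 2" into the current
# "recommendation algorithm 1" and ranking candidates with the combined model.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class RecommendationModel:
    name: str
    # Maps a candidate content id to a relevance score for the user (assumed interface).
    score: Callable[[str], float]


@dataclass
class MergedModel:
    members: List[RecommendationModel] = field(default_factory=list)

    def merge(self, model: RecommendationModel) -> None:
        """Merge another first recommendation model into the current model."""
        self.members.append(model)

    def rank(self, candidates: List[str]) -> List[str]:
        """Rank candidates by the averaged score of all merged models."""
        def combined(cid: str) -> float:
            return sum(m.score(cid) for m in self.members) / len(self.members)
        return sorted(candidates, key=combined, reverse=True)


# "Recommendation algorithm 1" (current model) plus "recommendation algorithm 2"
# (selected from another user) form the first merged model.
current = MergedModel([RecommendationModel("algo1_sports", lambda c: 1.0 if "sport" in c else 0.1)])
current.merge(RecommendationModel("algo2_music", lambda c: 1.0 if "music" in c else 0.1))
print(current.rank(["sport_clip", "music_clip", "news_clip"]))
```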
In step S130, in response to a merging completion instruction of the first user, target recommended content is determined for the first user based on the obtained first merged model.
In some exemplary embodiments of the present disclosure, the merging completion instruction may be an instruction indicating that merge training of the plurality of recommendation algorithm models is complete. The target recommended content may be the recommended content pushed by the recommendation platform to the first user according to the merged model; for example, the target recommended content may be recommended content determined according to the first merged model, and may include various types of information content such as video content and graphic-text content.
After the first user selects the first recommendation models, they can be merged into the current recommendation model one by one; when the first user has merged all the selected first recommendation models into the current recommendation model, a merging completion instruction is generated. In response to the merging completion instruction of the first user, the recommendation platform performs merge training based on the first recommendation models and the current recommendation model to obtain the first merged model, and determines the target recommended content for the user according to the obtained first merged model.
For example, when the recommended content corresponding to "recommendation algorithm 1" is sports content and the recommended content corresponding to "recommendation algorithm 2" is music content, then after the model merging process the obtained first merged model may push "sports + music" recommended content, that is, the target recommended content, to the first user.
According to the content recommendation method in the embodiments of the present disclosure, on the one hand, a user can autonomously select a recommendation model for model training, and this autonomous, rapid learning of the recommendation model improves the accuracy of recommended content distribution. On the other hand, the user is no longer unaware of the recommendation model: the user is given the ability to autonomously control how the recommendation algorithm model learns, which greatly improves the user's sense of control. Finally, because the user can train the recommendation algorithm quickly and on their own initiative, the adaptation cost of a cold-start user is greatly reduced, and the churn rate of the related users can be greatly reduced.
Next, a content recommendation method in the present exemplary embodiment will be further described.
In an exemplary embodiment of the present disclosure, in response to a model removal instruction of the first user, a merged recommendation model is removed from the first merged model to generate a second merged model, and, in response to a removal completion instruction of the first user, target recommended content is determined for the first user based on the obtained second merged model.
The model removal instruction may be an operation instruction for removing a specified recommendation model from the first merged model. The second merged model may be the recommendation model obtained by removing one or more merged recommendation models from the first merged model through the model removal instruction; the second merged model may contain only one recommendation algorithm model. The target recommended content here is the recommended content the recommendation platform pushes to the first user based on the second merged model.
When the recommendation platform uses the first merged model to recommend content to the first user, if the first user no longer wants certain types of recommended content to be pushed, one or more merged recommendation models can be removed from the first merged model through a model removal operation. When the first user performs the model removal operation, a corresponding model removal instruction is generated, and in response the recommendation platform removes the merged recommendation model from the first merged model to obtain the second merged model. The first user may remove several merged recommendation models through multiple model removal operations; when the first user finishes the model removal operations, a corresponding removal completion instruction is generated. In response to the removal completion instruction of the first user, the target recommended content may be determined for the first user based on the generated second merged model.
For example, after multiple rounds of merge training by the first user, the current first merged model pushes recommended content such as music, movies, sports and pets to the first user. If the first user no longer wants to receive movie and pet recommendations, the recommendation algorithm models corresponding to movies and pets can be removed from the first merged model through the model removal instruction; the second merged model currently corresponding to the first user then contains only the music and sports recommendation algorithm models, and the target recommended content can be determined for the first user according to these models. The related processing in response to the model removal instruction can further improve how well the recommended content matches the user.
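A minimal sketch of this removal step follows; the model identifiers and the constraint that at least one model must remain are assumptions made for illustration.

```python
# Sketch of handling a model removal instruction: dropping merged models from the
# first merged model to obtain the second merged model.
from typing import List, Set


def remove_models(first_merged_model: List[str], to_remove: Set[str]) -> List[str]:
    """Return the second merged model after removing the specified models."""
    second_merged_model = [m for m in first_merged_model if m not in to_remove]
    if not second_merged_model:
        raise ValueError("at least one recommendation model must remain")
    return second_merged_model


# First merged model covers music, movies, sports and pets; the user removes the
# movie and pet models, leaving only music and sports.
first_merged_model = ["music_model", "movie_model", "sports_model", "pet_model"]
print(remove_models(first_merged_model, {"movie_model", "pet_model"}))
```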
In an exemplary embodiment of the disclosure, the first recommendation model is a recommendation model corresponding to a second user, and the second user is a user who has enabled the model sharing option.
The second user may be a user who has a certain association with the first user; for example, the second user may be a user in the first user's friend list, a user the first user follows, or a highly followed user on the recommendation platform (such as an influencer). The recommendation model corresponding to the second user may be the recommendation algorithm model the recommendation platform uses to push recommended content to the second user. The model sharing option may be a configuration option for sharing the recommendation algorithm model used by a user with other users for merge training.
In existing content recommendation schemes, the user's historical data must be analyzed continuously during model training and learning, so the model optimization cycle is long. To address this, the present disclosure provides an actively trained content recommendation scheme. Specifically, the recommendation platform can provide a configuration item, namely the model sharing option, that lets a user decide whether to open the content recommendation model they use to others. If a user enables model sharing, other users can autonomously choose to learn from that user's recommendation model to train their own recommendation algorithm model. Referring to fig. 2, fig. 2 is an interface diagram illustrating a user enabling the model sharing option according to an exemplary embodiment. For example, a first user on the recommendation platform may enable model sharing in the "privacy settings" 220 of the "personal center" section by clicking the control 210 on their own "personal center" page, for example by turning off the switch 230 of the "do not open my recommendation algorithm to others" configuration item. This switch is turned on by default on the recommendation platform; if the first user turns the option off, the switch is displayed in the style of control 240. After a user enables the model sharing option, other users on the recommendation platform can acquire that user's recommendation model for model training.
When the first user performs the model selection operation through a second user, the second user can be determined from the first user's friend list or follow list, or the first user can select, through the recommendation platform, users they pay more attention to as second users. If the first user wants to acquire the second user's recommendation model, the second user must have enabled the model sharing option on the recommendation platform; otherwise the first user cannot acquire that user's recommendation model. The first user may then perform the model selection operation through the second user, and when the first user selects the second user's recommendation model, the acquired model is used as the first recommendation model and is displayed.
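The sharing check can be sketched as follows; the dictionary-based lookup and function name are illustrative assumptions rather than the platform's actual interface.

```python
# Sketch: a second user's recommendation model can only be fetched as a first
# recommendation model if that user has enabled the model sharing option.
from typing import Dict, Optional


def get_shared_model(user_models: Dict[str, str],
                     sharing_enabled: Dict[str, bool],
                     second_user: str) -> Optional[str]:
    """Return the second user's model identifier only if sharing is enabled."""
    if not sharing_enabled.get(second_user, False):
        return None  # sharing is off by default, so the model is not exposed
    return user_models.get(second_user)


models = {"user_b": "algo_b"}
sharing = {"user_b": True}
print(get_shared_model(models, sharing, "user_b"))  # 'algo_b'
```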
Referring to fig. 3, fig. 3 is a schematic diagram illustrating a first user selecting a recommendation model for model training based on a second user, according to an exemplary embodiment. The first user may enter their own friend list through the personal center, select a user from the friend list as the second user, and acquire the second user's recommendation model, for example by clicking the "acquire recommendation algorithm" control 320, so that the second user's recommendation model is obtained and displayed. After the first user completes the model merging operation, the recommendation platform performs merge training in response to the user's merging completion instruction; at this time the user interaction interface displays a model training prompt 330 such as "recommendation algorithm model learning in progress, please wait". When the first recommendation model selected by the first user and the current recommendation model have been merged and trained, the interface displays an update completion prompt 340 indicating that the model update is complete.
In an exemplary embodiment of the disclosure, a model selection operation performed by the first user based on a model selection area is received, a model keyword corresponding to the model selection operation is determined, and at least one first recommendation model is selected from a model library according to the model keyword and displayed on the interactive interface.
The model selection area may be an operation area provided by the recommendation platform to the first user for model selection. Model keywords may be keywords employed for determining a recommendation model. The model library may be a database for storing recommendation models.
The interactive interface of the recommendation platform can also provide the first user with a model selection area, through which the first user can perform the model selection operation. Referring to fig. 4, fig. 4 is a schematic diagram illustrating a first user selecting a recommendation model for model training through the user interaction interface according to an exemplary embodiment. As shown in fig. 4, the first user may trigger display of the model selection area 420 through an "add model" control 410 in the user interaction interface, and may select one or more recommendation models directly from the model selection area 420; for example, "sports", "lovely pet", "cartoon" and "movie" recommendation models are displayed in the model selection area 420 for the first user to select.
In addition, the model selection area provides an input area, so the first user can also perform the model selection operation by entering text. When the first user's model selection operation is finished, the recommendation platform determines the model keywords in that operation; for example, the model keywords may include "lovely pet", "table tennis", "delicious food" and "rap singer". Area 430 displays the model keywords selected once the first user's model selection operation is finished; after the model keywords are determined, the recommendation platform displays the corresponding recommendation models selected from the model library on the interactive interface, so that the first user can choose one or more of them for the model merging operation. After the first user completes the model merging operation, the recommendation platform performs merge training in response to the first user's merging completion instruction, and a model training prompt 440 such as "recommendation algorithm model learning in progress, please wait" may be displayed in the user interface. Providing the model selection area further improves the first user's autonomy in selecting recommendation models and sense of control over them, while reducing the user's adaptation cost and churn rate.
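The keyword-to-model lookup can be sketched as follows; representing the model library as a plain dictionary from keyword to model identifiers is an assumption for illustration, not the platform's actual storage.

```python
# Sketch of selecting first recommendation models from a model library by keyword.
from typing import Dict, List


def select_models(model_library: Dict[str, List[str]], keywords: List[str]) -> List[str]:
    """Collect candidate recommendation models matching the selected model keywords."""
    selected: List[str] = []
    for keyword in keywords:
        for model_id in model_library.get(keyword, []):
            if model_id not in selected:
                selected.append(model_id)
    return selected


library = {
    "lovely pet": ["pet_model_v2"],
    "table tennis": ["sports_model_v1"],
    "delicious food": ["food_model_v3"],
}
print(select_models(library, ["lovely pet", "table tennis"]))
```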
In an exemplary embodiment of the present disclosure, a model weight of each recommendation model in the first merged model is determined, a content acquisition proportion of the first user for each recommendation model is determined according to each model weight, a corresponding number of content requests is allocated to each recommendation model according to the content acquisition proportions, and the target recommended content is determined according to the numbers of content requests.
The model weight may be the degree of importance of each recommendation model in the first merged model. The content acquisition proportion may be the proportion according to which the recommendation platform acquires recommended content from each recommendation model. The number of content requests may be the number of requests with which the recommendation platform pushes recommended content to the first user.
After the first merged model is determined, the model weight corresponding to each recommendation model in the first merged model can be further determined, so that the content acquisition proportion of each recommendation model can be determined for the first user according to the determined model weights. For example, when the first merged model includes four recommendation models, the model weights corresponding to the four models are determined separately; because the recommendation platform may contain a large number of recommendation models and the model weights may be determined from historical recommendation effect data, the sum of the four model weights may not equal 1. The model weights of the four recommendation models can then be converted to determine the content acquisition proportion of the first user for each model; for example, the determined content acquisition proportions may be 0.5, 0.3, 0.1 and 0.1, in which case, if the recommendation platform needs to push one hundred items of recommended content to the first user, the numbers of content requests corresponding to the four recommendation models are 50, 30, 10 and 10 respectively. As another example, when the model weights of the four recommendation models are equal, the number of content requests corresponding to each model is 25. Determining the final number of content requests from the model weights can further improve the accuracy of the determined recommended content.
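A minimal sketch of this conversion step, following the 0.5/0.3/0.1/0.1 example above; the largest-remainder rounding strategy is an assumption for illustration.

```python
# Sketch of turning model weights into content acquisition proportions and
# per-model content request counts.
from typing import Dict


def allocate_requests(model_weights: Dict[str, float], total_requests: int) -> Dict[str, int]:
    """Normalize weights into proportions, then split the request budget."""
    total_weight = sum(model_weights.values())
    proportions = {m: w / total_weight for m, w in model_weights.items()}
    counts = {m: int(p * total_requests) for m, p in proportions.items()}
    # Distribute any rounding remainder to the models with the largest fractional parts.
    remainder = total_requests - sum(counts.values())
    order = sorted(proportions, key=lambda m: proportions[m] * total_requests - counts[m], reverse=True)
    for m in order[:remainder]:
        counts[m] += 1
    return counts


weights = {"model_a": 2.5, "model_b": 1.5, "model_c": 0.5, "model_d": 0.5}
print(allocate_requests(weights, 100))  # {'model_a': 50, 'model_b': 30, 'model_c': 10, 'model_d': 10}
```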
In an exemplary embodiment of the present disclosure, historical recommendation data of each recommendation model in the first merged model is obtained, the historical recommendation data including a plurality of recommendation influence factors; a factor weight of each recommendation influence factor is determined from the historical recommendation data; and a weighted calculation is performed based on the plurality of factor weights to determine the model weight.
The historical recommendation data may be recommendation effect data generated when the recommendation model is used to recommend content. The recommendation influence factors may be the factors that affect the recommendation effect; for example, they may include the user's viewing duration for a video, interaction frequency, number of views, number of comments, number of likes, and so on. The factor weight may be the weight corresponding to each recommendation influence factor.
When calculating the model weight of a recommendation model, the historical recommendation data of each recommendation model in the first merged model can be acquired first; this data may include data for all recommendation influence factors. For example, corresponding recommended content can be determined according to each recommendation model, and as users view, comment on and forward that content, data such as viewing duration, interaction frequency, number of views, number of likes and number of comments is generated. When determining the model weight, the factor weight corresponding to each recommendation influence factor can be determined first; for example, viewing duration and number of views have a larger influence on the recommendation effect and can therefore be configured with larger weight values, while number of comments, number of likes, interaction frequency and the like have a smaller influence and can be configured with correspondingly smaller weight values. After the factor weight of each recommendation influence factor is determined, a weighted calculation can be performed over all recommendation influence factors and their corresponding factor weights, and the corresponding model weight is finally determined.
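For illustration, a sketch of this weighted calculation follows; the factor names, the specific factor weights and the assumption that factors are normalized to [0, 1] are all illustrative and not values prescribed by the disclosure.

```python
# Sketch of deriving a model weight from historical recommendation data as a
# weighted sum of recommendation influence factors.
from typing import Dict

# Larger weights for viewing duration and number of views, smaller ones for
# comments, likes and interaction frequency, as discussed above.
FACTOR_WEIGHTS: Dict[str, float] = {
    "watch_duration": 0.35,
    "view_count": 0.30,
    "like_count": 0.15,
    "comment_count": 0.10,
    "interaction_frequency": 0.10,
}


def model_weight(historical_data: Dict[str, float]) -> float:
    """Weighted sum of normalized recommendation influence factors."""
    return sum(FACTOR_WEIGHTS[f] * historical_data.get(f, 0.0) for f in FACTOR_WEIGHTS)


# Historical recommendation data for one merged model, normalized to [0, 1].
history = {"watch_duration": 0.8, "view_count": 0.6, "like_count": 0.4,
           "comment_count": 0.2, "interaction_frequency": 0.5}
print(round(model_weight(history), 3))  # 0.59
```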
In an exemplary embodiment of the present disclosure, a content tag of each recommendation model in the first merged model is determined, a model priority of each recommendation model is determined according to the content tag, and the model weight corresponding to each recommendation model is determined according to the model priority.
The content tag may be a tag of the recommended content corresponding to the recommendation model; for example, the content tag may be "game", "fitness", "movie", "fun", "variety show", etc. The model priority may be the priority used when content recommendation is performed with the recommendation model.
When determining the model weight of each recommendation model in the first merged model, the content tag of each recommendation model can be obtained. The recommendation platform can set a default priority for each content tag and then determine the model priority of each recommendation model according to its content tag; for example, the recommendation platform may set content tags such as "fitness", "movie" and "fun" to a higher priority. After the model priorities of the different recommendation models are determined, they can be ranked, and the model weight of each recommendation model is determined according to the ranking result. In addition, while the recommendation platform is recommending content, the first user can change their preference settings; for example, the first user can reset their preference to content such as variety shows and games, and the recommendation platform will change the model priority of the corresponding recommendation models and recommend content according to the changed priorities. The merged model continues to rank and filter the recommended content to improve its accuracy.
Further, when a user first uses a platform, the initial model weights of all recommendation models are the same; in this case the final model weight of each recommendation model can be determined according to the order of the default content-tag priorities in the system. The default tag priority may be determined by the recommendation platform based on the preferences of most users, or based on the order in which the user selected the recommendation models. For example, if there are three recommendation models in the first merged model whose tag priorities are high, medium and low respectively, their model weights can be configured as 50%, 30% and 20% respectively.
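The priority-to-weight mapping can be sketched as follows, matching the high/medium/low to 50%/30%/20% example above; the priority table and ranked weight list are illustrative assumptions.

```python
# Sketch of mapping content-tag priorities to model weights.
from typing import Dict, List

PRIORITY_BY_TAG: Dict[str, int] = {"fitness": 3, "movie": 3, "fun": 2, "game": 1, "variety": 1}
WEIGHT_BY_RANK: List[float] = [0.5, 0.3, 0.2]  # highest priority first


def weights_from_tags(tags: List[str]) -> Dict[str, float]:
    """Order models by their content-tag priority and assign ranked weights."""
    ranked = sorted(tags, key=lambda t: PRIORITY_BY_TAG.get(t, 0), reverse=True)
    return {tag: WEIGHT_BY_RANK[i] for i, tag in enumerate(ranked)}


print(weights_from_tags(["game", "movie", "fun"]))  # {'movie': 0.5, 'fun': 0.3, 'game': 0.2}
```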
In an exemplary embodiment of the present disclosure, an associated user associated with the first user is determined, an associated user group is generated based on the first user and the associated user, associated group recommended content corresponding to the associated user group is determined in response to a model training operation of any user in the associated user group, and the associated group recommended content is presented to each user in the associated user group.
The associated user may be another user who has a specific association relationship with the first user. The associated user group may be a set of users made up of multiple users who have such an association. The model training operation may be an operation in which any user selects recommendation models for merge training. The associated group recommended content may be the recommended content the recommendation platform pushes to the associated user group.
For the first user, an associated user who has a specific association relationship with the first user can be determined on the recommendation platform, and the first user and the associated user are treated as one group to generate a corresponding associated user group. If any user in the associated user group performs a model training operation, such as selecting one or more recommendation models for merge training, the associated group recommended content to be pushed to the group is determined according to that operation and is displayed to every user in the group. Referring to fig. 5, fig. 5 is an interface diagram illustrating a first user forming an associated user group with other users according to an exemplary embodiment. The first user may enter the friend interface through the "personal center" control 510, and the friend interface displays the first user's friend list 520. If the first user wants to take friend 1 as an associated user and form an associated user group together, the first user clicks the interaction button 521 corresponding to friend 1; the associated-friend operation interface 530 is then displayed with a prompt asking whether to add friend 1 as an associated user, for example "add as associated friend", and the first user can establish the association with friend 1 by clicking the "confirm" control 531, thereby generating the corresponding associated user group. All users in the associated user group then share the recommendation model; a prompt 540 such as "recommendation algorithm model learning in progress, please wait" may be displayed to the first user, and after merge training finishes, the recommendation platform pushes the same target recommended content to every user in the associated user group.
For example, in a content pushing scenario, if two accounts are bound as a couple's accounts to obtain recommended content, the two accounts are treated as users in the same associated user group. When either user in the group performs a model training operation, the corresponding recommended content is generated according to that operation and pushed to both accounts, so the recommended content of the two accounts always stays consistent. Providing the associated user group function can increase user interaction on the recommendation platform and effectively improve user stickiness.
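A minimal sketch of an associated user group sharing one model follows; the data structures and the content string are assumptions for illustration, not the disclosed implementation.

```python
# Sketch: a model training operation by any group member updates the shared model,
# and every member then receives the same associated-group recommended content.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AssociatedUserGroup:
    members: List[str]
    shared_models: List[str] = field(default_factory=list)

    def train(self, acting_user: str, selected_model: str) -> None:
        """Any member's model training operation updates the group's shared model."""
        assert acting_user in self.members
        if selected_model not in self.shared_models:
            self.shared_models.append(selected_model)

    def recommend(self) -> Dict[str, str]:
        """Every member gets the same associated-group recommended content."""
        content = f"content from {'+'.join(self.shared_models)}"
        return {user: content for user in self.members}


group = AssociatedUserGroup(members=["user_a", "user_b"])  # e.g. a couple's accounts
group.train("user_a", "music_model")
print(group.recommend())  # identical content for user_a and user_b
```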
In summary, the content recommendation method includes: receiving a model selection operation of a first user and selecting at least one first recommendation model for display; in response to a model merging instruction of the first user, merging at least one first recommendation model corresponding to the instruction into the current recommendation model to generate a first merged model; and, in response to a merging completion instruction of the first user, determining target recommended content for the first user based on the obtained first merged model. On the one hand, the user can autonomously select a recommendation model for model training, and this autonomous, rapid learning of the recommendation model improves the accuracy of recommended content distribution. On the other hand, the user is no longer unaware of the recommendation model: the user is given the ability to autonomously control how the recommendation algorithm model learns, which greatly improves the user's sense of control. Finally, because the user can train the recommendation algorithm quickly and on their own initiative, the adaptation cost of a cold-start user is greatly reduced, and the churn rate of the related users can be greatly reduced.
According to a second aspect of the embodiments of the present disclosure, a content recommendation system is provided. Referring to fig. 6, fig. 6 is a block diagram of a content recommendation system according to an exemplary embodiment. The content recommendation system 600 includes a user side 610, a model training side 620 and a content recommendation platform 630. Specifically:
The user side 610 is configured to provide a model operation interface, display target recommended content, and receive a model operation performed by a user based on the model operation interface, the model operation including a model selection operation, a model removal operation and a model sharing operation. The model training side 620 is configured to merge a first recommendation model determined based on the model selection operation into the current recommendation model to generate a first merged model, and to remove at least one recommendation model from the first merged model according to the model removal operation to generate a second merged model. The content recommendation platform 630 is configured to determine the target recommended content for the user according to the merged model and send the target recommended content to the user side.
The model operation interface may be the user interaction interface used to perform model-related operations. A model operation may be a user operation performed on a recommendation model. The model selection operation may be an operation in which the user selects one or more recommendation models. The model removal operation may be an operation by which the user removes one or more recommendation models from the merged model. The model sharing operation may be an operation by which a user shares the recommendation model they use with other users, so that those users can perform merge training based on the shared recommendation model.
The user side can provide the user with a model operation interface. Through this interface the user can perform a model sharing operation, enabling the model sharing option so that other users can acquire their recommendation algorithm model. The user can also perform a model selection operation through the interface to obtain other users' recommendation models and thus train their own recommendation model quickly and purposefully. When the user selects a first recommendation model through the model selection operation, the model training side merges it into the current recommendation model to generate the first merged model. After the first merged model is generated, the content recommendation platform determines the target recommended content for the user according to it. While receiving recommended content, if the user wants to reduce certain recommendations, the corresponding recommendation model can be removed through a model removal operation to obtain the second merged model, and the content recommendation platform then determines the target recommended content for the user according to the second merged model.
Fig. 7 is a block diagram of a content recommendation apparatus according to an exemplary embodiment. Referring to fig. 7, the content recommendation apparatus 700 includes a model presentation module 710, a model merging module 720 and a first content recommendation module 730. Specifically:
The model presentation module 710 is configured to receive a model selection operation of a first user and select at least one first recommendation model for presentation.
The model merging module 720 is configured to, in response to a model merging instruction of the first user, merge at least one first recommendation model corresponding to the model merging instruction into the current recommendation model to generate a first merged model.
The first content recommendation module 730 is configured to, in response to a merging completion instruction of the first user, determine target recommended content for the first user based on the obtained first merged model.
In an exemplary embodiment of the present disclosure, the content recommendation apparatus 700 further includes a second content recommendation module configured to, in response to a model removal instruction of the first user, remove a merged recommendation model from the first merged model to generate a second merged model, and, in response to a removal completion instruction of the first user, determine target recommended content for the first user based on the obtained second merged model.
In an exemplary embodiment of the present disclosure, the model presentation module 710 further includes a model presentation unit configured to receive the model selection operation performed by the first user based on the model selection area, determine a model keyword corresponding to the model selection operation, and select at least one first recommendation model from the model library for presentation on the interactive interface according to the model keyword.
In an exemplary embodiment of the present disclosure, the first content recommendation module 730 includes a first content recommendation unit configured to determine the model weight of each recommendation model in the first merged model, determine the content acquisition proportion of the first user for each recommendation model according to each model weight, and allocate a corresponding number of content requests to each recommendation model according to the content acquisition proportions, so as to determine the target recommended content according to the numbers of content requests.
In an exemplary embodiment of the present disclosure, the first content recommendation unit includes a first weight determination subunit configured to acquire historical recommendation data of each recommendation model in the first merged model, the historical recommendation data including a plurality of recommendation influence factors, determine a factor weight of each recommendation influence factor from the historical recommendation data, and perform a weighted calculation based on the plurality of factor weights to determine the model weight.
In an exemplary embodiment of the present disclosure, the first content recommendation unit includes a second weight determination subunit configured to determine the content tag of each recommendation model in the first merged model, determine the model priority of each recommendation model according to the content tag, and determine the model weight corresponding to each recommendation model according to the model priority.
In an exemplary embodiment of the present disclosure, the content recommendation apparatus 700 further includes a third content recommendation module configured to determine an associated user associated with the first user, generate an associated user group based on the first user and the associated user, determine, in response to a model training operation of any user in the associated user group, associated group recommended content corresponding to the associated user group, and present the associated group recommended content to each user in the associated user group.
The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be repeated here.
An electronic device 800 according to such an embodiment of the present disclosure is described below with reference to fig. 8. The electronic device 800 shown in fig. 8 is merely an example and should not be construed to limit the functionality and scope of use of embodiments of the present disclosure in any way.
As shown in fig. 8, the electronic device 800 is embodied in the form of a general-purpose computing device. The components of the electronic device 800 may include, but are not limited to: at least one processing unit 810, at least one storage unit 820, a bus 830 connecting the various system components (including the storage unit 820 and the processing unit 810), and a display unit 840.
Wherein the storage unit stores program code that is executable by the processing unit 810 such that the processing unit 810 performs steps according to various exemplary embodiments of the present disclosure described in the above section of the present specification.
Storage unit 820 may include readable media in the form of volatile storage units such as Random Access Memory (RAM) 821 and/or cache memory unit 822, and may further include Read Only Memory (ROM) 823.
The storage unit 820 may include a program/utility 824 having a set (at least one) of program modules 825, such program modules 825 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
Bus 830 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 800 may also communicate with one or more external devices 870 (e.g., keyboard, pointing device, bluetooth device, etc.), one or more devices that enable a user to interact with the electronic device 800, and/or any device (e.g., router, modem, etc.) that enables the electronic device 800 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 850. Also, electronic device 800 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 860. As shown, network adapter 860 communicates with other modules of electronic device 800 over bus 830. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 800, including, but not limited to, microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In an exemplary embodiment, a computer-readable storage medium is also provided, for example a memory, comprising instructions executable by a processor of an apparatus to perform the above method. Optionally, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising a computer program or instructions which, when executed by a processor, implement any one of the content recommendation methods described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.