CN119151627A - Product recommendation method, device and equipment based on facial expression of user
- Publication number
- CN119151627A (application number CN202410950271.7A)
- Authority
- CN
- China
- Prior art keywords
- recommended
- user
- expression
- commodity
- micro
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0631—Recommending goods or services
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q40/00—Finance; Insurance; Tax strategies; Processing of corporate or income taxes
- G06Q40/06—Asset management; Financial planning or analysis
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Business, Economics & Management (AREA)
- Finance (AREA)
- Accounting & Taxation (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Development Economics (AREA)
- Economics (AREA)
- General Business, Economics & Management (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Game Theory and Decision Science (AREA)
- Human Resources & Organizations (AREA)
- Operations Research (AREA)
- Entrepreneurship & Innovation (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Technology Law (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present disclosure relates to a product recommendation method, apparatus and device based on the facial expressions of a user. The method comprises: obtaining facial expression information of a user to be recommended; calculating the actual micro-expression of the user to be recommended from the facial expression information; determining the standard micro-expression corresponding to the actual micro-expression according to a pre-trained difference feature model; determining the recommended commodity set corresponding to the standard micro-expression according to the pre-trained standard micro-expression-commodity set mapping library of the user to be recommended; and recommending the commodities in the recommended commodity set to the user to be recommended. The embodiments thereby adapt to the differences between the facial expressions of different users and improve the universality and accuracy of product recommendation.
Description
Technical Field
The embodiment of the specification relates to the technical field of artificial intelligence, in particular to a product recommendation method, device and equipment based on facial expressions of users.
Background
With the development of financial technology, the service mode has shifted from the traditional face-to-face mode to a remote screen-to-screen mode, and observing a customer's real intentions and preferences through facial expressions can help a bank improve its service capability. However, since no two faces are alike, facial expressions differ greatly between users, so conventional product recommendation methods have poor universality and low accuracy.
How to improve the universality and the accuracy of product recommendation is a technical problem to be solved at present.
Disclosure of Invention
To address the poor universality and low accuracy of prior-art product recommendation methods, the embodiments of this specification provide a product recommendation method, device and equipment based on the facial expressions of users. A standard micro-expression-commodity set mapping library is trained for each user, together with a difference feature model relating the user's actual micro-expressions to standard micro-expressions. The standard micro-expression corresponding to the actual micro-expression of a user to be recommended is determined according to that user's difference feature model, the commodity set corresponding to the standard micro-expression is determined from the standard micro-expression-commodity set mapping library, and the commodities in that set are recommended to the user.
The specific technical scheme of the embodiment of the specification is as follows:
in one aspect, an embodiment of the present disclosure provides a product recommendation method based on a facial expression of a user, including:
Acquiring facial expression information of a user to be recommended;
Calculating the actual micro-expression of the user to be recommended according to the facial expression information;
Determining a standard micro-expression corresponding to the actual micro-expression according to a pre-trained difference feature model;
Determining a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
and recommending the commodities in the recommended commodity set to the user to be recommended.
Further, the step of training the difference feature model includes:
Acquiring a plurality of facial expression information of a plurality of training users, and respectively calculating a plurality of first training micro-expressions corresponding to the plurality of facial expression information of each training user and a plurality of emotion information corresponding to the plurality of facial expression information;
Respectively determining at least one standard micro-expression corresponding to each emotion information of each training user according to a pre-constructed standard micro-expression-emotion library, wherein the standard micro-expression-emotion library comprises the corresponding relation between a plurality of standard micro-expressions and emotion information;
respectively calculating the difference between each standard micro-expression and the corresponding first training micro-expression;
and constructing the difference characteristic model according to a plurality of differences of each training user.
Further, separately calculating the difference between each standard microexpressive and the corresponding first training microexpressive further comprises:
vectorizing the first training micro-expression to obtain a first feature vector;
Vectorizing the standard micro expression to obtain a second feature vector;
And calculating difference features between the first feature vector and the second feature vector to obtain differences between the corresponding standard micro-expressions and the first training micro-expressions.
Further, the difference feature between the first feature vector and the second feature vector is calculated as:

DF = DF_1 - DF_2

wherein DF represents the difference feature, DF_1 represents the first feature vector, and DF_2 represents the second feature vector.
Further, constructing the variance feature model from the plurality of variances for each training user further comprises:
calculating a linear relation between a first training microexpressive expression and a standard microexpressive expression of each training user according to a plurality of difference feature vectors of each training user;
and taking the linear relation as the difference characteristic model.
Further, training the standard micro-expression-commodity set mapping library of the user to be recommended further comprises:
constructing an initial standard microexpressive-commodity set mapping library of the user to be recommended;
Acquiring facial expression information of the user to be recommended, and calculating a second training microexpressions of the user to be recommended;
Determining a standard microexpressive expression corresponding to a second training microexpressive expression of the user to be recommended according to a pre-trained difference characteristic model;
Determining a predicted recommended commodity set corresponding to the standard microexpressions of the user to be recommended according to the initial standard microexpressions-commodity set mapping library, and recommending the predicted recommended commodity set to the user to be recommended;
Acquiring operation information of the to-be-recommended user on the predicted recommended commodity set, and calculating a trend intention evaluation value according to the operation information;
judging whether the trend intention evaluation value exceeds a first threshold value;
If not, the correspondence between the standard micro-expressions and the commodity sets in the initial standard micro-expression-commodity set mapping library of the user to be recommended is adjusted, and the steps of acquiring the facial expression information of the user to be recommended and calculating the second training micro-expression of the user to be recommended are repeated;
if yes, obtaining a trained standard microexpressive-commodity set mapping library.
Further, the operation information includes at least:
commodity interest behavior data and commodity transaction conversion data.
Further, calculating the trend intention evaluation value from the operation information further includes:
Calculating the commodity interest behavior data to obtain a first estimated value;
calculating the commodity transaction conversion data to obtain a second estimated value;
performing weighted calculation on the first estimated value and the second estimated value to obtain a data plane estimated value;

Based on the commodity interest behavior data and the commodity transaction conversion data, clustering the predicted recommended commodity set using a clustering algorithm, and calculating an information plane estimated value according to the clustering result;

according to the regular preference of the user to be recommended for each commodity in the predicted recommended commodity set, calculating a knowledge plane estimated value through knowledge induction;

And performing weighted calculation on the data plane estimated value, the information plane estimated value and the knowledge plane estimated value to obtain the trend intention evaluation value.
Further, the commodity interest behavior data at least comprises commodity clicking times and commodity browsing time lengths.
Further, calculating the commodity interest behavior data to obtain a first estimated value further includes:
and carrying out weighted calculation on the commodity clicking times and the commodity browsing time length to obtain the first estimated value.
Further, the commodity transaction conversion data at least comprises commodity transaction quantity, secondary transaction quantity and negative evaluation quantity.
Further, calculating the commodity transaction conversion data to obtain a second valuation further includes:
Calculating the sum of the commodity transaction amount and the secondary transaction amount to obtain the total commodity transaction amount;
And calculating the ratio of the total commodity transaction amount to the negative evaluation amount to obtain the second evaluation value.
Further, if the trend intention evaluation value exceeds the first threshold, the method further includes:
judging whether the trend intention evaluation value exceeds a second threshold value;
if yes, replacing the standard micro-expression corresponding to the predicted recommended commodity set in the initial standard micro-expression-commodity set mapping library with the second training micro-expression.
Further, if the trend intention evaluation value exceeds the second threshold, the method further comprises:
taking a second training microexpressions corresponding to the trend intention evaluation values exceeding the second threshold value as reference microexpressions;
Weighting and calculating a plurality of reference microexpressions corresponding to the same standard microexpressions according to the corresponding trend intention evaluation values to obtain updated microexpressions;
and replacing the standard micro-expression with the updated micro-expression.
Further, after determining the recommended commodity set corresponding to the standard micro-expression according to the pre-trained standard micro-expression-commodity set mapping library of the user to be recommended, the method further comprises:
Removing commodities which are not matched with the user information of the user to be recommended from the recommended commodity set;
Recommending the commodities in the recommended commodity set to the user to be recommended further comprises:
And recommending the rest commodities in the recommended commodity set to the user to be recommended.
On the other hand, the embodiment of the specification also provides a product recommendation device based on the facial expression of the user, wherein the device comprises:
a facial expression information acquisition unit for acquiring facial expression information of a user to be recommended;
An actual micro-expression calculating unit, configured to calculate an actual micro-expression of the user to be recommended according to the facial expression information;
the standard microexpressive determination unit is used for determining the standard microexpressive corresponding to the actual microexpressive according to the pre-trained difference characteristic model;
The recommended commodity set determining unit is used for determining a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
And the commodity recommending unit is used for recommending the commodities in the recommended commodity set to the user to be recommended.
In another aspect, embodiments of the present disclosure further provide a computer device, including a memory, a processor, and a computer program stored on the memory, where the processor implements the method described above when executing the computer program.
In another aspect, the present description embodiment also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method.
Finally, the present description embodiment also provides a computer program product comprising a computer program which, when executed by a processor, implements the above-mentioned method.
According to the embodiments of this specification, a standard micro-expression-commodity set mapping library is trained for each user, together with a difference feature model relating the user's actual micro-expressions to standard micro-expressions. The standard micro-expression corresponding to the actual micro-expression of the user to be recommended is determined according to that user's difference feature model, the commodity set corresponding to the standard micro-expression is determined from the standard micro-expression-commodity set mapping library, and the commodities in the commodity set are recommended to the user to be recommended. This adapts to the differences between the facial expressions of different users and improves the universality and accuracy of product recommendation.
Drawings
In order to more clearly illustrate the embodiments of the present description or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present description, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an implementation system of a product recommendation method based on facial expressions of a user according to an embodiment of the present disclosure;
Fig. 2 is a schematic flow chart of a product recommendation method based on facial expressions of a user according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of training the difference feature model according to the embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating the calculation of the difference between each standard micro-expression and the corresponding first training micro-expression according to the embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of constructing the difference feature model according to a plurality of differences of each training user in the embodiment of the present disclosure;
FIG. 6 is a flowchart of training the standard micro-expression-commodity set mapping library of the user to be recommended according to the embodiment of the present disclosure;
Fig. 7 is a flowchart showing a process of calculating a trend intention evaluation value based on the operation information in the embodiment of the present specification;
FIG. 8 is a schematic flow chart of calculating the commodity transaction conversion data to obtain a second valuation according to the embodiment of the present disclosure;
FIG. 9 is a flowchart of updating a standard micro-expression according to an embodiment of the present disclosure;
FIG. 10 is a flowchart of updating a standard micro-expression according to another embodiment of the present disclosure;
Fig. 11 is a schematic flow chart of removing the merchandise not matched with the user information of the user to be recommended in the embodiment of the present disclosure;
FIG. 12 is a schematic diagram showing a product recommendation device based on facial expressions of a user according to an embodiment of the present disclosure;
Fig. 13 is a schematic diagram showing the structure of a computer device in the embodiment of the present specification.
[ Reference numerals description ]:
101. A terminal;
102. a server;
1201. a facial expression information acquisition unit;
1202. an actual microexpressive calculation unit;
1203. a standard microexpressive determination unit;
1204. a recommended commodity set determining unit;
1205. A commodity recommendation unit;
1302. A computer device;
1304. a processing device;
1306. a storage resource;
1308. a drive system;
1310. An input/output module;
1312. an input device;
1314. an output device;
1316. a presentation device;
1318. a graphical user interface;
1320. A network interface;
1322. A communication link;
1324. A communication bus.
Detailed Description
The technical solutions of the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is apparent that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the embodiments of the present disclosure, are intended to be within the scope of the embodiments of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and the claims of the embodiments of the present specification and the above-described drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present description described herein may be capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, apparatus, article, or device that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or device.
It should be noted that, in the technical solution of the embodiments of the present disclosure, the acquiring, storing, using, processing, etc. of data all conform to relevant regulations of national laws and regulations.
It should be noted that, in the embodiments of the present disclosure, some existing solutions in the industry such as software, components, models, etc. may be mentioned, and they should be considered as exemplary, only for illustrating the feasibility of implementing the technical solution of the present disclosure, but it does not mean that the applicant has or must not use the solution.
Fig. 1 is a schematic diagram of an implementation system of a product recommendation method based on facial expressions of a user in an embodiment of the present disclosure, including a terminal 101 and a server 102. The terminal 101 and the server 102 may communicate over a network, which may include a local area network (Local Area Network, abbreviated as LAN), a wide area network (Wide Area Network, abbreviated as WAN), the internet, or a combination thereof, and connect to websites, user devices (e.g., computing devices), and backend systems.
The terminal 101 collects facial expression information of the user and transmits the facial expression information to the server 102, the server 102 calculates the facial expression information by using the stored pre-trained difference feature model and the standard micro-expression-commodity set mapping library to obtain a recommended commodity set, and the terminal 101 recommends commodities in the recommended commodity set to the user.
Alternatively, the servers 102 may be nodes of a cloud computing system (not shown), or each server may be a separate cloud computing system, including multiple computers interconnected by a network and operating as a distributed processing system.
In addition, it should be noted that, fig. 1 is only one application environment provided in the embodiment of the present disclosure, and in practical application, other application environments may also be included, which is not limited in the present disclosure.
To address the problems in the prior art, the embodiments of this specification provide a product recommendation method based on the facial expressions of users. A standard micro-expression-commodity set mapping library is trained for each user, together with a difference feature model relating the user's actual micro-expressions to standard micro-expressions; the standard micro-expression corresponding to the actual micro-expression of the user to be recommended is determined according to that user's difference feature model, the commodity set corresponding to the standard micro-expression is determined from the standard micro-expression-commodity set mapping library, and the commodities in the commodity set are recommended to the user to be recommended. Fig. 2 is a flowchart illustrating a product recommendation method based on facial expressions of a user according to an embodiment of the present disclosure, describing the process of recommending commodities to a user according to the user's facial expression. The order of the steps recited in the embodiments is merely one way of executing them and does not represent the only order of execution; when an actual system or apparatus product executes, the steps may be executed sequentially or in parallel according to the methods shown in the embodiments or drawings.
As shown in fig. 2, the method may include:
step 201, obtaining facial expression information of a user to be recommended;
step 202, calculating the actual micro-expression of the user to be recommended according to the facial expression information;
step 203, determining a standard microexpressive corresponding to the actual microexpressive according to a pre-trained difference feature model;
Step 204, determining a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
and step 205, recommending the commodities in the recommended commodity set to the user to be recommended.
According to the embodiments of this specification, a standard micro-expression-commodity set mapping library is trained for each user, together with a difference feature model relating the user's actual micro-expressions to standard micro-expressions. The standard micro-expression corresponding to the actual micro-expression of the user to be recommended is determined according to that user's difference feature model, the commodity set corresponding to the standard micro-expression is determined from the standard micro-expression-commodity set mapping library, and the commodities in the commodity set are recommended to the user to be recommended. This adapts to the differences between the facial expressions of different users and improves the universality and accuracy of product recommendation.
In this embodiment of the present disclosure, the facial expression information may be information in video form. When a user browses commodities through a mobile banking app, the app, after obtaining the user's authorization, captures a facial video of the user through the phone's camera and sends it to the bank server. The bank server calculates the user's actual micro-expression from the facial video; the actual micro-expression may be vectorized data, that is, the facial video is vectorized to obtain the actual micro-expression.
Because no two faces are alike, the actual micro-expressions obtained by vectorization differ greatly between users. To be able to recommend commodities to all users according to their actual micro-expressions, the embodiment of the specification determines the standard micro-expression corresponding to the actual micro-expression through a pre-trained difference feature model, which can determine the standard micro-expression corresponding to the actual micro-expression of any user.
The recommended commodity set corresponding to the standard micro-expression is then determined according to the pre-trained standard micro-expression-commodity set mapping library of the user to be recommended. Each user has a standard micro-expression-commodity set mapping library containing the correspondences between a plurality of standard micro-expressions and a plurality of commodity sets. The commodity sets may be of different types or grades, such as wealth-management commodity sets, fund commodity sets, demand wealth-management commodity sets, fixed-term wealth-management commodity sets and the like, which the embodiments of this specification do not enumerate further.
The standard micro-expressions are applicable to all users, and the trained difference feature model is used to determine the standard micro-expression corresponding to each user's actual micro-expression.
Finally, the commodities in the recommended commodity set are recommended to the user; for example, when the user browses commodities through the mobile banking app, the next commodity shown is taken from the recommended commodity set.
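Illustratively, the flow of steps 201 to 205 can be sketched in Python as follows. This is a minimal sketch rather than the disclosed implementation: the difference feature model is assumed to be a callable mapping an actual micro-expression vector to a standard one, and the mapping library is assumed to be a dict keyed by standard micro-expression tuples; all names are hypothetical.

```python
import numpy as np

def recommend(actual_micro_expression, difference_model, mapping_library):
    """Sketch of steps 202-205; helper semantics are assumptions."""
    actual = np.asarray(actual_micro_expression, dtype=float)
    # Step 203: map the actual micro-expression to a standard micro-expression.
    standard = difference_model(actual)
    # Step 204: pick the library entry whose standard micro-expression is
    # nearest (Euclidean) to the mapped one and take its commodity set.
    key = min(mapping_library,
              key=lambda k: np.linalg.norm(np.asarray(k) - standard))
    # Step 205: the commodities in this set are recommended to the user.
    return mapping_library[key]
```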
According to one embodiment of the present disclosure, as shown in fig. 3, the step of training the difference feature model includes:
Step 301, acquiring a plurality of facial expression information of a plurality of training users and respectively calculating a plurality of first training micro-expressions corresponding to the plurality of facial expression information of each training user and a plurality of emotion information corresponding to the plurality of facial expression information;
Step 302, respectively determining at least one standard micro-expression corresponding to each emotion information of each training user according to a pre-constructed standard micro-expression-emotion library, wherein the standard micro-expression-emotion library comprises corresponding relations between a plurality of standard micro-expressions and emotion information;
Step 303, respectively calculating the difference between each standard micro-expression and the corresponding first training micro-expression;
Step 304, constructing the difference feature model according to a plurality of differences of each training user.
In the embodiment of the present specification, facial expression information of a user may be identified by an existing emotion identification method, so as to obtain corresponding emotion information, where the emotion information may be happy or angry, and so on.
The pre-constructed standard micro-expression-emotion library comprises the correspondences between a plurality of standard micro-expressions and emotions. A plurality of standard micro-expressions and the emotion information corresponding to each of them can be defined in advance according to expert experience, and one emotion information may correspond to several standard micro-expressions; finally, these correspondences are stored in the standard micro-expression-emotion library.
In the embodiment of the present disclosure, after emotion information corresponding to facial expression information of a training user is determined, a plurality of standard micro-expressions corresponding to the emotion information are determined from a standard micro-expression-emotion library.
And respectively calculating the difference between each standard micro-expression and the first training micro-expression, and constructing a difference characteristic model according to the difference.
In this embodiment of the present disclosure, as shown in fig. 4, calculating the difference between each standard micro-expression and the corresponding first training micro-expression, respectively, further includes:
Step 401, vectorizing the first training micro-expression to obtain a first feature vector;
step 402, vectorizing the standard micro-expression to obtain a second feature vector;
And step 403, calculating difference features between the first feature vector and the second feature vector to obtain differences between the corresponding standard micro-expressions and the first training micro-expressions.
In this embodiment of the present disclosure, the first training micro-expression may be vectorized by the optical flow method, the Histogram of Oriented Optical Flow (HOOF) and the main direction average optical flow (MDMO) to obtain a micro-expression feature set, and the first feature vector corresponding to the micro-expression is obtained through feature fusion.
Illustratively:
The optical flow method uses the temporal characteristics of the image sequence to calculate the correlation between pixels in the current frame and the target frame, so as to express the correspondence between frames of the micro-expression sequence. For the pixel value F(x) of point x in the k-th frame and the pixel value G(x) of point x in the (k+1)-th frame, there exists a certain parallax h:

F(x+h) = G(x)

Repeating this process yields a series of Newton iterations under which h converges to an optimal value; starting from h_0 = 0, the iterate h_(k+1) can be expressed as:

h_(k+1) = h_k + [Σ_x F'(x+h_k) (G(x) - F(x+h_k))] / [Σ_x (F'(x+h_k))^2]

where h_k denotes the parallax at the k-th iteration, and F'(x+h_k) can be expressed as the finite difference:

F'(x+h_k) ≈ F(x+h_k+1) - F(x+h_k)

where the function F represents the pixel value.

The optical flow features UIMES_21 for frames 1 to K can be expressed as:

UIMES_21 = [h_x^(1), h_x^(2), ..., h_x^(K)]

where h_x^(k) represents the parallax of the point x in the k-th frame.
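Illustratively, the Newton iteration above can be sketched for two one-dimensional pixel rows as follows; this follows the classic Lucas-Kanade registration scheme assumed in the reconstruction above and estimates a single global parallax h.

```python
import numpy as np

def register_parallax(F, G, iters=20):
    """Estimate h with F(x + h) = G(x) by Newton iteration from h_0 = 0."""
    F, G = np.asarray(F, dtype=float), np.asarray(G, dtype=float)
    x = np.arange(len(F), dtype=float)
    dF = np.gradient(F)                  # F'(x) by finite differences
    h = 0.0                              # h_0 = 0
    for _ in range(iters):
        Fs = np.interp(x + h, x, F)      # F(x + h_k)
        dFs = np.interp(x + h, x, dF)    # F'(x + h_k)
        denom = np.sum(dFs ** 2)
        if denom == 0.0:
            break
        h += np.sum(dFs * (G - Fs)) / denom   # Newton update of the parallax
    return h
```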
The Histogram of Oriented Optical Flow (HOOF) eliminates the influence of horizontal motion on the optical flow by redefining the optical flow direction, and eliminates the influence of the lens distance on optical flow extraction. For the optical flow vector h between the pixel value F(x) of point x in the k-th frame and the pixel value G(x) of point x in the (k+1)-th frame, HOOF is calculated by classifying the optical flow vector h by direction and weighting it by the size of the vector:

h = [x, y]^T

wherein x and y represent the abscissa and ordinate, respectively, of the pixel in the k-th frame. The included angle θ between the optical flow vector and the horizontal axis is then calculated:

θ = tan^(-1)(y/x)

The included angle θ is binned, falling into the b-th bin when:

-π/2 + π(b-1)/B ≤ θ < -π/2 + πb/B

where B represents the total number of bins and b indicates that the angle θ falls within the b-th bin. Its magnitude sqrt(x^2 + y^2) is added to the b-th bin of the histogram, where 1 ≤ b ≤ B, and finally the histogram is normalized. The optical flow vector histogram of each frame at time t can be expressed as:

UIMES_22 = [h_(t;1), h_(t;2), h_(t;3), ..., h_(t;B)]^T
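Illustratively, the HOOF of a single frame can be sketched as follows; the folding of angles into [-π/2, π/2] reflects the standard HOOF convention assumed in the bin formula above, and the bin count B is a free parameter.

```python
import numpy as np

def hoof(flow, B=8):
    """HOOF for one frame; flow is an (N, 2) array of vectors [x, y]."""
    flow = np.asarray(flow, dtype=float)
    x, y = flow[:, 0], flow[:, 1]
    theta = np.arctan2(y, x)              # angle with the horizontal axis
    mag = np.sqrt(x ** 2 + y ** 2)        # contribution weight of each vector
    # Fold angles so the histogram is symmetric about the vertical axis.
    theta = np.where(theta > np.pi / 2, np.pi - theta, theta)
    theta = np.where(theta <= -np.pi / 2, -np.pi - theta, theta)
    edges = np.linspace(-np.pi / 2, np.pi / 2, B + 1)   # b-th bin boundaries
    hist, _ = np.histogram(theta, bins=edges, weights=mag)
    return hist / max(hist.sum(), 1e-12)  # normalized histogram
```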
The main direction average optical flow (MDMO) divides the human face into 36 regions of interest, and the main direction average optical flow feature of each region is extracted from the face image sequence. The main direction average optical flow of region i in the k-th frame can be written as:

ū_i^k = (1/|B_max|) Σ_(p∈B_max) h(p)

wherein B_max is the bin with the largest number of direction vectors in the statistical histogram, and h(p) represents the directional optical flow vector at position p.
It should be noted that the optical flow method, the Histogram of Oriented Optical Flow (HOOF) and the main direction average optical flow (MDMO) are all common knowledge in the art, and the embodiments of the present disclosure are only illustrative.
After Principal Component Analysis (PCA) is performed on the features in the micro-expression feature set UIMES_2 = {UIMES_21, UIMES_22, UIMES_23}, the first feature vector DF_1 is obtained, wherein UIMES_21, UIMES_22 and UIMES_23 are the features obtained by the optical flow method, the Histogram of Oriented Optical Flow and the main direction average optical flow respectively:

DF_1 = PCA(UIMES_2)
The standard micro-expression is vectorized with the same vectorization method to obtain the second feature vector DF_2.
The difference feature between the first feature vector and the second feature vector is then calculated:

DF = DF_1 - DF_2

wherein DF represents the difference feature, DF_1 represents the first feature vector, and DF_2 represents the second feature vector.
Because each emotion information in the standard micro-expression-emotion library corresponds to a plurality of standard micro-expressions, one training user corresponds to a plurality of first training micro-expressions. Calculating the difference feature vector DF between the first feature vector DF_1 of one first training micro-expression and the second feature vectors DF_2 of the several standard micro-expressions yields a plurality of difference feature vectors DF, and the difference feature vectors DF of the several training users under one emotion information form a difference feature vector set F_i, where i denotes the i-th emotion information. The sets corresponding to all emotions in the standard micro-expression-emotion library form the difference feature vector set F = [F_1, F_2, F_3, ..., F_n], where n denotes the total number of emotion information.
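Illustratively, the difference feature vector sets F_1, ..., F_n described above can be assembled as follows, assuming the subtraction form DF = DF_1 - DF_2 reconstructed earlier; the per-emotion list layout of the inputs is an assumption made for illustration.

```python
import numpy as np

def build_difference_sets(firsts_by_emotion, standards_by_emotion):
    """F = [F_1, ..., F_n]: one set of difference feature vectors per emotion."""
    F = []
    for firsts, standards in zip(firsts_by_emotion, standards_by_emotion):
        # Every (first training, standard) pair under this emotion information.
        F_i = [np.asarray(df1, dtype=float) - np.asarray(df2, dtype=float)
               for df1 in firsts for df2 in standards]
        F.append(F_i)
    return F
```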
According to one embodiment of the present disclosure, as shown in fig. 5, constructing the variance feature model according to the plurality of variances of each training user further includes:
step 501, calculating a linear relation between a first training micro-expression and a standard micro-expression of each training user according to a plurality of difference feature vectors of each training user;
and step 502, taking the linear relation as the difference characteristic model.
In the embodiment of the present disclosure, there is a difference between a user's actual micro-expression and the standard micro-expression. After all differences are calculated (that is, the difference feature vectors DF between each standard micro-expression and the first training micro-expressions of the several users under all emotion information), the linear relationship between the first training micro-expressions and the standard micro-expressions can be fitted from the difference feature vectors, and this linear relationship serves as the difference feature model.
When commodities are recommended to the user to be recommended, this linear relation is used to calculate the standard micro-expression corresponding to the user's actual micro-expression.
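Illustratively, one plausible way to fit such a linear relation is ordinary least squares over the training pairs, sketched below; the disclosure does not specify the fitting procedure, so both the bias column and the solver are assumptions. The returned callable plays the role of the difference feature model in the recommendation sketch above.

```python
import numpy as np

def fit_difference_model(first_vecs, standard_vecs):
    """Fit standard = [first, 1] @ W by least squares; first_vecs and
    standard_vecs are (m, d) arrays of paired micro-expression vectors."""
    X = np.hstack([first_vecs, np.ones((len(first_vecs), 1))])  # bias column
    W, *_ = np.linalg.lstsq(X, standard_vecs, rcond=None)

    def to_standard(actual):
        # Apply the fitted linear relation to an actual micro-expression.
        return np.append(np.asarray(actual, dtype=float), 1.0) @ W
    return to_standard
```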
According to one embodiment of the present disclosure, as shown in fig. 6, training the standard micro-expression-commodity set mapping library of the user to be recommended further includes:
Step 601, constructing an initial standard micro expression-commodity set mapping library of the user to be recommended;
Step 602, obtaining facial expression information of the user to be recommended, and calculating a second training microexpressions of the user to be recommended;
step 603, determining a standard microexpressive expression corresponding to a second training microexpressive expression of the user to be recommended according to a pre-trained difference feature model;
Step 604, determining a predicted recommended commodity set corresponding to the standard micro-expression of the user to be recommended according to the initial standard micro-expression-commodity set mapping library, and recommending the predicted recommended commodity set to the user to be recommended;
step 605, acquiring operation information of the user to be recommended on the predicted recommended commodity set, and calculating a trend intention evaluation value according to the operation information;
Step 606, judging whether the trend intention evaluation value exceeds a first threshold value;
Step 607, if not, the correspondence between the standard micro-expressions and the commodity sets in the initial standard micro-expression-commodity set mapping library of the user to be recommended is adjusted, and the steps of acquiring the facial expression information of the user to be recommended and calculating the second training micro-expression are repeated;
Step 608, if yes, the trained standard micro-expression-commodity set mapping library is obtained.
In the present embodiment, the initial standard micro-expression-commodity set mapping library may be constructed empirically, and the initial library may be the same for all users.
Then, the method of fig. 5 in the embodiment of the present disclosure is adopted to determine the standard micro-expression corresponding to the second training micro-expression of the user, and then, the predicted recommended commodity set corresponding to the standard micro-expression is determined according to the initial standard micro-expression-commodity set mapping library.
Because the predicted recommended commodity set may not meet the user's needs, after the commodities in the predicted recommended commodity set are recommended to the user, the user's operation information on those commodities is acquired, and the trend intention evaluation value is calculated from the operation information.
And judging whether the trend intention evaluation value exceeds a first threshold value, if so, indicating that the predicted recommended commodity set meets the requirement of a user, and finishing the standard microexpressive-commodity set mapping training of the user.
If not, the predicted recommended commodity set does not meet the user's needs, and the correspondence between the user's standard micro-expressions and commodity sets must be adjusted. Facial expression information of the user is repeatedly acquired for iterative training until the trend intention evaluation values of the second training micro-expressions under all emotion information exceed the first threshold, which yields the trained standard micro-expression-commodity set mapping library.
Illustratively, the correspondence between the standard micro-expression and the commodity set in the standard micro-expression-commodity set mapping library may be replaced in sequence, which is not limited in the embodiments of the present specification.
It should be noted that the first threshold may be set empirically, which is not described in the embodiment of the present disclosure.
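Illustratively, the loop of steps 601 to 608 can be sketched as follows, with the environment injected as callables since the disclosure fixes no API; every helper name here (capture, vectorize, recommend_and_observe, trend_intention, library.lookup, library.adjust) is hypothetical.

```python
def train_mapping_library(library, capture, vectorize, difference_model,
                          recommend_and_observe, trend_intention,
                          first_threshold, max_rounds=100):
    """Adjust the user's mapping library until PEV exceeds the first threshold."""
    for _ in range(max_rounds):
        second = vectorize(capture())              # step 602: second training micro-expression
        standard = difference_model(second)        # step 603
        goods = library.lookup(standard)           # step 604: predicted recommended set
        ops = recommend_and_observe(goods)         # step 605: user operation information
        if trend_intention(ops) > first_threshold: # step 606
            return library                         # step 608: trained mapping library
        library.adjust(standard)                   # step 607: remap and repeat
    return library
```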
In the embodiment of the specification, the operation information at least comprises commodity interest behavior data and commodity transaction conversion data.
As shown in fig. 7, calculating the trend intention evaluation value from the operation information further includes:
step 701, calculating the commodity interest behavior data to obtain a first estimated value;
step 702, calculating the commodity transaction conversion data to obtain a second estimated value;
step 703, performing weighted calculation on the first estimated value and the second estimated value to obtain a data plane estimated value;
Step 704, based on the commodity interest behavior data and the commodity transaction conversion data, clustering the predicted recommended commodity set using a clustering algorithm, and calculating an information plane estimated value according to the clustering result;

step 705, calculating a knowledge plane estimated value through knowledge induction according to the regular preference of the user to be recommended for each commodity in the predicted recommended commodity set;

Step 706, performing weighted calculation on the data plane estimated value, the information plane estimated value and the knowledge plane estimated value to obtain the trend intention evaluation value.
In this embodiment of the present disclosure, the commodity interest behavior data includes at least a commodity click number and a commodity browsing duration.
Calculating the commodity interest behavior data to obtain a first estimated value further comprises:
and carrying out weighted calculation on the commodity clicking times and the commodity browsing time length to obtain the first estimated value.
The calculation formula is as follows:
PEV_1=α*CN+β*ST
wherein PEV_1 represents the first estimated value, α and β represent weights, CN represents the number of commodity clicks, and ST represents the commodity browsing duration.
The commodity transaction conversion data at least comprises commodity transaction quantity, secondary transaction quantity and negative evaluation quantity.
As shown in fig. 8, calculating the commodity transaction conversion data to obtain a second valuation further includes:
Step 801, calculating the sum of the commodity transaction amount and the secondary transaction amount to obtain the total commodity transaction amount;
step 802, calculating the ratio of the total commodity transaction amount to the negative evaluation amount to obtain the second evaluation value.
The calculation formula is as follows:

PEV_2 = (TV + ATV) / NEN

where PEV_2 represents the second estimated value, TV represents the commodity transaction amount, ATV represents the secondary transaction amount, and NEN represents the negative evaluation amount.
Specifically, the first estimation value and the second estimation value are weighted and calculated, and the formula for obtaining the estimation value of the data surface is as follows:
PEV_DATA=a1*PEV_1+a2*PEV_2
where pev_data represents the DATA plane estimate and a1 and a2 represent weights.
In this embodiment of the present disclosure, based on the commodity interest behavior data and commodity transaction conversion data, clustering the predicted recommended commodity set using a clustering algorithm, and calculating the information plane estimate according to the clustering result includes:
Categories such as "interesting", "needed", "satisfactory", "purchasing", "known", "offensive", "superfluous" and "quality problem" are constructed, and the product intention categories are classified with a clustering algorithm based on the user behavior information and the product transaction information.

k categories belonging to the category set K are selected, for example k = 5.

The distances dis from all nodes on the information map I to the k categories are then calculated, where i represents all attributes of node i (for example, the number of clicks) and k represents all attributes defining category k (for example, a click rate exceeding 3 indicating "interesting").
Then, the information entropy of the event that the product intention category accurately matches the user intention is calculated:

H = -Σ_(i=1)^n p(x_i) log p(x_i)

wherein H represents the entropy value, n represents the total number of nodes of the information map, and p(x_i) represents the probability that node x_i of the information map represents the event.
Then, the information plane estimated value is calculated by the formula PEV_Information = δ × H_min, wherein PEV_Information represents the information plane estimated value, δ represents the adjustment coefficient, and H_min represents the minimum entropy value.
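Illustratively, the entropy computation and the information plane estimated value can be sketched as follows; how the per-node probabilities p(x_i) are derived from the clustering result is not fixed by the disclosure, so they are taken here as inputs, one probability set per candidate clustering.

```python
import numpy as np

def entropy(p):
    """H = -sum p(x_i) * log p(x_i), ignoring zero-probability nodes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def information_plane_estimate(prob_sets, delta):
    """PEV_Information = delta * H_min over the candidate clusterings."""
    return delta * min(entropy(p) for p in prob_sets)
```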
Then, the knowledge plane estimated value PEV_Knowledge is calculated through knowledge induction according to the regular preference of the user to be recommended for each commodity in the predicted recommended commodity set; illustratively, it can be obtained through correlation analysis, a common technique that the embodiments of the present specification do not elaborate further.
The data plane estimated value, the information plane estimated value and the knowledge plane estimated value are weighted to obtain the trend intention evaluation value, and the formula is as follows:

PEV=b1*PEV_DATA+b2*PEV_Information+b3*PEV_Knowledge
Wherein PEV represents a trend intention evaluation value, and b1, b2, b3 represent weights.
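Illustratively, putting the three planes together yields the sketch below; all weight values are placeholders rather than values from the disclosure, and NEN is assumed nonzero.

```python
def trend_intention(cn, st, tv, atv, nen, pev_information, pev_knowledge,
                    alpha=0.6, beta=0.4, a1=0.5, a2=0.5,
                    b1=0.4, b2=0.3, b3=0.3):
    """Trend intention evaluation value PEV from the formulas above."""
    pev_1 = alpha * cn + beta * st      # commodity interest behavior estimate
    pev_2 = (tv + atv) / nen            # transaction conversion estimate
    pev_data = a1 * pev_1 + a2 * pev_2  # data plane estimated value
    return b1 * pev_data + b2 * pev_information + b3 * pev_knowledge
```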
According to one embodiment of the present specification, as shown in fig. 9, if the trend intention evaluation value exceeds the first threshold, the method further comprises:
Step 901, judging whether the trend intention evaluation value exceeds a second threshold value;
Step 902, if so, replacing the standard micro-expression corresponding to the predicted recommended commodity set in the initial standard micro-expression-commodity set mapping library with the second training micro-expression.
In this embodiment of the present disclosure, the second threshold may be set empirically. When the trend intention evaluation value exceeds the second threshold, the recommended commodity set matches the user's needs to a very high degree, and the user's second training micro-expression is then highly representative.
Further, as shown in fig. 10, if the trending intention evaluation value exceeds the second threshold value, the method further includes:
Step 1001, taking a second training microexpressions corresponding to the trend intention evaluation values exceeding the second threshold value as reference microexpressions;
Step 1002, carrying out weighted calculation on a plurality of reference micro-expressions corresponding to the same standard micro-expression according to the corresponding trend intention evaluation value to obtain an updated micro-expression;
and step 1003, replacing the standard micro-expression with the updated micro-expression.
In this embodiment of the present disclosure, the several second training micro-expressions that qualify to replace the standard micro-expression may be weighted according to their corresponding trend intention evaluation values to obtain an updated micro-expression, and the standard micro-expression is finally replaced with the updated micro-expression.
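Illustratively, the weighted replacement of steps 1001 to 1003 can be read as a normalized PEV-weighted average of the reference micro-expressions, sketched below; this reading of the "weighting" is an assumption.

```python
import numpy as np

def updated_micro_expression(reference_micro_expressions, pev_values):
    """PEV-weighted average of the reference micro-expressions (steps 1001-1002);
    the result replaces the corresponding standard micro-expression (step 1003)."""
    refs = np.asarray(reference_micro_expressions, dtype=float)
    w = np.asarray(pev_values, dtype=float)
    return (w[:, None] * refs).sum(axis=0) / w.sum()
```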
According to one embodiment of the present disclosure, as shown in fig. 11, after determining the recommended product set corresponding to the standard micro-expression according to the pre-trained standard micro-expression-product set mapping library of the user to be recommended, the method further includes:
step 1011, removing the goods which are not matched with the user information of the user to be recommended from the recommended goods set;
Recommending the commodities in the recommended commodity set to the user to be recommended further comprises:
Step 1012, recommending the remaining commodities in the recommended commodity set to the user to be recommended.
In the embodiment of the present specification, the recommended commodity set is determined from the user's facial expression, so the set may contain commodities that do not match the user information: even if the user's intention is strong, the user may not be eligible to transact the product. Therefore, the embodiment of the specification removes the commodities that do not match the user information from the recommended commodity set and recommends the remaining commodities to the user.
Illustratively, the user information of the user may include identity information, asset information, credit information, etc., and embodiments of the present description are not limited.
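Illustratively, step 1011 amounts to an eligibility filter over the recommended commodity set; the field names below (assets, credit, per-commodity minimums) are hypothetical stand-ins for the identity, asset and credit checks mentioned above.

```python
def filter_recommendations(commodity_set, user_info):
    """Keep only commodities whose (assumed) eligibility rules the user meets."""
    return [c for c in commodity_set
            if user_info.get("assets", 0) >= c.get("min_assets", 0)
            and user_info.get("credit", 0) >= c.get("min_credit", 0)]
```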
Based on the same inventive concept, the embodiment of the present disclosure further provides a product recommendation device based on facial expressions of a user, as shown in fig. 12, the device includes:
A facial expression information acquisition unit 1201 configured to acquire facial expression information of a user to be recommended;
An actual micro-expression calculating unit 1202, configured to calculate an actual micro-expression of the user to be recommended according to the facial expression information;
a standard micro-expression determining unit 1203, configured to determine a standard micro-expression corresponding to the actual micro-expression according to a pre-trained difference feature model;
A recommended commodity set determining unit 1204, configured to determine a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
And a commodity recommending unit 1205, configured to recommend the commodity in the recommended commodity set to the user to be recommended.
Further, the product recommendation device based on the facial expression of the user further comprises a difference feature model training unit, which is used for:
Acquiring a plurality of facial expression information of a plurality of training users, and respectively calculating a plurality of first training micro-expressions corresponding to the plurality of facial expression information of each training user and a plurality of emotion information corresponding to the plurality of facial expression information;
Respectively determining at least one standard micro-expression corresponding to each emotion information of each training user according to a pre-constructed standard micro-expression-emotion library, wherein the standard micro-expression-emotion library comprises the corresponding relation between a plurality of standard micro-expressions and emotion information;
respectively calculating the difference between each standard micro-expression and the corresponding first training micro-expression;
and constructing the difference characteristic model according to a plurality of differences of each training user.
Further, separately calculating the difference between each standard microexpressive and the corresponding first training microexpressive further comprises:
vectorizing the first training micro-expression to obtain a first feature vector;
Vectorizing the standard micro expression to obtain a second feature vector;
And calculating difference features between the first feature vector and the second feature vector to obtain differences between the corresponding standard micro-expressions and the first training micro-expressions.
Further, the difference feature between the first feature vector and the second feature vector is calculated as:

DF = DF_1 - DF_2

wherein DF represents the difference feature, DF_1 represents the first feature vector, and DF_2 represents the second feature vector.
Further, constructing the variance feature model from the plurality of variances for each training user further comprises:
calculating a linear relation between a first training microexpressive expression and a standard microexpressive expression of each training user according to a plurality of difference feature vectors of each training user;
and taking the linear relation as the difference characteristic model.
Further, the product recommendation device based on the facial expressions of the user further comprises a standard micro-expression-commodity set mapping library training unit for:
constructing an initial standard microexpressive-commodity set mapping library of the user to be recommended;
Acquiring facial expression information of the user to be recommended, and calculating a second training microexpressions of the user to be recommended;
Determining a standard microexpressive expression corresponding to a second training microexpressive expression of the user to be recommended according to a pre-trained difference characteristic model;
Determining a predicted recommended commodity set corresponding to the standard microexpressions of the user to be recommended according to the initial standard microexpressions-commodity set mapping library, and recommending the predicted recommended commodity set to the user to be recommended;
Acquiring operation information of the to-be-recommended user on the predicted recommended commodity set, and calculating a trend intention evaluation value according to the operation information;
judging whether the trend intention evaluation value exceeds a first threshold value;
If not, the correspondence between the standard micro-expressions and the commodity sets in the initial standard micro-expression-commodity set mapping library of the user to be recommended is adjusted, and the steps of acquiring the facial expression information of the user to be recommended and calculating the second training micro-expression of the user to be recommended are repeated;
if yes, obtaining a trained standard microexpressive-commodity set mapping library.
Further, the operation information includes at least:
commodity interest behavior data and commodity transaction conversion data.
Further, calculating the trend intention evaluation value from the operation information further includes:
Calculating the commodity interest behavior data to obtain a first estimated value;
calculating the commodity transaction conversion data to obtain a second estimated value;
performing weighted calculation on the first estimated value and the second estimated value to obtain a data surface estimated value;
Based on the commodity interest behavior data and commodity transaction conversion data, clustering the predicted and recommended commodity set by using a clustering algorithm, and calculating an information surface estimated value according to a clustering result;
according to the regular preference of the user to be recommended to each commodity in the predicted recommended commodity set, calculating a knowledge surface estimated value through knowledge induction;
And carrying out weighted calculation on the data plane estimation value, the information plane estimation value and the knowledge plane estimation value to obtain the trend intention estimation value.
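The description names only "a clustering algorithm" and leaves the scoring open. The sketch below assumes k-means over per-commodity behavior features and uses the share of the largest cluster as the information plane estimate; both choices are illustrative assumptions, not the patent's method.

```python
import numpy as np
from sklearn.cluster import KMeans

def information_plane_estimate(features: np.ndarray, n_clusters: int = 3) -> float:
    """Cluster the predicted commodity set on its interest/conversion
    features and score how concentrated the user's activity is.

    features: (n_commodities, n_features) per-commodity behavior data.
    A dominant cluster suggests a coherent trend toward one commodity group.
    """
    n_clusters = min(n_clusters, len(features))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    counts = np.bincount(labels)
    return float(counts.max() / counts.sum())
```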
Further, the commodity interest behavior data at least includes a commodity click count and a commodity browsing duration.
Further, calculating the first estimate from the commodity interest behavior data further comprises:
weighting the commodity click count and the commodity browsing duration to obtain the first estimate (see the sketch after the second-estimate steps below).
Further, the commodity transaction conversion data at least includes a commodity transaction count, a secondary transaction count, and a negative evaluation count.
Further, calculating the second estimate from the commodity transaction conversion data further comprises:
calculating the sum of the commodity transaction count and the secondary transaction count to obtain a total commodity transaction count;
and calculating the ratio of the total commodity transaction count to the negative evaluation count to obtain the second estimate.
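Putting the two estimates and the data plane fusion together gives the minimal sketch below. All weights, and the zero-guard on the negative evaluation count, are assumptions — the text fixes no concrete values.

```python
def first_estimate(clicks: int, browse_seconds: float,
                   w_click: float = 0.6, w_browse: float = 0.4) -> float:
    """Weighted interest score from click count and browsing duration
    (weights assumed; the patent fixes none)."""
    return w_click * clicks + w_browse * browse_seconds

def second_estimate(deals: int, repeat_deals: int, negative_reviews: int) -> float:
    """Ratio of total transactions to negative evaluations, per the text."""
    total = deals + repeat_deals
    return total / max(negative_reviews, 1)  # assumed guard against zero negatives

def trend_intention_value(data_plane: float, info_plane: float,
                          knowledge_plane: float,
                          weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted fusion of the three plane estimates (weights assumed)."""
    wd, wi, wk = weights
    return wd * data_plane + wi * info_plane + wk * knowledge_plane

# Example: a data plane estimate from the two behavior estimates.
e1 = first_estimate(clicks=12, browse_seconds=300.0)
e2 = second_estimate(deals=3, repeat_deals=1, negative_reviews=1)
data_plane = 0.5 * e1 + 0.5 * e2  # equal weights, assumed
```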
Further, the product recommendation device based on the facial expression of the user further comprises a standard micro-expression updating unit, configured to:
judge, if the trend intention evaluation value exceeds the first threshold, whether it also exceeds a second threshold;
if yes, replace the standard micro-expression corresponding to the predicted recommended commodity set in the initial standard micro-expression-commodity set mapping library with the second training micro-expression.
Further, the standard micro-expression updating unit is further configured to: take a second training micro-expression whose trend intention evaluation value exceeds the second threshold as a reference micro-expression;
weight a plurality of reference micro-expressions corresponding to the same standard micro-expression by their trend intention evaluation values to obtain an updated micro-expression;
and replace the standard micro-expression with the updated micro-expression.
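A minimal sketch of this update, assuming the weighting is a normalized weighted average of the reference vectors (the text gives no explicit formula):

```python
import numpy as np

def updated_micro_expression(references: np.ndarray,
                             trend_values: np.ndarray) -> np.ndarray:
    """Trend-weighted mean of the reference micro-expressions that
    exceeded the second threshold; replaces the standard micro-expression.

    references:   (n_refs, d) reference micro-expression vectors
    trend_values: (n_refs,) their trend intention evaluation values
    """
    weights = trend_values / trend_values.sum()  # normalize to sum to 1
    return weights @ references                  # (d,) weighted average
```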
Further, the product recommendation device based on the facial expression of the user further comprises a commodity removing unit, configured to:
remove commodities that do not match the user information of the user to be recommended from the recommended commodity set;
recommending the commodities in the recommended commodity set to the user to be recommended then further comprises:
recommending the remaining commodities in the recommended commodity set to the user to be recommended.
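This removal step amounts to a simple filter. In the sketch below, `matches` is a hypothetical predicate (for example, a risk-tolerance or age-restriction check) that the description leaves open.

```python
def filter_recommendations(recommended, user_info, matches):
    """Drop commodities that do not fit the user's profile before
    recommending; `matches(commodity, user_info)` is a hypothetical
    predicate supplied by the caller."""
    return [c for c in recommended if matches(c, user_info)]
```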
Since the principle by which the device solves the problem is similar to that of the method, the implementation of the device can refer to the implementation of the method, and repeated description is omitted.
Fig. 13 is a schematic structural diagram of a computer device according to an embodiment of the present disclosure, where the computer device may perform the method according to the embodiment of the present disclosure.
The computer device 1302 may include one or more processing devices 1304, such as one or more Central Processing Units (CPUs), each of which may implement one or more hardware threads. The computer device 1302 may also include any storage resources 1306 for storing any kind of information, such as code, settings, data, etc. By way of non-limiting example, storage resources 1306 may include any one or more combinations of any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, and so forth. More generally, any storage resource may store information using any technology.
Further, any storage resource may provide volatile or non-volatile retention of information.
Further, any storage resources may represent fixed or removable components of computer device 1302. In one case, when the processing device 1304 executes associated instructions stored in any storage resource or combination of storage resources, the computer device 1302 may perform any of the operations of the associated instructions. The computer device 1302 also includes one or more drive systems 1308 for interacting with any storage resources, such as a hard disk drive system, optical disk drive system, and the like.
The computer device 1302 may also include an input/output module 1310 (I/O) for receiving various inputs (via an input device 1312) and for providing various outputs (via an output device 1314). One particular output mechanism may include a presentation device 1316 and an associated Graphical User Interface (GUI) 1318. In other embodiments, the input/output module 1310 (I/O), the input device 1312, and the output device 1314 may be omitted, the computer device then acting only as a device in a network. Computer device 1302 can also include one or more network interfaces 1320 for exchanging data with other devices via one or more communication links 1322. One or more communication buses 1324 couple the above-described components together.
The communication link 1322 may be implemented in any manner, for example, through a local area network, a wide area network (e.g., the internet), a point-to-point connection, etc., or any combination thereof. Communication link 1322 may include any combination of hardwired links, wireless links, routers, gateway functions, name servers, etc., governed by any protocol or combination of protocols.
The embodiments of the present specification also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method.
The embodiments of the present specification also provide computer-readable instructions which, when executed by a processor, cause the processor to perform the above-described method.
It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation of the embodiments of the present disclosure.
It should also be understood that, in the embodiments of the present specification, the term "and/or" merely describes an association relationship between associated objects, meaning that three relationships may exist: for example, A and/or B may mean that A exists alone, that both A and B exist, or that B exists alone. In the embodiments of the present specification, the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of function. Whether such functionality is implemented in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered beyond the scope of the embodiments of this specification.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided in the embodiments of this specification, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices, or elements, or may be an electrical, mechanical, or other form of connection.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purposes of the embodiments of the present description.
In addition, each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the embodiments of this specification, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of this specification. The storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The principles and implementations of the embodiments of this specification have been described above with reference to specific examples. The description of the above embodiments is intended only to aid understanding of the methods and core ideas of the embodiments. Meanwhile, a person skilled in the art may, in accordance with the ideas of the embodiments of this specification, make changes to the specific implementations and the scope of application. In view of the foregoing, the contents of this specification should not be construed as limiting.
Claims (19)
1. A product recommendation method based on facial expressions of a user, the method comprising:
acquiring facial expression information of a user to be recommended;
calculating an actual micro-expression of the user to be recommended according to the facial expression information;
determining a standard micro-expression corresponding to the actual micro-expression according to a pre-trained difference feature model;
determining a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
and recommending the commodities in the recommended commodity set to the user to be recommended.
2. The method of claim 1, wherein the step of training the difference feature model comprises:
acquiring a plurality of pieces of facial expression information of a plurality of training users, and respectively calculating a plurality of first training micro-expressions corresponding to the pieces of facial expression information of each training user and a plurality of pieces of emotion information corresponding thereto;
respectively determining at least one standard micro-expression corresponding to each piece of emotion information of each training user according to a pre-constructed standard micro-expression-emotion library, wherein the standard micro-expression-emotion library comprises correspondences between a plurality of standard micro-expressions and emotion information;
respectively calculating the difference between each standard micro-expression and the corresponding first training micro-expression;
and constructing the difference feature model from the plurality of differences for each training user.
3. The method of claim 2, wherein separately calculating the difference between each standard micro-expression and the corresponding first training micro-expression further comprises:
vectorizing the first training micro-expression to obtain a first feature vector;
vectorizing the standard micro-expression to obtain a second feature vector;
and calculating a difference feature between the first feature vector and the second feature vector to obtain the difference between the corresponding standard micro-expression and first training micro-expression.
4. The method of claim 3, wherein the formula for calculating the difference feature between the first feature vector and the second feature vector is:
wherein DF represents the difference feature, DF₁ represents the first feature vector, and DF₂ represents the second feature vector.
5. The method of claim 3, wherein constructing the difference feature model from the plurality of differences for each training user further comprises:
calculating a linear relation between the first training micro-expressions and the standard micro-expressions of each training user from the plurality of difference feature vectors of that user;
and taking the linear relation as the difference feature model.
6. The method of claim 1, wherein training the standard micro-expression-commodity set mapping library of the user to be recommended further comprises:
constructing an initial standard micro-expression-commodity set mapping library for the user to be recommended;
acquiring facial expression information of the user to be recommended, and calculating a second training micro-expression of the user to be recommended;
determining the standard micro-expression corresponding to the second training micro-expression of the user to be recommended according to the pre-trained difference feature model;
determining a predicted recommended commodity set corresponding to that standard micro-expression according to the initial standard micro-expression-commodity set mapping library, and recommending the predicted recommended commodity set to the user to be recommended;
acquiring operation information of the user to be recommended on the predicted recommended commodity set, and calculating a trend intention evaluation value from the operation information;
judging whether the trend intention evaluation value exceeds a first threshold;
if not, adjusting the correspondence between standard micro-expressions and commodity sets in the initial standard micro-expression-commodity set mapping library of the user to be recommended, and repeating the steps from acquiring facial expression information of the user to be recommended and calculating a second training micro-expression onward;
if yes, taking the resulting library as the trained standard micro-expression-commodity set mapping library.
7. The method of claim 6, wherein the operation information includes at least:
commodity interest behavior data and commodity transaction conversion data.
8. The method of claim 7, wherein calculating the trend intention evaluation value from the operation information further comprises:
calculating a first estimate from the commodity interest behavior data;
calculating a second estimate from the commodity transaction conversion data;
weighting the first estimate and the second estimate to obtain a data plane estimate;
clustering the predicted recommended commodity set with a clustering algorithm based on the commodity interest behavior data and the commodity transaction conversion data, and calculating an information plane estimate from the clustering result;
calculating a knowledge plane estimate by knowledge induction from the habitual preference of the user to be recommended for each commodity in the predicted recommended commodity set;
and weighting the data plane estimate, the information plane estimate, and the knowledge plane estimate to obtain the trend intention evaluation value.
9. The method of claim 8, wherein the commodity interest behavior data at least includes a commodity click count and a commodity browsing duration.
10. The method of claim 9, wherein calculating the first estimate from the commodity interest behavior data further comprises:
weighting the commodity click count and the commodity browsing duration to obtain the first estimate.
11. The method of claim 8, wherein the commodity transaction conversion data at least includes a commodity transaction count, a secondary transaction count, and a negative evaluation count.
12. The method of claim 11, wherein calculating the second estimate from the commodity transaction conversion data further comprises:
calculating the sum of the commodity transaction count and the secondary transaction count to obtain a total commodity transaction count;
and calculating the ratio of the total commodity transaction count to the negative evaluation count to obtain the second estimate.
13. The method of claim 6, wherein if the trend intention evaluation value exceeds the first threshold, the method further comprises:
judging whether the trend intention evaluation value exceeds a second threshold;
if yes, replacing the standard micro-expression corresponding to the predicted recommended commodity set in the initial standard micro-expression-commodity set mapping library with the second training micro-expression.
14. The method of claim 13, wherein if the trend intention evaluation value exceeds the second threshold, the method further comprises:
taking a second training micro-expression whose trend intention evaluation value exceeds the second threshold as a reference micro-expression;
weighting a plurality of reference micro-expressions corresponding to the same standard micro-expression by their trend intention evaluation values to obtain an updated micro-expression;
and replacing the standard micro-expression with the updated micro-expression.
15. The method of claim 1, wherein after determining the recommended commodity set corresponding to the standard micro-expression according to the pre-trained standard micro-expression-commodity set mapping library of the user to be recommended, the method further comprises:
removing commodities that do not match the user information of the user to be recommended from the recommended commodity set;
wherein recommending the commodities in the recommended commodity set to the user to be recommended further comprises:
recommending the remaining commodities in the recommended commodity set to the user to be recommended.
16. A product recommendation device based on a facial expression of a user, the device comprising:
a facial expression information acquisition unit, configured to acquire facial expression information of a user to be recommended;
an actual micro-expression calculation unit, configured to calculate an actual micro-expression of the user to be recommended according to the facial expression information;
a standard micro-expression determination unit, configured to determine the standard micro-expression corresponding to the actual micro-expression according to a pre-trained difference feature model;
a recommended commodity set determination unit, configured to determine a recommended commodity set corresponding to the standard micro-expression according to a pre-trained standard micro-expression-commodity set mapping library of the user to be recommended;
and a commodity recommendation unit, configured to recommend the commodities in the recommended commodity set to the user to be recommended.
17. A computer device comprising a memory, a processor, and a computer program stored on the memory, characterized in that the processor, when executing the computer program, implements the method of any of claims 1 to 15.
18. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 15.
19. A computer program product, characterized in that the computer program product comprises a computer program which, when executed by a processor, implements the method of any of claims 1 to 15.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410950271.7A CN119151627A (en) | 2024-07-16 | 2024-07-16 | Product recommendation method, device and equipment based on facial expression of user |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN119151627A true CN119151627A (en) | 2024-12-17 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |