
CN111782982B - Method, device and computer readable storage medium for ordering search results - Google Patents


Info

Publication number
CN111782982B
CN111782982B (Application CN201910419029.6A)
Authority
CN
China
Prior art keywords
input
feature
input features
machine learning
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910419029.6A
Other languages
Chinese (zh)
Other versions
CN111782982A (en
Inventor
邱德军
任恺
刘燚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201910419029.6A priority Critical patent/CN111782982B/en
Publication of CN111782982A publication Critical patent/CN111782982A/en
Application granted granted Critical
Publication of CN111782982B publication Critical patent/CN111782982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9538Presentation of query results
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0623Electronic shopping [e-shopping] by investigating goods or services
    • G06Q30/0625Electronic shopping [e-shopping] by investigating goods or services by formulating product or service queries, e.g. using keywords or predefined options

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a method, an apparatus, and a computer-readable storage medium for ranking search results, in the technical field of computers. The method comprises the following steps: determining a corresponding machine learning model according to a received request to rank search results; determining the input features to be generated according to the machine learning model; determining whether to store an input feature according to whether it can be reused by other machine learning models; and ranking the search results with the machine learning model according to the input features. The technical scheme of the disclosure can improve processing efficiency.

Description

Method, device and computer readable storage medium for ordering search results
Technical Field
The present disclosure relates to the field of computer technology, and in particular to a method and an apparatus for ranking search results, and to a computer-readable storage medium.
Background
In the face of massive network resources, the search function provides a portal through which a user can reach any desired target. In the e-commerce field, a user can obtain a list of desired item information through the search function. It is therefore important that search results be presented to the user in order of importance.
In the related art, the required features are generated separately for each machine learning model, and ranking is then performed using the plurality of machine learning models.
Disclosure of Invention
The inventors of the present disclosure found that the related art described above has the following problem: the same features must be generated repeatedly during a single search-and-ranking pass, which wastes online computing resources and results in low processing efficiency.
In view of this, the present disclosure proposes a technical solution for ranking search results that can improve processing efficiency.
According to some embodiments of the present disclosure, there is provided a method of ranking search results, comprising: determining a corresponding machine learning model according to a received request to rank search results; determining input features to be generated according to the machine learning model; determining whether to store an input feature according to whether it can be reused by other machine learning models; and ranking the search results with the machine learning model according to the input features.
In some embodiments, it is determined whether the computational cost of the input feature is greater than a threshold, and the input feature is stored in the case that the computational cost is greater than the threshold.
In some embodiments, the length of time for which the input features are stored is determined from the scope within which they can be reused.
In some embodiments, in the case that an input feature can be reused within the life cycle of the corresponding feature generation operator, the input feature is stored for that life cycle; in the case that an input feature can be reused within the current ranking process, the input feature is stored for the current ranking process.
In some embodiments, an identification of the input feature is determined from the machine learning model, and the corresponding feature generation operator is invoked according to that identification to generate the input feature.
In some embodiments, there are a plurality of input features; a common intermediate quantity for generating the plurality of input features is calculated, and the plurality of input features are generated from the common intermediate quantity.
In some embodiments, relevant entity samples are determined according to the ranking request, wherein the feature set of each entity sample comprises one or more of the plurality of input features; the common input feature of the plurality of entity samples is generated; and the unique input features of each entity sample are generated separately.
In some embodiments, in the case that the ranking request satisfies a predetermined condition, the input features are stored for training the corresponding machine learning model.
In some embodiments, the feature generation operator corresponding to the input feature is invoked as a first feature generation operator; in the case that the first feature generation operator needs to rely on other input features to generate the input feature, the second feature generation operators corresponding to the other input features are invoked to generate them; and the input feature is generated from the other input features using the first feature generation operator.
In some embodiments, in the case that the feature generation operator for the input feature requires only static data as input, an instance of the feature generation operator is created at global scope using a software entity.
In some embodiments, the software entity corresponding to the feature generation operator is determined according to the registration information of the feature generation operator; in the case that an instance of the feature generation operator exists, the software entity reuses that instance to generate the input feature; in the case that no instance exists, the software entity creates an instance of the feature generation operator to generate the input feature.
In some embodiments, the input features are converted into the format required by the machine learning model, and the search results are ranked with the machine learning model according to the converted input features.
According to further embodiments of the present disclosure, there is provided an apparatus for ranking search results, comprising: a determining unit configured to determine a corresponding machine learning model according to a received request to rank search results, determine the input features to be generated according to the machine learning model, and determine whether to store an input feature according to whether it can be reused by other machine learning models; and a ranking unit configured to rank the search results with the machine learning model according to the input features.
According to still further embodiments of the present disclosure, there is provided an apparatus for ranking search results, comprising: a processor configured to determine a corresponding machine learning model according to a received request to rank search results, determine the input features to be generated according to the machine learning model, determine whether to store an input feature according to whether it can be reused by other machine learning models, and rank the search results with the machine learning model according to the input features; and a memory for storing the input features.
According to still further embodiments of the present disclosure, there is provided an apparatus for ranking search results, comprising: a memory; and a processor coupled to the memory, the processor being configured to perform the method of ranking search results of any of the embodiments described above based on instructions stored in the memory.
According to still further embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method of ranking search results of any of the embodiments described above.
In the above embodiments, input features that can be reused by a plurality of machine learning models are cached, avoiding repeated generation of features during ranking. On-line computing resources are thereby saved, and processing efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 illustrates a flow chart of some embodiments of a method of ranking search results of the present disclosure;
FIG. 2 illustrates a flow chart of some embodiments of the present disclosure generating input features;
FIG. 3 illustrates a flow chart of other embodiments of the present disclosure for generating input features;
FIG. 4 illustrates a flow chart of still further embodiments of the present disclosure for generating input features;
FIG. 5 illustrates a flow chart of some embodiments of an example creation method of a feature generation operator of the present disclosure;
FIG. 6 illustrates a block diagram of some embodiments of a ranking apparatus of search results of the present disclosure;
FIG. 7 illustrates a block diagram of further embodiments of a search result ordering apparatus of the present disclosure;
FIG. 8 illustrates a block diagram of still further embodiments of a ranking apparatus of search results of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
FIG. 1 illustrates a flow chart of some embodiments of a method of ranking search results of the present disclosure.
As shown in fig. 1, the method includes: step 110, determining the machine learning model required for ranking; step 120, determining the input features to be generated; step 130, determining whether to store the input features; and step 140, ranking the search results.
In step 110, the corresponding machine learning model is determined according to the received request to rank search results. For example, different machine learning models may be employed for different ranking requests, and each ranking may be completed by invoking multiple machine learning models. Each machine learning model requires at least one input feature to complete the ranking.
In step 120, the input features that need to be generated are determined from the machine learning model. For example, after the input features are determined, the corresponding feature generation operators may be used to generate the input features needed by the machine learning model.
In some embodiments, the feature generation operator may be a software entity in the framework that computes a corresponding feature from raw data as input to the machine learning model. For example, the raw data may be the sales data of an item over a period of time, and the feature generated by the feature generation operator may be a time-decayed sales score of the item. The raw data may also be the descriptive text of an item, and the feature generated by the feature generation operator may be the feature vector corresponding to that text.
In some embodiments, the input features may be generated by the steps of fig. 2.
Fig. 2 illustrates a flow chart of some embodiments of the present disclosure to generate input features.
As shown in fig. 2, the process includes: step 210, determining an identification of an input feature; and step 220, generating input features.
In step 210, an identification of the input feature is determined based on the machine learning model. For example, a unique number may be set for each input feature as an identification of that input feature.
In step 220, a corresponding feature generation operator is invoked to generate input features based on the identification of the input features. For example, a feature management module may be provided to look up the corresponding feature generation operator based on the number of the requested input feature.
This is equivalent to providing a unified interface for all feature generation operators: the details of feature generation are hidden, and each feature generation operator only needs to implement the unified interface, which improves the processing efficiency of the system.
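The unified interface and lookup-by-identification described above can be sketched as follows. This is an illustrative assumption, not the patent's implementation: the class names (`FeatureOperator`, `SalesDecayOperator`, `FeatureManager`), the 10%-per-day decay formula, and the feature numbering are all invented for the example.

```python
from abc import ABC, abstractmethod


class FeatureOperator(ABC):
    """Uniform interface that hides the details of feature generation."""

    @abstractmethod
    def generate(self, raw_data: dict) -> float:
        ...


class SalesDecayOperator(FeatureOperator):
    """Toy operator: time-decayed sales score of an item."""

    def generate(self, raw_data: dict) -> float:
        # Decay the sales figure by 10% per day of age.
        return raw_data["sales"] * (0.9 ** raw_data["age_days"])


class FeatureManager:
    """Feature management module: looks up operators by feature number."""

    def __init__(self):
        self._operators = {}

    def register(self, feature_id: int, op: FeatureOperator) -> None:
        self._operators[feature_id] = op

    def generate(self, feature_id: int, raw_data: dict) -> float:
        # Find the operator registered for the requested feature number.
        return self._operators[feature_id].generate(raw_data)


manager = FeatureManager()
manager.register(1, SalesDecayOperator())
value = manager.generate(1, {"sales": 100.0, "age_days": 2})
```

Because callers only ever see `FeatureManager.generate`, adding a new feature requires registering one more operator, not changing the ranking code.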
In some embodiments, the feature generation operator may pre-generate the input features. For example, in the case where a plurality of input features are required, a common intermediate quantity for generating the plurality of input features may be calculated, and the plurality of input features may then be generated from that common intermediate quantity.
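A minimal sketch of the common-intermediate-quantity idea, under invented assumptions: two hypothetical features (a mean and a latest-day share) both derive from the total of the daily sales, so the total is computed once and shared.

```python
def generate_features(daily_sales):
    """Generate two features from one shared intermediate quantity."""
    total = sum(daily_sales)                 # common intermediate quantity
    mean_feature = total / len(daily_sales)  # feature 1: mean daily sales
    share_feature = daily_sales[-1] / total  # feature 2: latest day's share
    return mean_feature, share_feature


mean_f, share_f = generate_features([10.0, 20.0, 30.0])
```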
In some embodiments, pre-generation may also be achieved by the steps in fig. 3.
FIG. 3 illustrates a flow chart of other embodiments of the present disclosure for generating input features.
As shown in fig. 3, the process includes: step 310, determining a relevant entity sample; step 320, generating common input features; and step 330, generating unique input features.
In step 310, the relevant entity samples are determined according to the ranking request; the feature set of each entity sample contains one or more of the plurality of input features required by the machine learning model.
In some embodiments, the ranking request is: rank the search results for the keyword "tops", and the required input features include input feature 1, input feature 2 and input feature 3. The relevant entity samples may include "shirt", "coat", and the like. The feature set of "shirt" comprises input feature 1 and input feature 2, and the feature set of "coat" comprises input feature 1 and input feature 3.
In step 320, common input features for a plurality of physical samples are generated. For example, the common input feature is input feature 1, so the corresponding feature generation operator of input feature 1 may be invoked first to generate input feature 1.
In step 330, unique input features for each entity sample are generated separately. For example, the unique input feature of "shirt" is input feature 2, and the unique input feature of "coat" is input feature 3. The feature generation operators of input feature 2 and input feature 3 may be invoked to generate corresponding features.
Pre-generating features in this way reduces the repetitive work of generating features for each sample, thereby improving the processing efficiency of the system.
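Steps 310 through 330 can be sketched as follows. The `generate` stand-in, the sample names, and the feature numbering mirror the shirt/coat example above and are purely illustrative.

```python
def generate(feature_id, sample=None):
    # Toy stand-in for invoking the real feature generation operator.
    return float(feature_id)


def build_feature_sets(samples):
    """samples maps an entity-sample name to the set of feature ids it needs."""
    common_ids = set.intersection(*samples.values())
    # Step 320: generate each common feature once for all samples.
    common = {fid: generate(fid) for fid in common_ids}
    result = {}
    for name, ids in samples.items():
        feats = dict(common)
        # Step 330: generate the features unique to this sample.
        for fid in ids - common_ids:
            feats[fid] = generate(fid, name)
        result[name] = feats
    return result


feature_sets = build_feature_sets({"shirt": {1, 2}, "coat": {1, 3}})
```

Feature 1 is generated once rather than once per sample, which is exactly the repetitive work pre-generation removes.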
In some embodiments, a feature generation operator may need to rely on other input features to generate the desired input feature; this case can be handled by the steps in FIG. 4.
Fig. 4 illustrates a flow chart of yet other embodiments of the present disclosure for generating input features.
As shown in fig. 4, the process includes: step 410, invoking a first feature generation operator; step 420, invoking a second feature generation operator; and step 430, generating input features.
In step 410, a feature generation operator corresponding to the input feature is invoked as a first feature generation operator.
In step 420, in the case that the first feature generation operator needs to rely on other input features to generate input features, invoking a second feature generation operator corresponding to the other input features to generate other input features.
In some embodiments, the identity of the other input feature may be determined, and the corresponding second feature generation operator invoked according to the identity.
In step 430, input features are generated from the other input features using the first feature generation operator. For example, data cleaning may be performed after the input features are generated.
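The dependent-generation flow of fig. 4 can be sketched as a recursive lookup. The dependency table, the base computation, and combining dependencies by summation are all invented for illustration; the patent does not prescribe how the first operator combines its inputs.

```python
DEPENDS_ON = {3: [1, 2]}  # hypothetical table: feature 3 depends on 1 and 2


def base_generate(feature_id):
    # Toy computation for features that have no dependencies.
    return feature_id * 10.0


def generate_feature(feature_id, cache):
    """Generate a feature, first generating any features it depends on."""
    if feature_id in cache:
        return cache[feature_id]
    # Step 420: invoke the operators for the other (dependency) features.
    deps = [generate_feature(d, cache) for d in DEPENDS_ON.get(feature_id, [])]
    # Step 430: the first operator derives its feature from the others.
    value = sum(deps) if deps else base_generate(feature_id)
    cache[feature_id] = value
    return value


value3 = generate_feature(3, {})
```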
In some embodiments, after determining the input features that need to be generated, the ordering may be performed by other steps in FIG. 1.
In step 130, whether to store the input features is determined according to whether they can be reused by other machine learning models.
In some embodiments, the input features may be generated first and then stored in the case that they can be reused by other machine learning models, so that those models can reuse them.
In some embodiments, after the machine learning model is determined, the input features to be generated may be determined; whether to store each input feature is determined according to whether it can be reused; the input features are then generated, and those previously determined to require storage are stored.
In some embodiments, it may be determined whether the computational cost of an input feature is greater than a threshold. In the case that the cost is greater than the threshold, the input feature is stored in a cache for reuse, so as to avoid computing it repeatedly.
In some embodiments, the length of time for which an input feature is stored may be determined from the scope within which it can be reused. For example, in the case that the input feature can be reused within the life cycle of the corresponding feature generation operator, it is stored for that life cycle; in the case that it can be reused within the current ranking process, it is stored for the current ranking process.
In this way, according to the characteristics of each input feature to be generated, it can be decided whether to store the generated feature in a cache of a particular level, so as to balance processing speed against storage space. For example, the cache levels may include caching for the duration of a ranking request, caching for the entire life cycle of the feature generation operator, no caching, and so on.
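The cost-threshold test and the choice among cache levels can be sketched as a small decision function. The threshold value, the scope names, and the boolean "reusable across requests" signal are illustrative assumptions, not values from the patent.

```python
from enum import Enum


class CacheScope(Enum):
    NONE = 0            # cheap feature: regenerate on demand
    REQUEST = 1         # reusable within the current ranking request
    OPERATOR_LIFE = 2   # reusable for the operator's whole life cycle


COST_THRESHOLD = 5.0  # illustrative cost threshold


def choose_cache_scope(cost, reusable_across_requests):
    """Pick a cache level for a feature from its cost and reuse scope."""
    if cost <= COST_THRESHOLD:
        return CacheScope.NONE
    if reusable_across_requests:
        return CacheScope.OPERATOR_LIFE
    return CacheScope.REQUEST


scope = choose_cache_scope(cost=8.0, reusable_across_requests=False)
```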
In some embodiments, a software entity may be provided in the system as a feature operator factory for creating instances of feature generation operators. One feature operator factory may create instances of multiple feature generation operators.
In some embodiments, in the case that a feature generation operator requires only static data as input, an instance of the operator is created at global scope using the software entity, thereby improving processing efficiency. For example, a feature generation operator may require only static data, such as user attributes (e.g., name, address, gender) that do not change with the ranking request; an instance can then be created globally for that operator and reused.
Once the input features have been generated and, where applicable, stored, ranking can proceed with the remaining step in fig. 1.
In step 140, the search results are ranked using a machine learning model based on the input features.
In some embodiments, an instance of the feature generation operator may be created through the steps in fig. 5.
FIG. 5 illustrates a flow chart of some embodiments of an example creation method of a feature generation operator of the present disclosure.
As shown in fig. 5, the method includes: step 510, determining a corresponding software entity; step 520, determining whether an instance exists; step 530, calling an instance to generate an input feature; and step 540, generating instance generation input features.
In step 510, the software entity corresponding to the feature generation operator is determined according to the registration information of the feature generation operator.
In some embodiments, during the system initialization phase, the feature generation operator may self-register with a feature management module in the system, and the registration information may include features that the feature generation operator is capable of generating, features that the feature generation operator depends on, corresponding feature operator factories, and so on.
In step 520, it is determined whether an instance of a feature generation operator exists. In the case where an instance exists, step 530 is performed; in the absence of an instance, step 540 is performed.
In step 530, the existing instance of the feature generation operator is reused via the software entity to generate the input feature. For example, when the feature operator factory has already created the instance at global scope, that instance can simply be invoked.
In step 540, an instance of a feature generation operator is generated using the software entity to generate the input feature.
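Steps 510 through 540 amount to a get-or-create factory. A minimal sketch, with invented names (`OperatorFactory` plays the role of the "software entity", `StaticDataOperator` a static-input operator):

```python
class StaticDataOperator:
    """Toy operator whose input is static data only."""


class OperatorFactory:
    """The 'software entity': creates or reuses an operator instance."""

    def __init__(self, operator_cls):
        self._operator_cls = operator_cls
        self._instance = None  # lazily created global-scope instance

    def get(self):
        if self._instance is None:       # step 540: no instance yet, create it
            self._instance = self._operator_cls()
        return self._instance            # step 530: reuse the existing instance


factory = OperatorFactory(StaticDataOperator)
first = factory.get()
second = factory.get()
```

Repeated calls return the same instance, which is what lets a static-data operator be created once at global scope and shared across ranking requests.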
In some embodiments, in the case that the ranking request satisfies a configured storage policy, the input features are stored for training the corresponding machine learning model. For example, a feature collection module may be provided in the system to collect, in a unified manner, the features generated and used during ranking, and then send them to the feature store.
In some embodiments, different storage policies may be configured according to different attributes of the ranking request (e.g., the user's location information, identity information, various item attributes, etc.). For example, a storage policy may be configured as: store the features generated by search ranking requests originating from users in Beijing. In that case, if the user who initiated the ranking request is located in Beijing, the features generated during ranking are stored.
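An attribute-matching storage policy like the Beijing example can be sketched as follows. The dict shapes of the request and policy are an illustrative assumption.

```python
def should_store(request, policy):
    """Return True when every attribute required by the policy matches."""
    return all(request.get(key) == value for key, value in policy.items())


policy = {"user_location": "Beijing"}
keep = should_store({"user_location": "Beijing", "query": "tops"}, policy)
drop = should_store({"user_location": "Shanghai", "query": "tops"}, policy)
```

A single predicate over request attributes is what lets one collection mechanism serve every model, rather than configuring a strategy per model.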
In the above embodiments, the feature collector provides a mechanism for controlling which of the large volume of feature data generated during search ranking is collected. The mechanism can decide, according to the attributes of each request, whether to collect the features generated for that request, how many features to collect, and so on. Thus, while the authenticity of the training data is preserved, there is no need to configure a different collection strategy for each machine learning model, which saves resources.
In some embodiments, the input features are converted into the format required by the machine learning model, and the search results are ranked with the machine learning model according to the converted input features. For example, a machine learning model may require its input to be a 256-dimensional real-valued vector, in which case the generated input features must be converted into such a vector.
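A minimal sketch of the format conversion, following the 256-dimensional example above. Placing each feature value at the slot given by its identifier is an invented convention; the patent only requires that the output match the model's expected format.

```python
def to_model_input(features, dim=256):
    """Pack a feature-id -> value mapping into a fixed-length real vector."""
    vec = [0.0] * dim
    for feature_id, value in features.items():
        vec[feature_id % dim] = value  # illustrative slot assignment by id
    return vec


vec = to_model_input({1: 0.5, 3: 2.0})
```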
In the above embodiments, input features that can be reused by a plurality of machine learning models are cached, avoiding repeated generation of features during ranking. On-line computing resources are thereby saved, and processing efficiency is improved.
Fig. 6 illustrates a block diagram of some embodiments of a ranking apparatus of search results of the present disclosure.
As shown in fig. 6, the sorting apparatus 6 includes a determining unit 61 and a sorting unit 64.
The determining unit 61 determines the corresponding machine learning model according to a received request to rank search results, determines the input features to be generated according to the machine learning model, and determines whether to store an input feature according to whether it can be reused by other machine learning models; the ranking unit 64 ranks the search results with the machine learning model according to the input features.
In some embodiments, ranking unit 64 converts the input features into a format required by the machine learning model; the ranking unit 64 ranks the search results using a machine learning model based on the converted input features.
In some embodiments, the determining unit 61 determines to store the input feature if it is determined that the calculation cost of the input feature is greater than the threshold.
In some embodiments, the determining unit 61 determines the length of time for storing the input features according to the range in which the input features can be multiplexed.
In some embodiments, in the case that an input feature can be reused within the life cycle of the corresponding feature generation operator, the determining unit 61 determines to store the input feature for that life cycle. In the case that the input feature can be reused within the current ranking process, the determining unit 61 stores it for the current ranking process.
In some embodiments, the sorting apparatus 6 further comprises a generating unit 62. The determination unit 61 determines the identity of the input feature according to the machine learning model; the generating unit 62 invokes a corresponding feature generating operator to generate the input feature according to the identification of the input feature.
In some embodiments, the generation unit 62 calculates a common intermediate quantity for generating the plurality of input features; the generating unit 62 generates a plurality of input features from the common intermediate quantity.
In some embodiments, the determining unit 61 determines the relevant entity samples according to the ranking request, each entity sample having one or more of the plurality of input features in its feature set; the generation unit 62 generates the common input feature of the plurality of entity samples, and generates the unique input features of each entity sample separately.
In the above embodiments, input features that can be reused by a plurality of machine learning models are cached, avoiding repeated generation of features during ranking. On-line computing resources are thereby saved, and processing efficiency is improved.
In some embodiments, the ranking apparatus 6 further comprises a storage unit 63 for storing the input features that the determining unit 61 determines should be stored. In the case that the ranking request satisfies a predetermined condition, the storage unit 63 stores the input features for training the corresponding machine learning model.
In some embodiments, the generating unit 62 invokes a feature generation operator corresponding to the input feature as the first feature generation operator; the generating unit 62 calls a second feature generating operator corresponding to other input features to generate other input features under the condition that the first feature generating operator needs to rely on the other input features to generate the input features; the generating unit 62 generates input features from other input features using the first feature generating operator.
In some embodiments, in the case that the feature generation operator corresponding to the input feature requires only static data as input, the generating unit 62 creates an instance of the feature generation operator at global scope using the software entity.
In some embodiments, the determining unit 61 determines the software entity corresponding to the feature generation operator according to the registration information of the feature generation operator; in the case that an instance of the feature generation operator exists, the generating unit 62 reuses that instance via the software entity to generate the input feature; in the case that no instance exists, the generating unit 62 creates an instance of the feature generation operator using the software entity to generate the input feature.
FIG. 7 illustrates a block diagram of further embodiments of a search result ordering apparatus of the present disclosure.
As shown in FIG. 7, the search result ranking apparatus 7 includes a memory 71 and a processor 72 coupled to the memory 71.
The processor 72 determines a corresponding machine learning model according to the received ranking request for search results, determines the input features to be generated according to the machine learning model, determines whether to store the input features according to whether they can be reused by other machine learning models, and ranks the search results according to the input features using the machine learning model. The memory 71 stores the input features.
In some embodiments, the processor 72 is configured to perform the method of ranking search results in any one of the embodiments of the present disclosure based on instructions stored in the memory 71.
The memory 71 may include, for example, system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, a database, and other programs.
FIG. 8 illustrates a block diagram of still further embodiments of a search result ranking apparatus of the present disclosure.
As shown in FIG. 8, the search result ranking apparatus 8 of this embodiment includes a memory 810 and a processor 820 coupled to the memory 810, the processor 820 being configured to perform the method of ranking search results in any of the foregoing embodiments based on instructions stored in the memory 810.
The memory 810 may include, for example, system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, application programs, a boot loader, and other programs.
The search result ranking apparatus 8 may further include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850, the memory 810, and the processor 820 may be connected by, for example, a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, mouse, keyboard, or touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as SD cards and USB flash drives.
It will be appreciated by those skilled in the art that embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The search result ranking method, search result ranking apparatus, and computer-readable storage medium according to the present disclosure have been described in detail above. Some details well known in the art are omitted so as not to obscure the concepts of the present disclosure. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
The methods and systems of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
Although some specific embodiments of the present disclosure have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the present disclosure. It will be appreciated by those skilled in the art that modifications may be made to the above embodiments without departing from the scope and spirit of the disclosure. The scope of the present disclosure is defined by the appended claims.

Claims (15)

1. A method of ranking search results, comprising:
determining a corresponding machine learning model according to a received ranking request for search results, wherein different machine learning models are determined for different ranking requests to complete the ranking;
determining an identification of an input feature according to the machine learning model;
invoking a corresponding feature generation operator to generate the input feature according to the identification of the input feature;
determining whether to store the input feature according to whether the input feature can be reused by other machine learning models; and
ranking the search results according to the input feature using the machine learning model.
2. The ranking method of claim 1, wherein determining whether to store the input feature comprises:
judging whether the computation cost of the input feature is greater than a threshold; and
determining to store the input feature in the case that the computation cost is greater than the threshold.
3. The ranking method of claim 1, further comprising:
determining the length of time for which the input feature is stored according to the scope in which the input feature can be reused.
4. The ranking method of claim 3, wherein determining the length of time for which the input feature is stored comprises:
storing the input feature for the lifecycle of the corresponding feature generation operator in the case that the input feature can be reused within that lifecycle; and
storing the input feature for the current ranking process in the case that the input feature can be reused within the current ranking process.
5. The ranking method of claim 1, wherein the input feature is a plurality of input features, the ranking method further comprising:
computing a common intermediate quantity for generating the plurality of input features; and
generating the plurality of input features from the common intermediate quantity.
6. The ranking method of claim 1, wherein the input feature is a plurality of input features, the ranking method further comprising:
determining relevant entity samples according to the ranking request, wherein the feature set of each entity sample comprises one or more of the plurality of input features;
generating input features common to a plurality of the entity samples; and
generating the input features unique to each entity sample separately.
7. The ranking method of any one of claims 1-6, further comprising:
storing the input feature for training the corresponding machine learning model in the case that the ranking request satisfies a predetermined condition.
8. The ranking method of any one of claims 1-6, further comprising:
invoking the feature generation operator corresponding to the input feature as a first feature generation operator;
invoking second feature generation operators corresponding to other input features to generate the other input features, in the case that the first feature generation operator depends on the other input features to generate the input feature; and
generating the input feature from the other input features using the first feature generation operator.
9. The ranking method of any one of claims 1-6, further comprising:
creating an instance of the feature generation operator at global scope using a software entity in the case that the feature generation operator corresponding to the input feature requires only static data as input.
10. The ranking method of claim 9, further comprising:
determining the software entity corresponding to the feature generation operator according to registration information of the feature generation operator;
reusing the instance of the feature generation operator via the software entity to generate the input feature in the case that an instance of the feature generation operator exists; and
creating an instance of the feature generation operator with the software entity to generate the input feature in the case that no instance of the feature generation operator exists.
11. The ranking method of any one of claims 1-6, wherein ranking the search results using the machine learning model comprises:
converting the input feature into a format required by the machine learning model; and
ranking the search results according to the converted input feature using the machine learning model.
12. A search result ranking apparatus, comprising:
a determining unit configured to determine a corresponding machine learning model according to a received ranking request for search results, wherein different machine learning models are determined for different ranking requests to complete the ranking, determine an identification of an input feature according to the machine learning model, invoke a corresponding feature generation operator to generate the input feature according to the identification of the input feature, and determine whether to store the input feature according to whether the input feature can be reused by other machine learning models; and
a ranking unit configured to rank the search results according to the input feature using the machine learning model.
13. A search result ranking apparatus, comprising:
a processor configured to determine a corresponding machine learning model according to a received ranking request for search results, wherein different machine learning models are determined for different ranking requests to complete the ranking, determine an identification of an input feature according to the machine learning model, invoke a corresponding feature generation operator to generate the input feature according to the identification of the input feature, determine whether to store the input feature according to whether the input feature can be reused by other machine learning models, and rank the search results according to the input feature using the machine learning model; and
a memory configured to store the input feature.
14. A search result ranking apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor configured to perform the method of ranking search results of any one of claims 1-11 based on instructions stored in the memory.
15. A computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of ranking search results of any one of claims 1-11.
CN201910419029.6A 2019-05-20 2019-05-20 Method, device and computer readable storage medium for ordering search results Active CN111782982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910419029.6A CN111782982B (en) 2019-05-20 2019-05-20 Method, device and computer readable storage medium for ordering search results


Publications (2)

Publication Number Publication Date
CN111782982A CN111782982A (en) 2020-10-16
CN111782982B true CN111782982B (en) 2024-08-20

Family

ID=72755556


Country Status (1)

Country Link
CN (1) CN111782982B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106484766A (en) * 2016-09-07 2017-03-08 北京百度网讯科技有限公司 Searching method based on artificial intelligence and device
CN108416028A (en) * 2018-03-09 2018-08-17 北京百度网讯科技有限公司 A kind of method, apparatus and server of search content resource

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9070141B2 (en) * 2012-11-30 2015-06-30 Facebook, Inc. Updating features based on user actions in online systems
US10891295B2 (en) * 2017-06-04 2021-01-12 Apple Inc. Methods and systems using linear expressions for machine learning models to rank search results
CN108334627B (en) * 2018-02-12 2022-09-23 北京百度网讯科技有限公司 New media content search method, device and computer equipment
CN109299344B (en) * 2018-10-26 2020-12-29 Oppo广东移动通信有限公司 Method for generating ranking model, method, device and device for ranking search results




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant