
CN111681092A - Resource scheduling method, server, electronic device and storage medium - Google Patents

Resource scheduling method, server, electronic device and storage medium

Info

Publication number
CN111681092A
CN111681092A (application CN202010319617.5A)
Authority
CN
China
Prior art keywords
service
resource
server
user
resource scheduling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010319617.5A
Other languages
Chinese (zh)
Other versions
CN111681092B (English)
Inventor
杜岳欣
房佳斐
吴来祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qiyue Information Technology Co Ltd
Original Assignee
Shanghai Qiyue Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qiyue Information Technology Co Ltd
Priority to CN202010319617.5A
Publication of CN111681092A
Application granted
Publication of CN111681092B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 — Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03 — Credit; Loans; Processing thereof
    • G06Q40/06 — Asset management; Financial planning or analysis

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Technology Law (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Human Resources & Organizations (AREA)
  • Operations Research (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a resource scheduling method. A first server, in response to a resource scheduling request sent by a client, sends first service protocol page content to the client. In response to an operation instruction sent by the client indicating that the first service protocol is confirmed, the first server pushes resource scheduling application information to a second server for risk assessment and triggers the second server to apply to a third server for resource scheduling. The first server then sends a resource scheduling result query request to the second server to obtain the resource scheduling result fed back from the third server, and feeds the result back to the client for display. The first, second and third servers correspond, respectively, to first, second and third service parties providing the first, second and third business services to the user. The method thereby realizes resource scheduling based on multiple service parties and allocates resources reasonably. Correspondingly, the invention also provides a server, an electronic device and a storage medium.

Description

Resource scheduling method, server, electronic device and storage medium
Technical Field
The present invention relates to the field of computers, and in particular, to a resource scheduling method based on multiple service parties, a server, an electronic device, and a computer-readable storage medium.
Background
With the development of internet technology, networks are widely and deeply influencing people's lives. More and more people choose suitable financial products on a financial institution's official website or mobile app.
At present, financial institutions cooperate on resources in three modes: the loan-assistance mode, the joint-loan mode and the profit-sharing mode. In the profit-sharing mode, a loan-assistance institution cooperates with a licensed consumer finance company (the profit-sharing institution): the consumer finance company collects all customer information and bears the risk, while profits are shared on the basis of the services provided by the loan-assistance institution, such as recommending qualified customers, customer management and post-loan management.
Under the current profit-sharing mode, the profit-sharing institution must satisfy the following: 1) it can accept customer pricing of up to 36% per year; 2) it approves of the asset performance of the loan-assistance institution, which must have a certain risk-control approval capability, bear its own risk, and have non-performing assets in its acceptance list; 3) its cost of capital is controllable, so that a yield higher than the fixed loan interest rate can be achieved while risk remains controllable. Therefore, profit-sharing institutions are currently dominated by consumer finance companies rather than banks. However, banks hold far more resources than consumer finance companies, and in practice banks account for more than 50% of the financial institutions cooperating with loan-assistance institutions. As a result, roughly 70% of the highly concentrated resources are not fully utilized; that is, resources are wasted because of unreasonable resource allocation. Yet if a bank were simply substituted for the consumer finance company in order to utilize those resources fully, the scheme would be difficult to implement because of regulatory issues such as pricing and asset performance, and because of the problem of risk bearing. Reasonable resource allocation therefore requires the bank to provide resources while other institutions and lending institutions cooperate to supervise pricing, asset performance and the like and to bear the risk. In other words, a resource scheduling method based on multiple service parties is needed so that resources can be allocated reasonably.
The above information disclosed in this background section is only for enhancement of understanding of the background of the disclosure and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of the above, the present specification has been made to provide a resource scheduling method based on multiple service parties, a server, an electronic device and a computer-readable storage medium that overcome or at least partially solve the above problems.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or may be learned by practice of the disclosure.
In a first aspect, the present invention discloses a resource scheduling method based on multiple service parties, including:
a first server, in response to a resource scheduling request sent by a client, sends first service protocol page content to the client to be displayed to a user; the first service protocol page content is acquired by the first server from a second server; the first server corresponds to a first service party providing a first business service for the user, and the second server corresponds to a second service party providing a second business service for the user;
the first server, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, pushes resource scheduling application information pre-entered by the user to the second server for risk assessment so as to generate intention information indicating that the service is accepted, and triggers the second server to apply to a third server for resource scheduling based on the intention information; the third server corresponds to a third service party providing a third business service for the user;
and the first server sends a resource scheduling result query request to the second server to obtain a resource scheduling result fed back from the third server, and feeds the resource scheduling result back to the client for display.
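As a rough illustration only, the three steps above can be sketched in Python. All class and method names are hypothetical, since the disclosure specifies no API, and the second server here stands in for both the risk assessment and the relay to the third server:

```python
class SecondServer:
    """Stand-in for the second (risk-assessment) server."""

    def __init__(self):
        self.protocol_page = "<first service protocol>"
        self._result = None

    def assess_and_apply(self, application):
        # Risk assessment yields "intention" information; if the service is
        # accepted, the application for resource scheduling is forwarded
        # (here: simulated) to the third server.
        intention = {"accepted": bool(application.get("amount", 0) > 0)}
        if intention["accepted"]:
            self._result = {"status": "scheduled", "amount": application["amount"]}
        else:
            self._result = {"status": "rejected"}
        return intention

    def query_result(self):
        # In the disclosure this result is fed back from the third server.
        return self._result


class FirstServer:
    def __init__(self, second):
        self.second = second

    def on_scheduling_request(self):
        # Step 1: return the first-service-protocol page content to the client.
        return self.second.protocol_page

    def on_protocol_confirmed(self, application):
        # Step 2: push the pre-entered application to the second server.
        return self.second.assess_and_apply(application)

    def fetch_result(self):
        # Step 3: query the scheduling result to feed back to the client.
        return self.second.query_result()
```

For example, `FirstServer(SecondServer())` would first serve the protocol page, then relay a confirmed application, then fetch the scheduling result for display.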
In an exemplary embodiment of the present disclosure, before the step of responding to the resource scheduling request, the method further includes:
the first server, in response to a credit granting request sent by the client, generates first information page content in which the user can enter user information, and feeds the first information page content back to the client to be displayed to the user;
the first server, in response to an operation instruction sent by the client indicating that the user information is submitted, sends the user information to the third server through the second server to initiate credit evaluation, and receives the credit evaluation result returned by the third server through the second server; the credit evaluation result includes a credit evaluation report of the user and/or a credit evaluation score calculated from the credit evaluation report and the user information by a preset credit evaluation model;
the first server performs credit approval based on the credit evaluation result, sends the credit approval result to the third server for final review through the second server, and receives the final review result returned by the third server through the second server;
and the first server generates a corresponding credit approval result based on the final review result and feeds it back to the client for display, the credit approval result including the maximum resource scheduling quota acquired by the user.
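A minimal sketch of this credit-granting relay follows; the second-server forwarding is elided, and all names, the approval threshold, and the quota value are assumptions rather than values from this disclosure:

```python
def third_server_evaluate(user_info):
    # Stand-in for the third server: returns a credit report and a score
    # (the disclosure allows a report and/or a model-computed score).
    return {"report": {"user": user_info["name"]}, "score": 720}


def first_server_credit_approval(user_info, max_quota=50000):
    # The first server requests evaluation (via the second server, elided),
    # performs credit approval, and the third server performs final review.
    evaluation = third_server_evaluate(user_info)
    approved = evaluation["score"] >= 600   # assumed approval threshold
    final_review = {"approved": approved}   # final review, simplified here
    quota = max_quota if final_review["approved"] else 0
    return {"approved": final_review["approved"], "max_quota": quota}
```

The returned dictionary corresponds to the credit approval result fed back to the client, including the maximum resource scheduling quota.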
In an exemplary embodiment of the disclosure, before the step of pushing the resource scheduling application information to the second server for risk assessment, the method further includes:
identifying the user type of the user; if the user is a decoupling user or a new user, initiating a credit evaluation request to the third server again through the second server, and performing credit approval based on the new credit evaluation result;
a decoupling user is a user whose most recent credit investigation occurred longer ago than a preset time.
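The "decoupling user" rule above amounts to a staleness check on the last credit investigation; a minimal sketch, with the 90-day validity window chosen as an assumption:

```python
from datetime import datetime, timedelta


def needs_reevaluation(last_credit_check, now, validity=timedelta(days=90)):
    """Return True when credit must be evaluated again before scheduling.

    A user with no prior record is treated as a new user; both new users and
    decoupling users (last check older than the validity window) re-evaluate.
    """
    if last_credit_check is None:
        return True
    return now - last_credit_check > validity
```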
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
and the first server determines, based on the system time, whether the current time is the node for generating a resource return plan; if so, it generates the corresponding resource return plan based on the resource allocation information of the current day and feeds the plan back to the client for display.
In an exemplary embodiment of the present disclosure, the step of generating the resource return plan specifically includes:
the first service end obtains the total number of the additional resources generated by applying for resource scheduling and a preset resource returning time node from the second service end;
the first server calculates the total number of resources to be returned of the user according to the resource distribution information and the total number of the additional resources, and generates the resource returning plan based on the total number of the resources to be returned and the resource returning time node;
the total number of the additional resources comprises a first additional resource number obtained by calculation of the third server and a second additional resource number obtained by calculation of the second server.
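Under the assumption of equal per-node instalments (the disclosure does not fix a schedule shape), the return-plan computation above might look like the following, where the principal is the allocated resource and the two additional amounts come from the third and second servers respectively:

```python
def build_return_plan(principal, first_additional, second_additional, nodes):
    """Spread the total resources to be returned across return time nodes."""
    total_due = principal + first_additional + second_additional
    per_node, remainder = divmod(total_due, len(nodes))
    # Place the rounding remainder on the last node so amounts sum exactly.
    amounts = [per_node] * len(nodes)
    amounts[-1] += remainder
    return list(zip(nodes, amounts))
```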
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
and the first service end responds to the resource returning request sent by the client, and sends a first resource scheduling instruction to a third-party resource scheduling service end associated with the user based on the service identifier of the second service party so as to trigger the third-party resource scheduling service end to schedule the current resource to be returned of the user to the second service party.
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
and the first service end responds to a clearing-in-advance request sent by the client, and sends a second resource scheduling instruction to a third-party resource scheduling service end associated with the user based on the service identifier of the second service party so as to trigger the third-party resource scheduling service end to schedule all resources to be returned of the user to the second service party.
In an exemplary embodiment of the present disclosure, the resource scheduling method further includes:
the first service side responds to a third resource scheduling instruction which is sent by the second service side and indicates that the first service side is required to pay second additional resources, and sends a fourth resource scheduling instruction which indicates that payment is confirmed to a third-party resource scheduling service side associated with the first service side, so that the third-party resource scheduling service side associated with the first service side schedules a corresponding amount of second additional resources to the second service side, and the second service side is triggered to send a fifth resource scheduling instruction which indicates resources to be paid to the third-party resource scheduling service side associated with the second service side, and the third-party resource scheduling service side associated with the second service side schedules a corresponding amount of resources to the third service side at a preset scheduling time node.
In a second aspect, the present invention further provides another resource scheduling method based on multiple service parties, including:
based on the resources to be allocated currently returned by the user and a preset resource allocation rule, a second server calculates the quantity of resources allocated to each resource receiver and generates corresponding resource allocation details; the resource allocation rule includes the service identifiers of the plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage for each resource receiver under different return scenarios; the second server corresponds to a second service party providing a second business service for the user;
and the second server sends the resource allocation details to the third-party resource scheduling server associated with the second server, so as to trigger the third-party resource scheduling server to perform resource scheduling on the resources to be allocated.
In an exemplary embodiment of the present disclosure, the plurality of resource receivers includes: a first server providing a first business service to the user; a third server providing a third business service to the user; and the second server.
In an exemplary embodiment of the present disclosure, if the return scenario is normal return, the resources to be allocated include the quota allocation resource and the first additional resource directionally allocated to the third service party, and the second additional resource shared by the first service party and the second service party; or,
if the return scenario is overdue return, the resources to be allocated further include the quota allocation resource and the first additional resource directionally allocated to the third service party, the second additional resource shared by the first service party and the second service party, the fourth additional resource directionally allocated to the first service party, and the fifth additional resource shared by the first service party and the second service party; or,
if the return scenario is early clearing, the resources to be allocated further include the quota allocation resource and the first additional resource directionally allocated to the third service party, the second additional resource shared by the first service party and the second service party, and the third additional resource directionally allocated to the first service party.
In an exemplary embodiment of the present disclosure, the step of calculating the quantity of resources allocated to each resource receiver specifically includes:
identifying the current return scenario of the user;
if the current return scenario of the user is normal return, counting the quota allocation resource and the first additional resource as resources allocated to the third service party, and calculating the quantities of resources allocated to the first service party and the second service party respectively based on the second additional resource, a first preset percentage and a second preset percentage;
if the current return scenario of the user is overdue return, counting the quota allocation resource and the first additional resource as resources allocated to the third service party, taking the fourth additional resource as the resource directionally allocated to the first service party, calculating the quantities of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage, and further calculating the quantities of resources allocated to the first service party and the second service party respectively based on the fifth additional resource and a third preset percentage;
if the current return scenario of the user is early clearing, counting the quota allocation resource and the first additional resource as resources allocated to the third service party, taking the third additional resource as the resource directionally allocated to the first service party, and calculating the quantities of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage.
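The three allocation branches above can be sketched as one function. The percentages, resource labels, and function names are illustrative assumptions, not values specified in this disclosure:

```python
def allocate(returned, scenario, p1=0.6, p2=0.4, p3=0.5):
    """Split a returned payment among the three service parties.

    `returned` maps resource names ("quota", "first".."fifth" additional)
    to amounts; p1/p2 split the second additional resource between the
    first and second parties, and p3 splits the fifth (overdue only).
    """
    # Quota and first additional resource always go to the third party.
    third = returned.get("quota", 0) + returned.get("first", 0)
    first = returned.get("second", 0) * p1
    second = returned.get("second", 0) * p2
    if scenario == "overdue":
        first += returned.get("fourth", 0)        # directed to first party
        first += returned.get("fifth", 0) * p3    # fifth is shared
        second += returned.get("fifth", 0) * (1 - p3)
    elif scenario == "early":
        first += returned.get("third", 0)         # directed to first party
    return {"first": first, "second": second, "third": third}
```

By construction every returned unit lands with exactly one party, so the three outputs always sum to the total returned.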
In a third aspect, the present invention provides a server, including:
the first response module is used for responding to a resource scheduling request sent by a client and sending first service protocol page content to the client to be displayed to a user; the first service protocol page content is obtained by the first server from a second server in advance; the first service end corresponds to a first service party providing first business service for the user, and the second service end corresponds to a second service party providing second business service for the user;
a second response module, configured to, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, push resource scheduling application information pre-entered by the user to the second server for risk assessment to generate intention information indicating that the service is accepted, and trigger the second server to apply to a third server for resource scheduling based on the intention information; the third server corresponds to a third service party providing a third business service for the user;
and the first data acquisition module is used for sending a resource scheduling result query request to the second server to acquire a resource scheduling result fed back from the third server, and feeding back the resource scheduling result to the client for display.
In an exemplary embodiment of the present disclosure, the server further includes:
the third response module is used for responding to a credit granting request sent by the client, generating first information page content which can be used for the user to input user information, and feeding the first information page content back to the client to display the first information page content to the user;
a fourth response module, configured to send, in response to an operation instruction sent by the client and indicating that the user information is submitted, the user information to the third server through the second server to initiate credit evaluation; receiving a credit evaluation result returned by the third server through the second server; the credit assessment result comprises a credit assessment report of the user and/or a credit assessment score obtained by calculating the credit assessment report and the user information by adopting a preset credit assessment model;
and the credit approval module is used for carrying out credit approval based on the credit evaluation result, sending the credit approval result to the third server side for final approval through the second server side, receiving the final approval result returned by the third server side through the second server side, generating a corresponding credit approval result based on the final approval result, feeding the corresponding credit approval result back to the client side for display, and enabling the credit approval result to comprise the maximum resource scheduling amount acquired by the user.
In an exemplary embodiment of the present disclosure, the server further includes:
the second data acquisition module is used for acquiring a credit evaluation record of the user before the second response module triggers the second server to carry out risk evaluation;
a user judging module, configured to judge, according to the credit evaluation record, whether the user is a decoupling user, and when the user is judged to be a decoupling user, trigger the fourth response module to initiate a credit evaluation request to the third server through the second server and perform credit approval again based on the new credit evaluation result; a decoupling user is a user whose most recent credit investigation occurred longer ago than a preset time.
In an exemplary embodiment of the present disclosure, the server further includes:
and a return plan module, configured to judge, based on the system time, whether the current time is the node for generating a resource return plan, and if so, to generate the corresponding resource return plan based on the resource allocation information of the current day and the user information and feed it back to the client.
In an exemplary embodiment of the present disclosure, the resource return planning module specifically includes:
an additional resource information obtaining unit, configured to obtain, from the second server, a total number of additional resources generated by applying for resource scheduling and a preset resource returning time node;
the computing unit is used for computing the total number of the resources to be returned of the user according to the resource distribution information of the user on the current day and the total number of the additional resources;
the plan feedback unit is used for generating the resource returning plan based on the total number of the resources to be returned and the resource returning time node and feeding the resource returning plan back to the client;
the total number of the additional resources comprises a first additional resource number obtained by calculation of the third server and a second additional resource number obtained by calculation of the second server.
In an exemplary embodiment of the present disclosure, the server further includes:
and a fifth response module, configured to send, in response to the resource returning request sent by the client, a first resource scheduling instruction to a third-party resource scheduling server associated with the user based on the service identifier of the second server, so as to trigger the third-party resource scheduling server to schedule the current to-be-returned resource of the user to the second server.
In an exemplary embodiment of the present disclosure, the server further includes:
and a sixth response module, configured to send, in response to a resource early clearing request sent by the client, a second resource scheduling instruction to a third-party resource scheduling server associated with the user based on the service identifier of the second server, so as to trigger the third-party resource scheduling server to schedule all resources to be returned of the user to the second server.
In an exemplary embodiment of the present disclosure, the server further includes:
a seventh response module, configured to, in response to a third resource scheduling instruction sent by the second server and indicating that the first server is required to pay for the second additional resource, send a fourth resource scheduling instruction indicating that payment is confirmed to a third-party resource scheduling server associated with the first server, so that the third-party resource scheduling server associated with the first server schedules a corresponding amount of the second additional resource to the second server, so as to trigger the second server to send a fifth resource scheduling instruction indicating a resource to be paid to the third-party resource scheduling server associated with the second server, so that the third-party resource scheduling server associated with the second server schedules a corresponding amount of the resource to the third server at a preset scheduling time node;
the third resource scheduling instruction is generated when the second server determines, based on the system time and the resource return plan, that the user's current overdue duration meets a preset trigger condition, and the instruction includes the quantity of the second additional resource that the first server is requested to pay.
In a fourth aspect, the present specification further provides another server, including:
a first calculation module, configured to calculate, based on the resources to be allocated currently returned by the user and a preset resource allocation rule, the quantity of resources allocated to each resource receiver, and to generate corresponding resource allocation details; the resource allocation rule includes the service identifiers of the plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage for each resource receiver under different return scenarios; this server is the second server, which corresponds to a second service party providing a second business service for the user;
and the resource allocation module is used for sending the resource allocation details to a third-party resource scheduling server associated with the second server so as to trigger the third-party resource scheduling server to perform resource allocation on the resources to be allocated.
In an exemplary embodiment of the present disclosure, the plurality of resource receivers includes a first server that receives the quota allocation resource and the first additional resource and provides a first business service to the user, the second server, and a third server that provides a third business service to the user.
In an exemplary embodiment of the present disclosure, when the return scenario is normal return, the resources to be allocated include the quota allocation resource and the first additional resource directionally allocated to the third service party, and the second additional resource shared by the first service party and the second service party; or,
when the return scenario is overdue return, the resources to be allocated further include the quota allocation resource and the first additional resource directionally allocated to the third service party, the second additional resource shared by the first service party and the second service party, the fourth additional resource directionally allocated to the first service party, and the fifth additional resource shared by the first service party and the second service party; or,
when the return scenario is early clearing, the resources to be allocated further include the quota allocation resource and the first additional resource directionally allocated to the third service party, the second additional resource shared by the first service party and the second service party, and the third additional resource directionally allocated to the first service party.
In an exemplary embodiment of the present disclosure, the first calculation module specifically includes:
a return scene recognition unit for recognizing the current return scene of the user;
a first calculating unit, configured to, when the return scenario identification unit identifies that the return scenario is currently a normal return, take the quota allocation resource and the first additional resource as resources allocated to the third service party, and calculate the number of resources allocated to the first service party and the second service party based on the second additional resource, a first preset percentage, and a second preset percentage, respectively;
a second calculating unit, configured to, when the returning scenario identification unit identifies that the returning is currently after expiration, take the quota allocation resource and the first additional resource as resources allocated to the third service provider, calculate, based on the second additional resource, a first preset percentage, and a second preset percentage, the number of resources allocated to the first service provider and the second service provider respectively, take the fourth additional resource as a resource directionally allocated to the first service provider, and calculate, based on the fifth additional resource and a third preset percentage, the number of resources allocated to the first service provider and the second service provider respectively;
a third calculating unit, configured to, when the return scenario identification unit identifies that the return scenario is currently cleared in advance, use the quota allocation resource and the first additional resource as resources allocated to the third service provider, calculate, based on the second additional resource, the first preset percentage, and the second preset percentage, the number of resources allocated to the first service provider and the second service provider, respectively, and use the third additional resource as a resource directionally allocated to the first service provider.
In a fifth aspect, the present specification provides an electronic device comprising a processor and a memory; the memory is configured to store a program for any one of the above methods, and the processor is configured to execute the program stored in the memory to implement the steps of any one of the above methods.
In a sixth aspect, the present specification provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of any one of the above methods.
The invention has the beneficial effects that:
the invention pushes the user information and the resource scheduling information of the user to the second server by the first server, then carries out risk assessment on the user by the second server corresponding to the second server to generate the intention information representing the accepted business, then the second server applies for resource scheduling by the third server based on the intention information, thereby realizing the resource scheduling method based on multiple servers, and the first server and the second server supervise pricing, asset performance and the like in the resource scheduling process and simultaneously undertake risks, while the third server only needs to realize resource configuration, thereby avoiding the problem that a large amount of high-concentration resources are not fully utilized due to the main use of a fund-consuming company, leading the resources to be reasonably distributed, and in the resource configuration process, the third server does not need to undertake any risk and does not need to supervise pricing or asset performance and the like, i.e. the system resources and management costs of the third party are reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a first exemplary embodiment;
FIG. 2 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a second exemplary embodiment;
FIG. 3 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a third exemplary embodiment;
FIG. 4 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a fourth exemplary embodiment;
FIG. 5 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a fifth exemplary embodiment;
FIG. 6 is a flowchart illustrating a method for multi-server based resource scheduling in accordance with a sixth exemplary embodiment;
FIG. 7 is a flowchart illustrating a multi-server based resource scheduling method in accordance with a seventh exemplary embodiment;
FIG. 8 is a block diagram illustrating a server in accordance with another exemplary embodiment;
FIG. 9 is a block diagram illustrating a server in accordance with yet another exemplary embodiment;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The foregoing is only an overview of the technical solutions of the present invention. Embodiments of the present invention are described below so that the technical means of the present invention can be understood more clearly, and so that the above and other objects, features, and advantages of the present invention become more readily apparent.
The example embodiments described below may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and repeated description thereof is omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the disclosure.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various components, these components should not be limited by these terms. These terms are used to distinguish one element from another. Thus, a first component discussed below may be termed a second component without departing from the teachings of the disclosed concept. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It is to be understood by those skilled in the art that the drawings are merely schematic representations of exemplary embodiments, and that the blocks or processes shown in the drawings are not necessarily required to practice the present disclosure and are, therefore, not intended to limit the scope of the present disclosure.
The present invention provides a resource scheduling method based on multiple service parties, which solves the prior-art problem that highly concentrated resources are wasted by unreasonable resource allocation/scheduling. The general idea of the invention is as follows: the first server, in response to a resource scheduling request sent by the client, sends first service protocol page content to the client for display to the user, the first service protocol page content having been obtained by the first server from a second server; the first server corresponds to a first service party providing a first service for the user, and the second server corresponds to a second service party providing a second service for the user. The first server, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, pushes resource scheduling application information to the second server for risk assessment so as to generate intention information indicating that the business is accepted, and triggers the second server to apply to a third server for resource scheduling based on that intention information; the third server corresponds to a third service party providing a third service for the user. Finally, the first server sends a resource scheduling result query request to the second server to obtain the resource scheduling result fed back from the third server, and feeds that result back to the client for display.
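The general idea above can be sketched as a short relay of calls among stubs for the three servers. All class, method, and field names here are illustrative assumptions, not from the disclosure:

```python
class SecondServer:
    """Stub for the second server (risk-assessing side); illustrative only."""
    def get_protocol_page(self):
        return "<first service protocol page content>"

    def risk_assess(self, request):
        # internal approval: emit an "accept business" intention on success
        if request.get("ok"):
            return {"intent": "accept", "request": request}
        return None


class ThirdServer:
    """Stub for the third server (resource provider); illustrative only."""
    def apply_scheduling(self, intent):
        return {"granted": True, "amount": intent["request"]["amount"]}


def schedule(request, second, third):
    """End-to-end flow driven by the first server, per the general idea."""
    page = second.get_protocol_page()        # step 1: fetch the protocol page
    assert page                              # shown to the user via the client
    intent = second.risk_assess(request)     # step 2: push for risk assessment
    if intent is None:
        return {"granted": False}
    return third.apply_scheduling(intent)    # step 3: apply to the third server
```

In this sketch the first server never decides the grant itself; it only relays, which mirrors how risk stays with the first and second parties while the third party merely configures resources.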
In the embodiments of the present invention, the terms referred to are:
the term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The technical solution of the present invention will be described and explained in detail by means of several specific examples.
Referring to FIG. 1, the resource scheduling method based on multiple service parties in this embodiment includes:
S101, the first server, in response to a resource scheduling request sent by the client, sends first service protocol page content to the client so as to display the first service protocol page to the user.
In this embodiment, the user first logs in to the corresponding web page/APP through a client provided by the first service party corresponding to the first server, and then accesses the corresponding resource scheduling application page. Specifically, the resource scheduling application page is obtained by the client in response to the user's access operation: the client acquires the page content from the first server and then parses and renders it, so that the user can enter resource scheduling application information on the page. After the user enters the resource scheduling application information and triggers an operation instruction indicating that resource scheduling is applied for from the first service party corresponding to the first server, the client responds to the operation instruction and sends a resource scheduling request to the first server. The resource scheduling request includes the user information of the user and the entered resource scheduling application information, such as installment return information, the return mode (such as equal-principal-and-interest or equal-installment information), the scheduling deadline, the first-period return resource amount, the benefit information, and the resource receiving mode (such as a certain bank card, payment account, or WeChat account).
In this embodiment, the first service protocol page content is obtained by the first server from a second server, where the first server corresponds to a first service party providing a first service for the user, and the second server corresponds to a second service party providing a second service for the user. Specifically, the first service party is a loan-assistance institution providing a loan-assistance service (i.e., the first business service) for the user, and the first server is the server of that institution; accordingly, the user logs in to the official website or APP corresponding to the first server, accesses the resource scheduling application page, fills in the resource scheduling application information, and clicks the application submission icon/button on the page, whereupon the client generates the resource scheduling request and sends it to the first server. The second service party is an insurance institution providing a personal credit guarantee insurance service (the second business service) for the user, and the second server is the server of that institution; the first business protocol page content is the insurance protocol page content, i.e., the user, as the applicant, signs a personal credit guarantee insurance agreement with the insurance company, with the third service party, which provides the resource scheduling service (i.e., the third business service) for the user, as the insured. Accordingly, when the first server receives a resource scheduling request from the client, it obtains the first service protocol page content from the second server in response to that request and feeds it back to the client, so that the user can carefully read the specific content of the first service protocol.
S102, the first server, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, pushes the resource scheduling information previously entered by the user to the second server for risk assessment so as to generate intention information indicating that the business is accepted, and triggers the second server to apply to a third server for resource scheduling based on that intention information.
In this embodiment, the client acquires the first service protocol page content from the first server, parses and renders it, and displays it to the user. After the user finishes reading the first service protocol and agrees with/confirms its content, the user can tap the confirm/agree icon on the first service protocol page; the client then generates an operation instruction indicating that the user has confirmed the first service protocol and sends it to the first server.
In this embodiment, the third service end corresponds to a third service party that provides a third service to the user. Specifically, the third server is a resource provider, such as a bank, that provides resources to the user, and may provide the user with highly concentrated resources.
In this embodiment, before executing step S102, the method further includes:
identifying the user type of the user: if the user is a decoupled user or a new user, initiating a credit evaluation request to the third server through the second server, and performing credit approval based on the credit evaluation result; if the user is neither a decoupled user nor a new user, directly executing step S102.
In this embodiment, a decoupled user is a credit-granted user whose latest credit investigation time exceeds a preset time. Specifically, the preset time is three months, which can be adjusted according to actual needs.
In a specific embodiment, each time the third server performs a credit evaluation on the user (e.g., performs a credit investigation query), a corresponding credit evaluation record (including the credit investigation reason, the credit evaluation time, and the credit evaluation result, which includes the credit investigation report) is generated and fed back to the first server through the second server. The first server can thus determine from the user's credit evaluation record and the system time whether the user is a decoupled user: if the latest credit investigation time of the user, measured against the system time of the first server, exceeds three months, the user is determined to be a decoupled user. A new user is a newly registered user who has not yet been granted credit; if a new user applies for resource scheduling, the third server naturally needs to perform a credit evaluation on the user first, and credit approval is then performed according to the credit evaluation result.
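The user-type check above can be sketched as a small classifier. The 90-day window stands in for the "three months" preset time; the function name and arguments are illustrative assumptions:

```python
from datetime import datetime, timedelta

PRESET_WINDOW = timedelta(days=90)  # "three months", adjustable per the embodiment


def classify_user(last_credit_check, has_credit_line, now=None):
    """Return 'new', 'decoupled', or 'active' per the embodiment's rules.

    last_credit_check: datetime of the latest credit investigation, or None
    has_credit_line  : whether the user has already been granted credit
    """
    now = now or datetime.now()
    if not has_credit_line:
        return "new"                 # registered but never granted credit
    if last_credit_check is None or now - last_credit_check > PRESET_WINDOW:
        return "decoupled"           # latest credit check older than the preset time
    return "active"                  # proceeds directly to step S102
```

Only "new" and "decoupled" users trigger a fresh credit evaluation request through the second server.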
In this embodiment, when the first server identifies that the user is a new user or a decoupled user, it generates a credit evaluation request and sends it to the third server through the second server. Specifically, the credit evaluation request includes the user information of the user, the credit evaluation reason, the default credit evaluation method (for example, a credit investigation query only, or a credit investigation query combined with a preset credit evaluation model to calculate a credit score), and credit evaluation auxiliary data; the first server sends these to the second server, which forwards them to the third server to initiate the credit evaluation. After the third server performs the credit evaluation, it feeds the credit evaluation result back to the second server, which feeds it back to the first server. The first server performs approval according to the credit evaluation result and sends the approval result and the quota to the third server through the second server for final review; the third server feeds the final review result back to the second server, and the first server obtains it from the second server to complete the credit approval.
Specifically, the credit evaluation performed by the third server includes querying the user's credit record; the third server then calculates a credit evaluation score from the user's credit investigation report and user information according to a preset credit evaluation model, and finally feeds the credit investigation report and/or credit evaluation score back to the second server, which feeds the credit evaluation result (including the credit investigation report and/or the credit evaluation score) back to the first server.
In a specific embodiment, when the second server performs its internal approval (i.e., risk assessment) according to the resource scheduling information it receives and the approval passes, the second server generates a corresponding acceptance intention, i.e., the intention information indicating that the business is accepted; the second server then initiates a resource scheduling application to the third server based on that intention information, which is how the first server triggers the second server to apply to the third server for resource scheduling. Correspondingly, the third service party corresponding to the third server performs its own internal approval according to the resource scheduling information, and the third server feeds its approval result and resource scheduling result back to the second server, the resource scheduling result including the approved amount of resources to be released, the release date of the resources, and the like.
S103, the first server sends a resource scheduling result query request to the second server to obtain the resource scheduling result fed back by the third server, and feeds the resource scheduling result back to the client for display.
Further, the second server feeds back the release reconciliation file to the first server and the third server on day D+1 (i.e., the day after the release day), so that the first server and the third server can perform subsequent reconciliation.
In this embodiment, the first server pushes the resource scheduling application information to the second server; the second service party corresponding to the second server performs a risk assessment on the user to generate intention information indicating that the business is accepted; and the second server triggers the third server to perform resource scheduling based on that intention. The loan-assistance party thus recommends the corresponding user, and the first and second service parties bear the risk, which avoids the problem that a large amount of highly concentrated resources goes underutilized when a fund-consuming company is mainly relied upon; meanwhile, the third service party neither bears the corresponding risk nor needs to supervise pricing or asset performance, which reduces the system resources of the third server.
Further, since the user usually needs to apply for credit before applying for resource scheduling, referring to FIG. 2, before step S101 is executed, the multi-party service method of this embodiment further includes:
S201, the first server, in response to a credit granting request sent by the client, generates first information page content for the user to enter user information, and feeds the first information page content back to the client so as to display the first information page to the user.
In this embodiment, the user accesses a first service display page through the client (specifically, in response to the user's access operation, the client acquires the first service display page content from the first server in order to display the page), so as to learn about the first service provided by the first service party. When the user triggers a credit application operation on the first service display page, the client responds by generating the credit granting request and sends it to the first server to acquire the first information page content.
S203, the first server, in response to an operation instruction sent by the client indicating that the entered user information is submitted, sends the user information to the third server through the second server to initiate a credit evaluation request, and receives the credit evaluation result returned by the third server through the second server.
In this embodiment, the client obtains the user information entered by the user on the first information page, such as the name, certificate information (e.g., an identity card number or a passport number), bank information (e.g., a bank name and a bank card number), and a mobile terminal device identification number (e.g., a mobile phone number). After the user information is fully entered, the client generates an operation instruction indicating that the user information is submitted based on the user's operation (e.g., clicking a submit icon preconfigured on the first information page), and sends the operation instruction together with the user information to the first server. The operation instruction also carries the reason for initiating the credit evaluation, the credit evaluation method to be used, and other credit evaluation auxiliary information.
In this embodiment, after receiving the user information, the first server sends the user information, the credit evaluation reason, the credit evaluation method, and the credit evaluation auxiliary data to the second server, which forwards them to the third server to initiate the credit evaluation; after the third server performs the credit evaluation, the credit evaluation result is fed back to the second server, which feeds it back to the first server. Specifically, operated by the third service party, the third server performs the credit evaluation on the user, which includes performing a credit investigation on the user and/or calculating a credit score from the user's credit investigation report and user information (the credit evaluation auxiliary data therein) using a preset credit evaluation model (provided by the first server and preconfigured on the third server); the third server then feeds the credit investigation report and/or the credit score, i.e., the credit evaluation result, back to the second server, which feeds it back to the first server. That is, the credit evaluation result includes the user's credit investigation report and/or credit score.
S205, the first server performs credit approval based on the credit evaluation result, sends the credit approval result to the third server through the second server for the third service party to perform a final review, and receives the final review result returned by the third server through the second server.
In this embodiment, operated by the first service party, the first server performs credit approval based on the credit evaluation result. If the user's credit evaluation result meets a preset condition, for example, the credit investigation report shows that the user's credit standing is good and/or the credit score exceeds a preset threshold, the user passes the credit approval and the first service party grants the user a certain credit line. The first server then sends the credit approval result to the second server for review, and the second server sends it to the third server for final review (i.e., the third service party internally reviews the user's credit line); the third server feeds the final review result back to the first server through the second server, and the second service party's review result is fed back to the first server at the same time.
S207, the first server generates a corresponding credit approval result based on the final review result, and feeds the credit approval result back to the client for display.
In this embodiment, the credit approval result includes the maximum resource scheduling limit granted to the user. In one embodiment, the maximum resource scheduling limit is the credit line obtained by the user, i.e., the maximum loan amount that the user can apply for from the third service party.
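The approval rule in S205/S207 can be sketched as a threshold check. The threshold value, the quota formula, and all names are illustrative assumptions; the disclosure only says the report must look good and/or the score must clear a preset threshold:

```python
def approve_credit(report_ok, credit_score, score_threshold=600):
    """Sketch of the credit approval rule: pass when the report is acceptable
    and the score clears a preset threshold (threshold value is assumed)."""
    if report_ok and credit_score > score_threshold:
        # hypothetical quota rule: scale a base amount by the score margin
        quota = 10000 * (credit_score / score_threshold)
        return {"approved": True, "max_scheduling_limit": round(quota, 2)}
    return {"approved": False, "max_scheduling_limit": 0}
```

The returned `max_scheduling_limit` plays the role of the credit line that would then go to the third server for final review.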
Further, after the user successfully applies for resource scheduling, in order to prevent the user from forgetting the return time and the like, a corresponding resource return plan is generated according to the user's resource scheduling information to notify the user. Specifically, referring to FIG. 3, the multi-service method of this embodiment further includes:
S301, the first server judges, based on the system time, whether the current time is a preset resource return plan generation time node; if so, step S303 is executed, and if not, the judgment continues.
In this embodiment, the system time is the system time of the first server. Specifically, to save system energy consumption, the resource return plan generation time node is uniformly set to twelve o'clock midnight on the day of resource release; that is, when the first server judges that it is currently midnight of the release day, it generates the corresponding resource return plan based on that day's resource release information. Of course, if multiple users receive their resources on the same day, i.e., the third server releases resources to multiple users on that day, the first server generates the corresponding resource return plans in batches according to each user's release information fed back by the third server through the second server, and feeds the plans back to the clients.
S303, generating a corresponding resource return plan based on the resource release information of the current day, and feeding it back to the client to display the resource return plan to the user.
In this embodiment, generating the resource return plan in step S303 includes: the first server obtains, from the second server, the total amount of additional resources generated by the resource scheduling application and the preset resource return time nodes; it then calculates the total amount of resources the user must return according to the resource release information and the total amount of additional resources, and finally generates the corresponding resource return plan based on the calculated total and the return time nodes. The total amount of additional resources includes a first additional resource amount, such as interest, calculated by the third server, and a second additional resource amount, such as a premium, calculated by the second server; the resource return time nodes are obtained by the second server from the third server. Further, if a fourth service party provides a fourth service (i.e., a guarantee) to the user, the total additional resources correspondingly include the guarantee fee sent by the fourth service party to the second server.
In a specific embodiment, if the user selected an installment return mode in the resource scheduling application information filled in when applying for resource scheduling, the resource return plan correspondingly includes the amount of resources the user must return in each period, the return time node for each period, and the like; if the user selected a lump-sum (non-installment) return mode, the resource return plan correspondingly includes the full amount of resources to be returned and its return time node. Still further, the first server sends a corresponding return reminder notification to the client before each return time node, prompting the user to return the current period's amount or the full amount of resources to be returned.
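A minimal sketch of the S303 plan calculation, under two illustrative assumptions not stated in the text: the total is split evenly across periods, and periods are spaced 30 days apart. Names and the sample due date are hypothetical:

```python
from datetime import date, timedelta


def build_return_plan(principal, interest, premium, guarantee_fee=0.0,
                      periods=1, first_due=date(2020, 5, 1)):
    """Total to return = quota allocation resource (principal) plus the first
    (interest) and second (premium) additional resources, plus an optional
    guarantee fee; periods=1 models the lump-sum (non-installment) mode."""
    total = principal + interest + premium + guarantee_fee
    per_period = round(total / periods, 2)  # even split is an assumption
    return [{"period": i + 1,
             "amount": per_period,
             "due": first_due + timedelta(days=30 * i)}  # 30-day spacing assumed
            for i in range(periods)]
```

Each entry pairs an amount with a return time node, matching the installment plan described above; with `periods=1` the single entry is the lump-sum plan.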
Further, after the user receives the resource return plan, the return operation is usually performed at or before the return time node. Accordingly, referring to FIG. 4, the multi-party service method of this embodiment further includes:
S401, the first server, based on a resource return request sent by the client and on the service identifier of the second service party, sends a first resource scheduling instruction to the third-party resource scheduling server associated with the user, so as to trigger that server to schedule the user's current resources to be returned to the second service party.
In this embodiment, the user can trigger the return of the current resources on the resource return page (obtained by the client, in response to the user's access operation, acquiring the resource return page content from the first server and then parsing and rendering it), for example by clicking a repayment icon/button on the page. This triggers the client to obtain the return information page content from the server, on which the user can select a previously added third-party resource scheduling institution (such as a deduction bank and the corresponding bank card for repayment) or add a new one. After the user selects/adds the corresponding third-party resource scheduling institution, the client generates the corresponding resource return request and sends it to the first server. The resource return request includes the previously obtained service identifier of the second service party and the third-party resource scheduling institution information.
In a specific embodiment, the service identifier refers to the merchant number of the second service party. Accordingly, after the third-party resource scheduling server corresponding to the third-party resource scheduling mechanism (i.e., the deduction bank) selected/added by the user receives the first resource scheduling instruction (e.g., a deduction request), the corresponding amount of resources is scheduled from the account (i.e., the bank card) previously established by the user at the third-party resource scheduling mechanism to the account pre-designated by the second service party.
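As a sketch of what such a first resource scheduling instruction might carry, the following assembles the fields named above; every field name here is an illustrative assumption, not a normative message format of this disclosure:

```python
def build_deduction_instruction(merchant_no, card_no, amount, order_id):
    """Assemble a first resource-scheduling (deduction) instruction for the
    third-party resource scheduling server associated with the user."""
    return {
        "payee_merchant_no": merchant_no,  # service identifier of the second service party
        "payer_card_no": card_no,          # user's card at the third-party institution
        "amount": amount,                  # current amount of resources to be returned
        "order_id": order_id,              # idempotency key for this scheduling request
    }

instruction = build_deduction_instruction("M10001", "6222***1234", 400.0, "ORD-001")
```

The merchant number lets the third-party resource scheduling server route the deducted amount to the account pre-designated by the second service party.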
In this embodiment, the resources to be returned include the quota allocation resources (such as the principal) and the first additional resources (such as the interest), both directionally allocated to the third service party, and the second additional resources (such as the premium) shared by the first service party and the second service party. Of course, if the user selects the guarantee service, the resources to be returned correspondingly also include the guarantee fee allocated to the guarantee institution.
Further, if the user does not return resources at or before the return time node, the first server can determine, according to the system time, that the user is overdue and generate a corresponding collection notification. Accordingly, the resources to be returned by the user this time include, in addition to the quota allocation resources, the first additional resources and the second additional resources, a fourth additional resource directionally allocated to the first service party, namely the overdue liquidated damages, and a fifth additional resource shared by the first service party and the second service party, namely the penalty interest generated by the overdue.
Further, even if the user has selected the staged return mode, the user may still apply for early clearing. Accordingly, referring to fig. 5, the multi-party service method of this embodiment further includes:
s501, the first service side responds to the early clearing request sent by the client side, and sends a second resource scheduling instruction to a third-party resource scheduling service side associated with the user based on the service identifier of the second service side so as to trigger the third-party resource scheduling service side to schedule all resources to be returned of the user to the second service side.
In this embodiment, the user may select early clearing on the resource return page, for example by clicking an early clearing icon/button, which triggers the client to acquire the corresponding return information page content from the first server, so that the user may select a pre-added third-party resource scheduling mechanism or add a new one on the return information page. After the user selects/adds the third-party resource scheduling mechanism, the client generates a corresponding early clearing request and sends it to the first server. The early clearing request includes the service identifier of the second service party, acquired in advance, and the third-party resource scheduling mechanism information, such as the information of the third-party resource scheduling server corresponding to the third-party resource scheduling mechanism.
In a specific embodiment, after the third-party resource scheduling server corresponding to the third-party resource scheduling mechanism (i.e., the deduction bank) selected/added by the user receives the second resource scheduling instruction (e.g., a deduction request), the corresponding amount of resources is transferred from the account (i.e., the bank card) previously established by the user at the third-party resource scheduling mechanism to the account pre-designated by the second service party.
In this embodiment, the resources to be returned include the quota allocation resources and the first additional resources directionally allocated to the third service party, the second additional resources shared by the first service party and the second service party, and a third additional resource directionally allocated to the first service party, namely the early-clearing liquidated damages.
Furthermore, the user may fail to return resources for an extended time, in which case the first service party is required to pay the second additional resource on the user's behalf. Accordingly, referring to fig. 6, the multi-party service method of this embodiment further includes:
S603, the first server, in response to a third resource scheduling instruction sent by the second server indicating that the first service party is requested to pay the second additional resource on the user's behalf, sends a fourth resource scheduling instruction indicating that the payment is confirmed to the third-party resource scheduling server associated with the first service party, so that the third-party resource scheduling server associated with the first service party schedules a corresponding amount of the second additional resource to the second service party. This triggers the second server corresponding to the second service party to send a fifth resource scheduling instruction, indicating that the resources are to be paid to the third service party, to the third-party resource scheduling server associated with the second service party, and the third-party resource scheduling server associated with the second service party schedules the corresponding amount of resources to the third service party at a preset scheduling time node.
In this embodiment, the second server may first determine, according to the system time and the resource return plan, whether the current overdue time of the user reaches a preset trigger condition, that is, whether the overdue time of the user (i.e., the time the resources have not been returned on schedule) reaches a preset overdue time threshold; if so, the second server sends the third resource scheduling instruction indicating that the first service party is requested to pay the second additional resource on the user's behalf. Specifically, the preset overdue time threshold is 38 days, that is, when the overdue time of the user reaches 38 days, the second server requests the first service party to pay the second additional resource for the user. Of course, the overdue time threshold may be adjusted according to the actual needs of the second service party.
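The overdue trigger check described above can be sketched as follows; the 38-day threshold is the one named in this embodiment, while the function name is an assumption:

```python
from datetime import date

OVERDUE_THRESHOLD_DAYS = 38  # preset overdue time threshold of this embodiment

def should_request_compensation(return_node, today):
    """Return True when the user's overdue time reaches the preset threshold,
    i.e. when the second server should send the third resource scheduling
    instruction asking the first service party to pay on the user's behalf."""
    return (today - return_node).days >= OVERDUE_THRESHOLD_DAYS

# e.g. a return node of 2020-01-01 reaches the threshold on 2020-02-08
```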
In this embodiment, after receiving the third resource scheduling instruction, the first server sends a fourth resource scheduling instruction indicating that payment of the second additional resource is confirmed to the third-party resource scheduling server associated with the first service party, so as to prompt the bank pre-designated by the first service party to deduct a corresponding amount of resources from the corresponding bank account to the second service party on the 39th overdue day. Accordingly, upon receipt of the second additional resource (the second additional resource includes the portion accrued before the return time node and a 15% portion accrued between the return time node and the claims date; 85% of the portion before the return time node is to be scheduled to the first server corresponding to the first service party, of which 30% is to be settled within two weeks and 55% is not involved in claims, counted monthly for the first three months), the second server sends, to the third-party resource scheduling server associated with the second service party (i.e., the bank server corresponding to the bank account of the second service party), a fifth resource scheduling instruction indicating that a corresponding amount of resources is to be paid to the third service party, and the third-party resource scheduling server associated with the second service party schedules the corresponding amount of resources to the third server at the preset scheduling time node. In one embodiment, the preset scheduling time node is the day after the first service party pays the second additional resource.
Based on the same inventive concept as the multi-service-party-based resource scheduling method in the foregoing embodiments, the present invention further provides another multi-service-party-based resource scheduling method, which is used to solve the problem in the prior art that resources are wasted due to an unreasonable resource configuration/scheduling manner. The general idea of the present invention is as follows: the second server calculates the quantity of resources allocated to each resource receiver and generates corresponding resource allocation details, based on the resources to be allocated currently returned by the user and a preset resource allocation rule; the resource allocation rule includes the service identifiers of the plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage of each resource receiver under different return scenarios; the second server corresponds to the second service party providing the second business service for the user; the second server then sends the resource allocation details to a third-party resource scheduling server associated with the second service party, so as to trigger the third-party resource scheduling server to perform resource scheduling on the resources to be allocated. The technical solution of the present invention will be described and explained in detail through specific examples.
In this embodiment, the multiple service parties involved in the resource scheduling method include a first service party, a second service party, and a third service party in the above embodiment, and the functions of the service terminals corresponding to the respective service parties are the same, except that in the above embodiment, the processes of scheduling the resource from the third service party to the user and returning the resource to the user are described from the perspective of the first service terminal, and in this embodiment, the resource scheduling process between the service terminals is described after returning the resource by the user from the perspective of the second service terminal.
Referring to fig. 7, another resource scheduling method based on multiple service parties of the present invention includes:
and S701, the second server calculates the quantity of the resources distributed by each resource receiver based on the resources to be distributed currently returned by the user and a preset resource distribution rule, and generates a corresponding resource distribution detail.
In this embodiment, the resource allocation rule includes service identifiers of a plurality of resource receivers that receive the resource to be allocated, and the resource allocation quantity/percentage of each resource receiver in different return scenarios. Specifically, the plurality of resource receivers include a first service party providing a first service to a user, a second service party providing a second service to the user, and a third service party providing a third service to the user.
In this embodiment, if the return scenario is normal return, that is, the user returns at or before the return time node in the resource return plan, the resources to be allocated include the quota allocation resources and the first additional resources directionally allocated to the third service party, and the second additional resources shared by the first service party and the second service party. Specifically, the first preset percentage is 30% and the second preset percentage is 15%, that is, 30% of the second additional resource is allocated to the first service party, 15% is allocated to the second service party, and 55% is reserved for claim payouts; when there is still a remainder after the claims are paid, the first service party and the second service party are each allocated 50% of the remaining portion.
In this embodiment, if the return scenario is overdue return, that is, the user returns after the return time node in the resource return plan, the resources to be allocated include, in addition to the quota allocation resources, the first additional resources and the second additional resources, a fourth additional resource directionally allocated to the first service party and a fifth additional resource shared by the first service party and the second service party. Specifically, the third preset percentage is 50%, that is, the first service party and the second service party are each allocated 50% of the fifth additional resource.
In this embodiment, if the returning scenario is cleared in advance, that is, the user clears all the resources before the returning time node in the resource returning plan, the resource to be allocated includes, in addition to the quota allocation resource, the first additional resource and the second additional resource, a third additional resource directionally allocated to the first service party.
In this embodiment, the step of calculating the number of resources allocated by each resource receiver in step S701 specifically includes:
identifying a current return scene of the user;
if the current return scene of the user is normal return, using the quota allocation resource and the first additional resource as resources allocated to a third service party, and respectively calculating the quantity of the resources allocated to the first service party and the second service party based on the second additional resource, the first preset percentage and the second preset percentage;
if the current return scenario of the user is identified as overdue return, using the quota allocation resources and the first additional resources as the resources allocated to the third service party, using the fourth additional resource as the resource allocated to the first service party, calculating the quantity of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage, and further calculating the quantity of resources allocated to the first service party and the second service party respectively based on the fifth additional resource and the third preset percentage;
and if the current return scene of the user is identified to be cleared in advance, using the quota allocation resource and the first additional resource as the resource allocated to the third service party, simultaneously using the third additional resource as the resource allocated to the first service party, and respectively calculating the quantity of the resource allocated to the first service party and the second service party based on the second additional resource, the first preset percentage and the second preset percentage.
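The three branches above can be condensed into one allocation routine. This sketch assumes the 30%/15% premium split and the 50% penalty split named earlier, and leaves the remaining 55% claims pool unassigned; the function and scenario names are illustrative assumptions:

```python
FIRST_PCT, SECOND_PCT, PENALTY_PCT = 0.30, 0.15, 0.50  # preset percentages

def allocate(scenario, quota, first_add, second_add,
             third_add=0.0, fourth_add=0.0, fifth_add=0.0):
    """Split the returned resources among the three service parties by
    return scenario: "normal", "overdue", or "early" (early clearing)."""
    shares = {
        "third_party": quota + first_add,        # principal + interest
        "first_party": second_add * FIRST_PCT,   # 30% of the premium
        "second_party": second_add * SECOND_PCT, # 15% of the premium
    }
    if scenario == "overdue":
        # overdue liquidated damages go to the first party; penalty split 50/50
        shares["first_party"] += fourth_add + fifth_add * PENALTY_PCT
        shares["second_party"] += fifth_add * PENALTY_PCT
    elif scenario == "early":
        # early-clearing liquidated damages go to the first party
        shares["first_party"] += third_add
    return shares
```

For instance, under the overdue scenario the fourth additional resource accrues entirely to the first service party while the fifth additional resource is halved between the first and second service parties, exactly as in the second branch above.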
And S702, the second server sends the resource allocation details to a third-party resource scheduling server associated with the second server, so as to trigger the third-party resource scheduling server to perform resource allocation on the resource to be allocated.
In a specific embodiment, the third-party resource scheduling server associated with the second service party is the bank server of the bank account set up by the second service party. Correspondingly, after the third-party resource scheduling server receives the resource allocation details sent by the second server, the resources to be allocated that have been received in the bank account corresponding to the second service party are allocated according to the resource allocation details, that is, the corresponding amounts of resources are respectively transferred to the first service party and the third service party.
Of course, if the user also uses a fourth service, i.e., a guarantee service, the resources to be returned also include the guarantee fee directionally allocated to the fourth service party.
Based on the same inventive concept as the multi-service-based resource scheduling method in the foregoing embodiments, the present invention further provides a server, on which a computer program is stored, which when executed by a processor implements the steps of any one of the foregoing multi-party service methods.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the methods of the present invention. For details not disclosed in the embodiment of the apparatus of the present embodiment, please refer to the embodiment of the method disclosed herein.
As shown in fig. 8, the present embodiment provides a server, where the server corresponds to a first server that provides a first service to a user, and specifically, the server includes:
a first response module 81, configured to send, in response to a resource scheduling request sent by a client, first service protocol page content to the client for presentation to a user; the first service protocol page content is obtained by the first response module 81 from the second server in advance; the second server corresponds to a second service party providing a second service for the user;
a second response module 82, configured to, in response to an operation instruction sent by the client and indicating that the first service protocol is confirmed, push resource scheduling application information pre-entered by the user to the second server for risk assessment to generate intention information indicating that a service is accepted, and trigger the second server to apply for resource scheduling to a third server based on the intention information; the third server corresponds to a third server providing a third service for the user;
the first data obtaining module 83 is configured to send a resource scheduling result query request to the second server, so as to obtain a resource scheduling result fed back from the third server, and feed back the resource scheduling result to the client for display.
Further, before applying for resource scheduling, a user needs to apply for credit approval first, and accordingly, the server of this embodiment further includes:
a third response module 84, configured to, in response to the credit granting request sent by the client, generate a first information page content for the user to enter user information, and feed back the first information page content to the client to be displayed to the user;
a fourth response module 85, configured to send, in response to an operation instruction sent by the client and indicating that the user information is submitted, the user information to the third server through the second server to initiate credit evaluation; receiving a credit evaluation result returned by the third server through the second server; the credit assessment result comprises credit assessment scores obtained by calculating credit assessment reports of the users and the user information by adopting a preset credit assessment model;
and the credit approval module 86 is configured to perform credit approval based on the credit evaluation result, send the credit approval result to the third server through the second server for final approval, receive a final approval result returned by the third server through the second server, generate a corresponding credit approval result based on the final approval result, and feed the credit approval result back to the client for display, where the credit approval result includes a maximum resource scheduling limit acquired by the user.
Further, if the user is a decoupling user, credit evaluation needs to be performed again before the user applies for resource scheduling; if the user is not a decoupling user, credit evaluation does not need to be performed again. Therefore, in this embodiment, before the fourth response module performs credit evaluation on the user, it is further necessary to identify whether the user is a decoupling user. Accordingly, the server of this embodiment further includes:
the second data acquisition module is used for acquiring the credit evaluation record of the user before the second response module triggers the second server to carry out risk evaluation;
the user judging module is used for judging whether the user is a decoupling user according to the credit evaluation record, triggering the fourth response module to initiate credit evaluation to a third server through the second server when the user is judged to be the decoupling user, and performing credit approval again based on a credit evaluation result; the decoupling users are users with the latest credit investigation time exceeding three months.
Further, after the user applies for successful resource scheduling, information of the scheduled resources, such as time node returning, resource quantity returning, and the like, needs to be fed back to the user, that is, a resource returning plan needs to be fed back to the user, and accordingly, the server of this embodiment further includes:
the return planning module is configured to judge, based on the system time, whether the current time is a time node for generating a resource return plan, and if so, generate a corresponding resource return plan based on the resource scheduling information applied for by the user on the same day and the user information of the user, and feed the resource return plan back to the client. Specifically, the return planning module includes: an additional resource acquiring unit, configured to acquire, from the second server, the total quantity of additional resources generated by the resource scheduling application and a preset resource return time node; a calculating unit, configured to calculate the total quantity of resources to be returned by the user according to the resource scheduling information applied for by the user and the acquired total quantity of additional resources; and a plan feedback unit, configured to generate the resource return plan based on the total quantity of resources to be returned and the resource return time node, and feed the resource return plan back to the client. The total quantity of additional resources includes a first additional resource quantity calculated by the third server and a second additional resource quantity calculated by the second server.
Further, generally, a user returns a corresponding resource quantity according to a resource return plan, specifically, the user first accesses a resource return page through a client, and then triggers a corresponding resource return request on the resource return page, and accordingly, the server of this embodiment further includes:
and the fifth response module is used for responding to the resource returning request sent by the client, and sending a first resource scheduling instruction to a third-party resource scheduling server associated with the user based on the service identifier of the second server so as to trigger the third-party resource scheduling server to schedule the current to-be-returned resource of the user to the second server.
Further, sometimes, a user does not return a resource according to a resource return plan, but applies for clearing in advance, specifically, the user first accesses a resource return page through a client, and then triggers a second resource scheduling request indicating clearing in advance on the resource return page, and accordingly, the server of this embodiment further includes:
and a sixth response module, configured to, in response to the resource early clearing request sent by the client, send a second resource scheduling instruction to the third-party resource scheduling server associated with the user based on the service identifier of the second service party, so as to trigger the third-party resource scheduling server to schedule all the resources to be returned of the user to the second service party.
Further, the server of this embodiment further includes:
a seventh response module, configured to send, in response to a third resource scheduling instruction sent by the second server indicating that the first service party is requested to pay the second additional resource on the user's behalf, a fourth resource scheduling instruction indicating that the payment is confirmed to the third-party resource scheduling server associated with the first service party, so that the third-party resource scheduling server associated with the first service party schedules a corresponding amount of the second additional resource to the second service party, so as to trigger the second server to send a fifth resource scheduling instruction to the third-party resource scheduling server associated with the second service party, and the third-party resource scheduling server associated with the second service party schedules the corresponding amount of resources to the third service party at a preset scheduling time node; the third resource scheduling instruction is generated when the second server judges, based on the system time and the resource return plan, that the current overdue time of the user reaches a preset trigger condition, and includes the quantity of the second additional resource that the first service party is requested to pay on the user's behalf.
Based on the same inventive concept as the other multi-service-based resource scheduling method in the foregoing embodiment, the present invention further provides another service end, on which a computer program is stored, which when executed by a processor implements the steps of any one of the methods of the other multi-party service method described above.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the methods of the present invention. For details not disclosed in the embodiment of the apparatus of the present embodiment, please refer to the embodiment of the method disclosed herein.
Referring to fig. 9, the server of this embodiment corresponds to a second service party providing a second service to the user, and specifically, the server includes:
the first calculation module is used for calculating the quantity of resources to be allocated by each resource receiver based on the current returned resources to be allocated of the user and a preset resource allocation rule, and generating corresponding resource allocation details;
and the resource allocation module is used for sending the resource allocation details to a third-party resource scheduling server associated with the second server so as to trigger the third-party resource scheduling server to perform resource allocation on the resources to be allocated.
In this embodiment, the resource allocation rule includes the service identifiers of the plurality of resource receivers receiving the resources to be allocated, and the allocation quantity or percentage of each resource receiver under different return scenarios; the plurality of resource receivers include a first service party providing the first service to the user, a second service party providing the second service to the user, and a third service party providing the third service to the user, where the third service party receives the quota allocation resources and the first additional resources.
In this embodiment, the first calculating module specifically includes: a return scenario identification unit, configured to identify the current return scenario of the user. Specifically, the return scenario identification unit may perform identification based on the corresponding request initiated by the user: if the return scenario is normal return, the resources to be allocated currently returned by the user include the quota allocation resources (such as the principal) and the first additional resources (such as the interest) directionally allocated to the third service party, and the second additional resources (such as the premium) shared by the first service party and the second service party; if the return scenario is early clearing, the resources to be allocated currently returned by the user include the quota allocation resources, the first additional resources, the second additional resources, and a third additional resource directionally allocated to the first service party (such as the early-clearing liquidated damages); if the return scenario is overdue return, the resources to be allocated currently returned by the user include the quota allocation resources, the first additional resources, the second additional resources, a fourth additional resource directionally allocated to the first service party (such as the overdue liquidated damages), and a fifth additional resource shared by the first service party and the second service party (such as the penalty interest). The first calculating module further includes: a first calculating unit, configured to, when the return scenario identification unit identifies that the current return scenario is normal return, use the quota allocation resources and the first additional resources as the resources allocated to the third service party, and calculate the quantity of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage; a second calculating unit, configured to, when the return scenario identification unit identifies that the current return scenario is overdue return, use the quota allocation resources and the first additional resources as the resources allocated to the third service party, calculate the quantity of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage, use the fourth additional resource as the resource directionally allocated to the first service party, and calculate the quantity of resources allocated to the first service party and the second service party respectively based on the fifth additional resource and the third preset percentage; and a third calculating unit, configured to, when the return scenario identification unit identifies that the current return scenario is early clearing, use the quota allocation resources and the first additional resources as the resources allocated to the third service party, calculate the quantity of resources allocated to the first service party and the second service party respectively based on the second additional resource, the first preset percentage and the second preset percentage, and use the third additional resource as the resource directionally allocated to the first service party.
The third embodiment of the present specification further provides an electronic device, which includes a memory 1002, a processor 1001, and a computer program stored on the memory 1002 and executable on the processor 1001, the processor 1001 implementing the steps of the above method when executing the program. For convenience of explanation, only the parts related to the embodiments of the present specification are shown; for the specific technical details not disclosed, reference is made to the method parts of the embodiments of the present specification. The server may be a server device formed by various electronic devices, such as a PC, a network cloud server, or a server function provided on any electronic device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, or a desktop computer.
In particular, fig. 10 is a block diagram of the server according to the solution provided by the embodiments of the present specification, in which the bus 1000 may include any number of interconnected buses and bridges linking together various circuits, including one or more processors represented by the processor 1001 and a memory represented by the memory 1002. The bus 1000 may also link together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. The communication interface 1003 provides an interface between the bus 1000 and the receiver and/or transmitter 1004; the receiver and/or transmitter 1004 may be separate, stand-alone components or the same element, such as a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1001 is responsible for managing the bus 1000 and general processing, and the memory 1002 may be used to store data used by the processor 1001 when performing operations.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a computer-readable storage medium (such as a CD-ROM, a USB flash disk, or a removable hard disk) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, or a network device) to execute the above method according to the embodiments of the present disclosure.
The computer readable medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination thereof. A readable signal medium may be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform the following functions:
a first service end, in response to a resource scheduling request sent by a client, sends first service protocol page content to the client for display to a user; the first service protocol page content is acquired by the first service end from a second service end; the first service end corresponds to a first service party providing a first business service for the user, and the second service end corresponds to a second service party providing a second business service for the user;
the first service end, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, pushes the user information of the user and the pre-entered resource scheduling application information to the second service end for risk assessment, generates intention information indicating acceptance of the service, and triggers the second service end to apply to a third service end for resource scheduling based on the intention information; the third service end corresponds to a third service party providing a third business service for the user;
the first server sends a resource scheduling result query request to the second server to obtain a resource scheduling result fed back from the third server, and the resource scheduling result is fed back to the client for display; alternatively, the following functions are implemented:
the second server calculates the quantity of the resources distributed by each resource receiver and generates corresponding resource distribution details based on the resources to be distributed currently returned by the user and a preset resource distribution rule; the resource allocation rule comprises service identifications of a plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage of each resource receiver under different returning scenes; the second server corresponds to a second server side providing a second business service for the user;
and the second service end sends the resource allocation details to a third-party resource scheduling service end associated with the second service end, so as to trigger the third-party resource scheduling service end to perform resource scheduling on the resources to be allocated.
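The allocation step described above (splitting the amount currently returned by the user among several resource receivers according to a per-scenario percentage rule) can be sketched as follows; the scenario names, service identifiers, and percentage values are illustrative assumptions, not part of the disclosed embodiment.

```python
# Hypothetical sketch of the second service end's allocation step: split
# the returned amount among resource receivers according to a preconfigured
# per-scenario percentage rule. All identifiers below are illustrative.

# Allocation rule: for each return scenario, the service identifier of each
# resource receiver mapped to its share (percentages summing to 100).
ALLOCATION_RULE = {
    "normal_return": {"first_party": 10, "second_party": 20, "third_party": 70},
    "early_settlement": {"first_party": 5, "second_party": 15, "third_party": 80},
}

def build_allocation_details(amount_returned: int, scenario: str) -> list[dict]:
    """Compute each receiver's allocated quantity and emit allocation details.

    Amounts are in integer minor units (e.g. cents) to avoid floating-point
    error; any rounding remainder is assigned to the last receiver.
    """
    rule = ALLOCATION_RULE[scenario]
    details, allocated = [], 0
    receivers = list(rule.items())
    for i, (service_id, percent) in enumerate(receivers):
        if i == len(receivers) - 1:  # last receiver absorbs the remainder
            share = amount_returned - allocated
        else:
            share = amount_returned * percent // 100
        allocated += share
        details.append({"receiver": service_id, "amount": share})
    return details
```

Integer minor units with floor division keep the emitted details summing exactly to the returned amount, the property the third-party resource scheduling service end would rely on when executing the transfers.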
Those skilled in the art will appreciate that the modules described above may be distributed in the apparatus according to the description of the embodiments, or may be correspondingly changed and located in one or more apparatuses different from those of the embodiments. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash disk, or a removable hard disk) or on a network, and which includes several instructions to enable a computing device (which may be a personal computer, a server, a mobile terminal, or a network device) to execute the method according to the embodiments of the present disclosure.
While preferred embodiments of the present specification have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present specification.
Exemplary embodiments of the present disclosure are specifically illustrated and described above. It is to be understood that the present disclosure is not limited to the precise arrangements or instrumentalities described herein; on the contrary, the disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. In addition, the structures, proportions, and sizes shown in the drawings of the present specification are used only to match the contents disclosed in the specification, for understanding and reading by those skilled in the art, and do not limit the conditions under which the present disclosure can be implemented; therefore, they carry no essential technical significance, and any modification of structure, change of proportional relation, or adjustment of size that does not affect the technical effects and achievable purposes of the present disclosure shall still fall within the scope covered by the technical contents disclosed herein. In addition, terms such as "above", "first", "second", and "a" in the present specification are used for clarity of description only and are not intended to limit the scope of the present disclosure; changes or adjustments of their relative relationships, without substantial change of the technical content, shall also be regarded as within the implementable scope of the present disclosure.

Claims (10)

1. A multi-server based resource scheduling method, comprising:
the method comprises the steps that a first service end, in response to a resource scheduling request sent by a client, sends first service protocol page content to the client for display to a user; the first service protocol page content is acquired by the first service end from a second service end; the first service end corresponds to a first service party providing a first business service for the user, and the second service end corresponds to a second service party providing a second business service for the user;
the first service end, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, pushes resource scheduling application information pre-entered by the user to the second service end for risk assessment, generates intention information indicating acceptance of the service, and triggers the second service end to apply to a third service end for resource scheduling based on the intention information; the third service end corresponds to a third service party providing a third business service for the user;
and the first service end sends a resource scheduling result query request to the second service end to obtain the resource scheduling result fed back from the third service end, and feeds the resource scheduling result back to the client for display.
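As an informal illustration, the three-step flow recited in claim 1 might be orchestrated on the first service end as below. The second-server client object and its method names are assumptions invented for this sketch; the actual transport and message formats are not specified by the disclosure.

```python
# Minimal, illustrative orchestration of the claim-1 flow on the first
# service end. `second_server_client` stands in for the real RPC/HTTP
# client of the second service end; its method names are assumptions.

class FirstServiceEnd:
    def __init__(self, second_server_client):
        self.second = second_server_client

    def on_resource_scheduling_request(self) -> str:
        # Step 1: fetch the first-service-protocol page content from the
        # second service end and hand it to the client for display.
        return self.second.fetch_protocol_page()

    def on_protocol_confirmed(self, user_info: dict, application: dict) -> dict:
        # Step 2: push the user information and pre-entered application to
        # the second service end for risk assessment; on acceptance, the
        # second service end applies to the third service end based on the
        # returned intention information.
        intention = self.second.risk_assess(user_info, application)
        self.second.apply_for_scheduling(intention)
        return intention

    def on_result_query(self) -> dict:
        # Step 3: query the scheduling result fed back from the third
        # service end and relay it to the client.
        return self.second.query_scheduling_result()
```

In practice each handler would be bound to a client-facing endpoint; the sketch only fixes the order of interactions with the second service end.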
2. The resource scheduling method according to claim 1, wherein, before the step of responding to the resource scheduling request, the method further comprises:
the first service end, in response to a credit granting request sent by the client, generates first information page content for the user to input user information, and feeds the first information page content back to the client for display to the user;
the first service end, in response to an operation instruction sent by the client indicating that the user information is submitted, sends the user information to the third service end through the second service end to initiate a credit evaluation, and receives a credit evaluation result returned by the third service end through the second service end; the credit evaluation result comprises a credit evaluation report of the user and/or a credit evaluation score calculated from the credit evaluation report and the user information by a preset credit evaluation model;
the first service end performs credit approval based on the credit evaluation result, sends the credit approval result to the third service end through the second service end for final review, and receives a final review result returned by the third service end through the second service end;
and the first service end generates a corresponding credit granting result based on the final review result and feeds the credit granting result back to the client for display, wherein the credit granting result comprises the maximum resource scheduling limit granted to the user.
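Claim 2 leaves the "preset credit evaluation model" open; as a purely illustrative stand-in, a linear weighted score over a few features drawn from the credit evaluation report and the user information could look like the sketch below. The feature names, weights, and score band are assumptions, not part of the disclosure.

```python
# Purely illustrative stand-in for the "preset credit evaluation model":
# a clamped linear score over features from the credit report and user
# information. Feature names and weights are assumptions for this sketch.

WEIGHTS = {"on_time_ratio": 400.0, "income_level": 0.3, "query_count": -15.0}
BASE_SCORE = 300.0

def credit_score(report: dict, user_info: dict) -> float:
    features = {
        "on_time_ratio": report.get("on_time_ratio", 0.0),    # 0..1 repayment history
        "income_level": user_info.get("monthly_income", 0.0),  # declared income
        "query_count": report.get("recent_queries", 0),        # recent credit queries
    }
    score = BASE_SCORE + sum(WEIGHTS[k] * v for k, v in features.items())
    return max(0.0, min(score, 1000.0))  # clamp to an assumed 0..1000 band
```

The first service end would compare such a score against its approval threshold before forwarding the credit approval result for final review.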
3. The method for scheduling resources according to claim 1 or 2, further comprising:
and the first service end determines, based on the system time, whether a time node for generating a resource return plan is currently reached; if so, the first service end generates a corresponding resource return plan based on the resource allocation information of the current day and feeds the resource return plan back to the client for display.
4. The method according to any one of claims 1 to 3, wherein the step of generating the resource return plan specifically includes:
the first service end obtains, from the second service end, the total quantity of additional resources generated by the application for resource scheduling and preset resource return time nodes;
the first service end calculates the total quantity of resources to be returned by the user according to the resource allocation information and the total quantity of additional resources, and generates the resource return plan based on the total quantity of resources to be returned and the resource return time nodes;
the total quantity of additional resources comprises a first additional resource quantity calculated by the third service end and a second additional resource quantity calculated by the second service end; or,
the first service end, in response to a resource return request sent by the client, sends a first resource scheduling instruction to a third-party resource scheduling service end associated with the user, based on the service identifier of the second service party, so as to trigger the third-party resource scheduling service end to schedule the resources currently to be returned by the user to the second service party; or,
and the first service end, in response to an early settlement request sent by the client, sends a second resource scheduling instruction to the third-party resource scheduling service end associated with the user, based on the service identifier of the second service party, so as to trigger the third-party resource scheduling service end to schedule all the resources to be returned by the user to the second service party.
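In claim 4, the total to be returned is the allocated resources plus the first and second additional resource quantities, laid out over the preset return time nodes. A minimal sketch, assuming an even split per node (the schedule shape is not fixed by the disclosure):

```python
# Illustrative sketch of claim 4's return-plan step. The even per-node
# split and the field names are assumptions; amounts are in minor units.
from datetime import date

def build_return_plan(allocated: int, first_additional: int,
                      second_additional: int, nodes: list[date]) -> list[dict]:
    # Total to return = allocated resources + additional quantities computed
    # by the third and second service ends, respectively.
    total = allocated + first_additional + second_additional
    per_node = total // len(nodes)
    plan = [{"due": d, "amount": per_node} for d in nodes]
    plan[-1]["amount"] += total - per_node * len(nodes)  # remainder on last node
    return plan
```

The plan generated this way sums exactly to the total quantity of resources to be returned, whatever the number of time nodes.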
5. The method for scheduling resources according to any one of claims 1 to 4, further comprising:
the first service side responds to a third resource scheduling instruction which is sent by the second service side and indicates that the first service side is required to pay second additional resources, and sends a fourth resource scheduling instruction which indicates that payment is confirmed to a third-party resource scheduling service side associated with the first service side, so that the third-party resource scheduling service side associated with the first service side schedules a corresponding amount of second additional resources to the second service side, and the second service side is triggered to send a fifth resource scheduling instruction which indicates resources to be paid to the third-party resource scheduling service side associated with the second service side, and the third-party resource scheduling service side associated with the second service side schedules a corresponding amount of resources to the third service side at a preset scheduling time node.
6. A resource scheduling method based on multiple service parties is characterized by comprising the following steps:
the second service end calculates the quantity of resources allocated to each resource receiver and generates corresponding resource allocation details, based on the resources to be allocated currently returned by a user and a preset resource allocation rule; the resource allocation rule comprises the service identifiers of a plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage of each resource receiver under different return scenarios; the second service end corresponds to a second service party providing a second business service for the user;
the second service end sends the resource allocation details to a third-party resource scheduling service end associated with the second service end, so as to trigger the third-party resource scheduling service end to perform resource scheduling on the resources to be allocated;
wherein the plurality of resource receivers comprise: a first service party providing a first business service for the user; a third service party providing a third business service for the user; and the second service party.
7. A server, comprising:
the first response module is configured to, in response to a resource scheduling request sent by a client, send first service protocol page content to the client for display to a user; the first service protocol page content is acquired by the first service end from a second service end in advance; the first service end corresponds to a first service party providing a first business service for the user, and the second service end corresponds to a second service party providing a second business service for the user;
the second response module is configured to, in response to an operation instruction sent by the client indicating that the first service protocol is confirmed, push resource scheduling application information pre-entered by the user to the second service end for risk assessment to generate intention information indicating acceptance of the service, and to trigger the second service end to apply to a third service end for resource scheduling based on the intention information; the third service end corresponds to a third service party providing a third business service for the user;
and the first data acquisition module is configured to send a resource scheduling result query request to the second service end to obtain the resource scheduling result fed back from the third service end, and to feed the resource scheduling result back to the client for display.
8. A server, comprising:
the first calculation module is configured to calculate the quantity of resources allocated to each resource receiver, based on the resources to be allocated currently returned by a user and a preset resource allocation rule, and to generate corresponding resource allocation details; the resource allocation rule comprises the service identifiers of a plurality of resource receivers receiving the resources to be allocated and the allocation quantity or percentage of each resource receiver under different return scenarios; the second service end corresponds to a second service party providing a second business service for the user.
9. An electronic device comprising at least one processor, at least one memory, a communication interface, and a bus; wherein,
the processor, the memory and the communication interface complete mutual communication through the bus;
the memory for storing a program for performing the method of any one of claims 1 to 5 or claim 6;
the processor is configured to execute programs stored in the memory.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5 or 6.
CN202010319617.5A 2020-04-22 2020-04-22 Resource scheduling method, server, electronic equipment and storage medium Active CN111681092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010319617.5A CN111681092B (en) 2020-04-22 2020-04-22 Resource scheduling method, server, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111681092A true CN111681092A (en) 2020-09-18
CN111681092B CN111681092B (en) 2023-10-31

Family

ID=72451653


Country Status (1)

Country Link
CN (1) CN111681092B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112131003A (en) * 2020-09-25 2020-12-25 中国建设银行股份有限公司 Resource allocation method, device and equipment
CN112561402A (en) * 2020-12-29 2021-03-26 平安银行股份有限公司 Resource security allocation method, computer device and storage medium
CN114091956A (en) * 2021-11-29 2022-02-25 中国平安财产保险股份有限公司 Service resource processing method and device, computer equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109104471A (en) * 2018-07-26 2018-12-28 新疆玖富万卡信息技术有限公司 A kind of method of recommendation service, management server and recommendation server
CN109146659A (en) * 2017-06-16 2019-01-04 阿里巴巴集团控股有限公司 Resource allocation methods and device, system
CN110363666A (en) * 2018-04-11 2019-10-22 腾讯科技(深圳)有限公司 Information processing method, calculates equipment and storage medium at device
CN110912712A (en) * 2019-12-18 2020-03-24 东莞市大易产业链服务有限公司 Service operation risk authentication method and system based on block chain





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant