CN120258997A - A fund registration and transfer method and system based on distributed transaction mechanism - Google Patents
- Publication number
- CN120258997A (application CN202510741335.7A)
- Authority
- CN
- China
- Prior art keywords
- transaction
- fund
- target
- investor
- fragment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)
Abstract
It is an object of embodiments of the present disclosure to provide a fund registration and transfer method and system based on a distributed transaction mechanism. The method comprises: querying a global routing table according to an investor identifier to determine the corresponding target investor shard; querying the global routing table according to a fund product code to determine the corresponding target fund product shard; sending a pre-commit request for the current transaction to the target investor shard and the target fund product shard in parallel; sending commit instructions in a batch upon receiving the pre-commit completion responses asynchronously returned by the target investor shard and the target fund product shard; completing the data changes for the transaction in the target investor shard and the target fund product shard; and writing the transaction flow record of the transaction into a time-series database. Embodiments of the present disclosure provide an efficient and reliable solution for highly concurrent, strongly consistent financial transaction processing in fund registration and transfer scenarios.
Description
Technical Field
The disclosure relates to the field of financial information technology, and in particular to fund registration and transfer technology based on a distributed transaction mechanism.
Background
Conventional fund registration systems generally adopt a traditional XA transaction processing scheme and a static data sharding strategy, and face significant bottlenecks under high-concurrency transaction scenarios with hot spots.
The mainstream distributed transaction schemes are represented by two-phase commit (2PC), whose core idea is that a coordinator drives the transaction participants through a prepare (pre-commit) phase and a commit phase. Although this scheme guarantees strong consistency, its serialized processing mode (sending pre-commit and commit requests serially to each relevant shard) limits throughput, the single-coordinator architecture tends to become a performance bottleneck, and long-held global locks (table-level locks) cause frequent deadlocks or timeouts under high contention. For example, during a subscription peak, when many investors compete for the same hot fund simultaneously, a traditional XA transaction may hit lock-wait timeouts on the hot fund's global lock, causing large numbers of transaction failures, which falls short of modern financial business requirements.
At the data sharding level, the prior art mostly adopts static database/table splitting strategies, for example pre-allocating data storage locations by ranges of investor ID hash values or by the initial letters of fund codes. While this enables basic horizontal scaling, it cannot respond dynamically to traffic changes. When a fund's transaction volume surges because of a market hot spot, that fund's shard quickly becomes a performance bottleneck, and static sharding rules cannot quickly move the fund to an independent resource pool. Worse, the high load on a hot-spot shard may trigger a chain reaction, sharply increasing the processing latency of ordinary fund transactions on the same physical node. In addition, static shards often require downtime for data migration when scaling, during which services may be unavailable or data inconsistent.
Disclosure of Invention
It is an object of embodiments of the present disclosure to provide a fund registration and transfer method and system based on a distributed transaction mechanism.
According to one aspect of the present disclosure, there is provided a fund registration and transfer method based on a distributed transaction mechanism, wherein the method comprises the steps of:
receiving a transaction request submitted by a user, wherein the transaction request comprises an investor identifier, a fund product code, a transaction type, and a transaction amount/share;
querying a global routing table according to the investor identifier to determine the corresponding target investor shard, and querying the global routing table according to the fund product code to determine the corresponding target fund product shard, wherein hot fund products are identified dynamically and stored in independent shards;
sending a pre-commit request for the current transaction to the target investor shard and the target fund product shard in parallel;
allocating, in each of the target investor shard and the target fund product shard, a thread from its own independent thread pool to execute the transaction operations, including locking the transaction object row and performing the reservation operation for the balance/share change;
sending commit instructions in a batch upon receiving the pre-commit completion responses asynchronously returned by the target investor shard and the target fund product shard;
completing the data changes for the transaction in the target investor shard and the target fund product shard;
writing the transaction flow record of the current transaction into a time-series database.
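The steps above can be sketched as a minimal coordinator, assuming toy `Shard` objects with hypothetical `precommit`/`commit` methods (names not from the disclosure): pre-commit requests fan out in parallel, and the commit instructions are issued as a batch only after every pre-commit response arrives.

```python
import concurrent.futures


class Shard:
    """Toy shard that accepts pre-commit and commit requests."""

    def __init__(self, name):
        self.name = name
        self.prepared = {}   # txn_id -> reserved change (row locked)
        self.committed = {}  # txn_id -> applied change

    def precommit(self, txn_id, change):
        # Lock the target row and reserve the balance/share change.
        self.prepared[txn_id] = change
        return True  # "pre-commit complete" response

    def commit(self, txn_id):
        self.committed[txn_id] = self.prepared.pop(txn_id)


def execute_transaction(txn_id, change, shards):
    """2PC with a parallel first phase: pre-commit to all shards at once,
    then send the commit instructions as a batch if every shard agreed."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        responses = list(pool.map(lambda s: s.precommit(txn_id, change), shards))
    if all(responses):
        for s in shards:  # batched commit instructions
            s.commit(txn_id)
        return "committed"
    return "rolled back"


investor_shard = Shard("db_investor_0421")
fund_shard = Shard("db_hotfund_F001")
print(execute_transaction("T1", {"amount": -1000}, [investor_shard, fund_shard]))
```

Because the first phase runs concurrently, end-to-end latency tracks the slowest shard rather than the sum of all shards, which is the latency optimization claimed later in the disclosure.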
There is also provided, in accordance with an aspect of the present disclosure, a fund registration and transfer system based on a distributed transaction mechanism, wherein the system comprises:
A gateway for receiving a transaction request submitted by a user, the transaction request including an investor identifier, a fund product code, a transaction type, and a transaction amount/share;
A routing decision module for querying a global routing table according to the investor identifier to determine the corresponding target investor shard, and querying the global routing table according to the fund product code to determine the corresponding target fund product shard, wherein hot fund products are identified dynamically and stored in independent shards;
A coordinator for:
sending a pre-commit request for the current transaction to the target investor shard and the target fund product shard in parallel;
sending commit instructions in a batch upon receiving the pre-commit completion responses returned by the target investor shard and the target fund product shard;
writing the transaction flow record of the transaction into a time-series database;
The target investor shard is configured to:
in response to the pre-commit request, allocate a thread from its own independent thread pool to execute the transaction operations, including locking the transaction object row and performing the reservation operation for the balance/share change;
in response to the commit instruction, complete the data changes for the transaction;
The target fund product shard is configured to:
in response to the pre-commit request, allocate a thread from its own independent thread pool to execute the transaction operations, including locking the fund product row and performing the reservation operation for the share change;
in response to the commit instruction, complete the data changes for the current transaction.
Embodiments of the present disclosure achieve significant technical breakthroughs in fund registration and transfer scenarios, providing an efficient and reliable solution for highly concurrent, strongly consistent financial transaction processing.
The dynamic sharding mechanism effectively removes the hot-spot transaction bottleneck. Under a conventional static sharding strategy, the data skew caused by hot fund transactions often leads to local performance collapse; through real-time transaction-flow monitoring and intelligent shard scheduling, embodiments of the present disclosure automatically identify funds with surging transaction volumes and migrate them to independent shards within seconds. Meanwhile, double-write verification and progressive route switching are used during shard migration, so strong data consistency is preserved.
The enhanced transaction model breaks the traditional trade-off between performance and consistency. By combining parallel pre-commit with batch commit, embodiments of the present disclosure achieve processing efficiency approaching that of eventual-consistency schemes while maintaining financial-grade strong consistency. Compared with the serialized flow of traditional two-phase commit (2PC), parallel pre-commit reduces cross-shard transaction latency from the sum of per-shard latencies to the maximum single-shard latency, greatly shortening the processing time of a single transaction. The batch commit mechanism further reduces network overhead in the commit phase by merging network instructions.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 illustrates a flow chart of a fund registration method based on a distributed transaction mechanism in accordance with an exemplary embodiment of the present disclosure.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
Specific embodiments of the disclosure will be further described below with reference to the accompanying drawings.
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments of the present disclosure are described as an apparatus represented by a block diagram and a process or method represented by a flow chart. Although a flowchart depicts the operational procedure of embodiments of the present disclosure as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. Furthermore, the order of the operations may be rearranged. The process of embodiments of the present disclosure may be terminated when its operations are performed, but may also include additional steps not shown in the flow diagrams. The processes of the embodiments of the present disclosure may correspond to a method, a function, a procedure, a subroutine, etc.
The methods illustrated by the flowcharts and the apparatus illustrated by the block diagrams discussed below may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, and the like represent various processes which may be substantially described as program code stored in a computer readable medium and so executed by a computer device or processor, whether or not such computer device or processor is explicitly shown.
The term "storage medium" as used herein may represent one or more devices for storing data, including read-only memory (ROM), random-access memory (RAM), magnetic RAM, kernel memory, magnetic disk storage media, optical storage media, flash memory devices, and/or other machine-readable media for storing information. The term "computer-readable medium" can include, without being limited to, portable or fixed storage devices, optical storage devices, and various other mediums capable of storing and/or containing instructions and/or data.
A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program descriptions. One code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, information passing, token passing, network transmission, etc.
In this context, the term "computer device" refers to an electronic device capable of executing a predetermined process such as numerical and/or logical computation by executing predetermined programs or instructions. It may include at least a processor and a memory, wherein the processor performs the predetermined process by executing program instructions pre-stored in the memory, or the predetermined process is performed by hardware such as an ASIC, FPGA, or DSP, or by a combination of both.
The "computer device" described above is typically embodied in the form of a general-purpose computer device, the components of which may include, but are not limited to, one or more processors or processing units, system memory. The system memory may include computer-readable media in the form of volatile memory, such as Random Access Memory (RAM) and/or cache memory. The "computer device" may further include other removable/non-removable, volatile/nonvolatile computer-readable storage media. The memory may include at least one computer program product having a set (e.g., at least one) of program modules configured to perform the functions and/or methods of the various embodiments of the present disclosure. The processor executes various functional applications and data processing by running programs stored in the memory.
For example, a memory stores a computer program for performing the functions and processes of the various embodiments of the present disclosure, which are implemented when a processor executes the corresponding computer program.
Typically, the computer device may be, for example, a user device or a network device, or a combination of both. The user device includes a personal computer (PC), a notebook computer, a mobile terminal, and the like, where the mobile terminal includes a smart phone, a tablet computer, and the like. The network device includes a single network server, a server cluster composed of multiple network servers, or a cloud based on cloud computing, where cloud computing is a form of distributed computing: a super virtual computer composed of a group of loosely coupled computers. A computer device may operate alone to implement embodiments of the present disclosure, or may access a network and implement embodiments of the present disclosure through interaction with other computer devices in the network. The network in which the computer device resides includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, and the like.
It should be noted that the user device, the network, etc. are only examples, and other existing or future computing devices or networks may be suitable for the embodiments of the present disclosure, and are also included in the scope of the present disclosure and incorporated herein by reference.
Specific structural and functional details disclosed herein are merely representative and are for purposes of describing example embodiments of the present disclosure. The embodiments of the present disclosure may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
FIG. 1 illustrates a flow diagram of a fund registration method based on a distributed transaction mechanism, according to one embodiment of the present disclosure.
Referring to FIG. 1, in step S1, a gateway receives a transaction request submitted by a user, wherein the transaction request comprises the user's investor identifier, fund product code, transaction type, and transaction amount/share. In step S2, a routing decision module queries a global routing table according to the investor identifier to determine the corresponding target investor shard, and queries the global routing table according to the fund product code to determine the corresponding target fund product shard, wherein hot fund products are identified dynamically and stored in independent shards. In step S3, a coordinator sends a pre-commit request for the current transaction to the target investor shard and the target fund product shard in parallel. In step S4, the target investor shard and the target fund product shard each allocate a thread from their own independent thread pools to execute the transaction operations, including locking the transaction object row and performing the reservation operation for the balance/share change. In step S5, upon receiving the pre-commit completion responses asynchronously returned by the target investor shard and the target fund product shard, the coordinator sends commit instructions in a batch. In step S6, the data changes for the transaction are completed in the target investor shard and the target fund product shard. In step S7, the transaction flow record of the current transaction is written into a time-series database.
The fund registration and transfer system of the present disclosure employs a dynamic data sharding strategy. In some embodiments, hashing by investor ID disperses the pressure of highly concurrent users, product attributes isolate hot-spot data from ordinary data, and the time axis optimizes cold/hot separation and archiving.
A multidimensional sharding strategy is adopted based on three dimensions: investor ID, product attributes, and the time axis. Data storage pressure is dispersed through different rules and high-frequency operation performance is optimized. An exemplary database and table design is as follows:
Investor database sharding:
The shard database is named db_investor_{shard_id}, where shard_id = the last 6 digits of the investor ID modulo (MOD) 4096; i.e., the shard is obtained by taking the investor ID's last 6 digits modulo 4096.
A single shard stores at most 12,000 active investors; exceeding this limit triggers shard fission, e.g., shard_id=1024 fissions into 1024_1 and 1024_2.
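The investor routing rule above can be sketched as follows (a minimal illustration; the unpadded shard-name format is an assumption):

```python
def investor_shard(investor_id: str) -> str:
    """Route an investor to db_investor_{shard_id}: take the last six
    digits of the numeric investor ID and reduce them modulo 4096."""
    shard_id = int(investor_id[-6:]) % 4096
    return f"db_investor_{shard_id}"


print(investor_shard("20250741335"))  # 741335 % 4096 == 4055 -> db_investor_4055
```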
Fund product sharding:
1) Hot fund products get independent shards:
The independent database is named db_hotfund_{fund_code}, e.g., db_hotfund_F001.
Storage structure: each hot fund product exclusively occupies one shard, containing tables such as transaction flow and position details.
The TOP 50 hot fund products are assigned to independent shards (e.g., db_hotfund_F001) based on dynamic determination, e.g., dynamic monitoring of transaction frequency.
2) Ordinary fund product table splitting:
Tables are split by the initial-letter range of fund codes (e.g., t_fund_a_f and t_fund_g_m); funds whose fund_code starts with A-F are stored in t_fund_a_f.
The split tables are named t_fund_{range} (range examples: a_f, g_m, n_z).
Transaction record database and table splitting:
The database is named db_track_{yyyyMMdd}; one database instance is created per trading day (e.g., db_track_20231001).
Sequence-number table splitting: within the same trading day, a new table is started every 5 million transactions, named t_track_{shard_seq} (e.g., t_track_1 to t_track_N).
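The two transaction-record rules combine into one routing function; this is a sketch, and table numbering starting at t_track_1 is an assumption consistent with the example above:

```python
def trade_record_location(trade_date: str, seq_no: int) -> tuple[str, str]:
    """One database per trading day, and within a day a new table
    every 5,000,000 transactions (t_track_1, t_track_2, ...)."""
    db = f"db_track_{trade_date}"           # e.g. db_track_20231001
    table = f"t_track_{seq_no // 5_000_000 + 1}"
    return db, table


print(trade_record_location("20231001", 12_345_678))  # ('db_track_20231001', 't_track_3')
```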
In some embodiments, the global routing table is stored in an Etcd distributed key-value store cluster, supporting millisecond-level configuration updates.
The following mappings are maintained in the Etcd cluster:
Investor shard mapping table: investor_id → db_investor_{shard_id};
Hot fund product shard mapping table: fund_code → db_hotfund_{fund_code}; updated dynamically as hot fund product shards are adjusted (e.g., the TOP 50 list is refreshed once per hour);
Ordinary fund product shard mapping table: fund_code → t_fund_{range};
Transaction record shard mapping table: trade_date → db_track_{yyyyMMdd}.
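The four mappings might be laid out as key-value entries roughly as follows; a plain dict stands in for the Etcd cluster here (a real deployment would read the keys through an etcd client), and the key prefixes are illustrative assumptions:

```python
# Dict standing in for the Etcd key-value cluster; key prefixes are illustrative.
routing_table = {
    "route/investor/20250741335": "db_investor_4055",
    "route/hotfund/F001": "db_hotfund_F001",
    "route/fundrange/a_f": "t_fund_a_f",
    "route/trackdate/20231001": "db_track_20231001",
}


def lookup(kind: str, key: str):
    """Resolve one routing entry; None means the mapping has no entry
    (e.g. a fund that is not currently hot)."""
    return routing_table.get(f"route/{kind}/{key}")


print(lookup("hotfund", "F001"))  # db_hotfund_F001
print(lookup("hotfund", "G555"))  # None -> fall back to the range mapping
```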
In some embodiments, double-write verification and progressive route switching are adopted during shard fission or migration, so that strong data consistency is not affected.
According to one example, shard fission is triggered when a single shard exceeds its capacity limit, such as an investor shard exceeding 12,000 active investors.
When the number of active investors in a single investor shard reaches the preset capacity limit, the system automatically triggers the shard fission flow. The fission process first divides the hash range for which the original shard is responsible into two contiguous subintervals, based on the topology of the consistent hash ring. For example, if the original shard is responsible for the hash range 0x000 to 0xFFF, it is split into the two subintervals 0x000-0x7FF and 0x800-0xFFF, preserving the continuity and uniformity of data distribution. The newly created shard is initialized as a read-only replica, and a real-time data synchronization channel is established with the original shard so that incremental data updates are received continuously, ensuring that newly written data stays synchronized between the two shards during fission.
Once incremental data synchronization is running stably, the system updates the global routing table configuration and registers the new shard's hash range with the metadata center. The routing table update uses a canary (gray) release mechanism: request traffic within the new shard's range is gradually switched to the new shard, while the original shard keeps a write lock on that range. When traffic switching is complete, the original shard stops accepting write requests for the new range and handles only the remaining operations in the old range. The system then starts a background asynchronous migration task that copies, in batches, the existing data belonging to the new shard's range from the original shard to the new shard; data integrity during migration is ensured through version-number comparison and a checksum mechanism.
After the migration task completes, the system performs a final consistency check, comparing the data of the two shards and automatically repairing any differences. After verification, the new shard leaves read-only mode and takes over all read/write traffic, while the original shard can either be retained as a read-only replica or taken offline to release resources, depending on the configured policy. Through dynamic switching at the routing layer and a data double-write mechanism, the entire fission process achieves smooth capacity expansion that is imperceptible to the business.
This fission-style data migration method, combining dynamic range division with incremental synchronization, solves the technical problem that conventional shard expansion requires downtime for data migration. Because the new and old shards serve traffic in parallel during fission, request availability is maintained throughout. The design of contiguous hash-range division effectively avoids hot-spot drift after data migration and keeps the load difference between the two resulting shards well controlled. In addition, the coordination of asynchronous migration with real-time incremental synchronization reduces the impact of fission on system performance while ensuring data consistency, achieving true online elastic scaling. Together, these effects allow the system to complete horizontal expansion quickly without degrading service quality when traffic grows explosively, providing an innovative practical path for highly available distributed architecture design.
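The contiguous range split at the heart of the fission flow can be sketched as:

```python
def split_range(lo: int, hi: int):
    """Split a shard's contiguous hash range [lo, hi] into two contiguous
    halves, so the key space stays continuous and roughly uniform."""
    mid = (lo + hi) // 2
    return [(lo, mid), (mid + 1, hi)]


# The example from the text: 0x000-0xFFF -> 0x000-0x7FF and 0x800-0xFFF.
print([(hex(a), hex(b)) for a, b in split_range(0x000, 0xFFF)])
```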
In some embodiments, when an ordinary fund product is dynamically determined to be a hot fund product, its data is migrated via double write and stored in an independent shard.
According to one example, the heat determination algorithm for a fund product counts transaction frequency over a sliding time window (e.g., 5 minutes), combined with a gradient threshold (e.g., the Top 50 funds dynamically expanded to Top 60) to avoid frequent migration.
The system maintains a sliding time window implemented as a ring buffer: the window's total duration is 5 minutes, divided into 300 time slices, each recording the transaction events within 1 second. Transaction events are aggregated by fund code, and each second the transaction count of the current time slice is added to the sliding-window counter of the corresponding fund. Counts of expired time slices (outside the 5-minute range) are removed from the total heat value, ensuring that the statistics reflect only the last 5 minutes of activity.
The system periodically (e.g., every minute) sorts the heat values of all funds in descending order and makes a dynamic expansion decision: the top 50 funds form the candidate hot-spot set, which is then extended to the top 60 (gradient threshold 50→60) to form the final hot-spot fund list. For example, if a fund ranks 55th and does not meet the strict independent-shard standard, it is still included after gradient expansion, avoiding frequent migrations triggered by short-term ranking fluctuations. A cooling period (e.g., 15 minutes) is set for hot-spot funds that have already migrated to independent shards: during this period, even if a fund's heat temporarily drops out of the top 60, it remains in its independent shard, preventing wasteful migrations caused by high-frequency jitter.
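The ring-buffer window described above can be sketched as a per-fund counter; the lazy slot-reset strategy here is one possible implementation, not prescribed by the disclosure:

```python
import time


class SlidingWindowCounter:
    """5-minute sliding window as a ring of 300 one-second slots;
    stale slots are zeroed lazily, so the total reflects only the
    last 5 minutes of activity."""

    SLOTS = 300

    def __init__(self, clock=time.time):
        self.clock = clock
        self.counts = [0] * self.SLOTS
        self.slot_ts = [0] * self.SLOTS  # the second each slot last belonged to

    def _slot(self, now):
        sec = int(now)
        i = sec % self.SLOTS
        if self.slot_ts[i] != sec:  # slot has rolled over: reset it
            self.slot_ts[i] = sec
            self.counts[i] = 0
        return i

    def record(self, n=1):
        self.counts[self._slot(self.clock())] += n

    def total(self):
        now = int(self.clock())
        return sum(c for c, ts in zip(self.counts, self.slot_ts)
                   if now - ts < self.SLOTS)


t = [1000.0]  # controllable clock for the demo
c = SlidingWindowCounter(clock=lambda: t[0])
c.record(5)
print(c.total())  # 5
t[0] += 400       # jump past the 5-minute window
print(c.total())  # 0: expired counts no longer contribute
```

In a real deployment one such counter would be kept per fund code, with the heat ranking reading `total()` once a minute.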
A progressive traffic-switching mechanism is used during shard migration to ensure service continuity. After the new shard is initialized as a shadow replica, a full data synchronization link is first established with the source shard: the existing data of the fund product being migrated is copied in full from the source shard, while ongoing updates to the source shard are captured through real-time incremental synchronization. When the data synchronization lag stays below a set threshold, a three-stage traffic-switching flow begins. In the gray stage, 10% of request traffic is directed to the new shard; the system performs double writes to both shards, compares the write results, and verifies data consistency through a checksum mechanism. In the full stage, all requests are switched to the new shard and the source shard is converted to read-only mode as a disaster-recovery backup; during this period the new shard continuously synchronizes changes back to the source shard to keep the data redundant. In the final stage, a background asynchronous task transfers the residual data of the source shard to the new shard in batches; after the transfer completes, the association between the shards is dissolved and the source shard's resources are released.
For abnormal scenarios during migration, the system has a built-in self-healing mechanism to ensure service availability. If the data synchronization lag exceeds a 30-second threshold, a route rollback strategy is automatically triggered: traffic is redirected to the source shard, the migration flow is suspended, and alarm information is pushed to the operations system. If the new shard's performance degrades while serving, for example if the P99 request latency exceeds 200 milliseconds, the intelligent scheduling module switches part of the traffic back to the source shard in a preset proportion until the new shard's performance indicators return to the normal range. During exception handling, the double-write verification logs and the incremental synchronization pipeline provide data traceability, supporting precise fault localization and repair during manual intervention.
Through the coordinated design of sliding-window statistics and gradient expansion, this method solves the problem of frequent shard migrations caused by instantaneous transaction-volume fluctuations in traditional heat-determination schemes. The gradient threshold mechanism gives the system elastic tolerance when identifying hot spots, markedly reducing migration frequency while still isolating genuinely hot funds in time. The combination of shadow replicas and progressive traffic switching makes shard migration transparent to the business and keeps the service available, reducing service-degradation time by about two orders of magnitude compared with the roughly 30 minutes of a traditional cold-migration scheme.
The gradient expansion strategy reduces migration frequency while improving the system's ability to anticipate latent hotspots. For example, when a fund's trading volume is climbing but the fund has not yet entered the top 50, gradient expansion brings it into the expansion range in advance, so the system completes resource preparation before the trading volume bursts, avoiding burst-traffic impact and effectively preventing core-shard overload.
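The sliding-window statistics and two-tier gradient threshold described above can be sketched as follows. This is an illustrative sketch only; the window length, both thresholds, and the class and tier names are assumptions, not values fixed by this disclosure.

```python
from collections import deque
import time

class HotspotDetector:
    """Sliding-window trade counter with a two-tier gradient threshold.
    Window size and thresholds are illustrative, not from the patent."""

    def __init__(self, window_sec=60, hot_threshold=1000, warm_threshold=600):
        self.window_sec = window_sec
        self.hot_threshold = hot_threshold    # immediate isolation tier
        self.warm_threshold = warm_threshold  # pre-warm / expansion tier
        self.events = {}  # fund_code -> deque of event timestamps

    def record_trade(self, fund_code, now=None):
        now = time.time() if now is None else now
        q = self.events.setdefault(fund_code, deque())
        q.append(now)
        self._evict(q, now)

    def _evict(self, q, now):
        # Drop events that have slid out of the statistics window.
        while q and now - q[0] > self.window_sec:
            q.popleft()

    def classify(self, fund_code, now=None):
        now = time.time() if now is None else now
        q = self.events.get(fund_code, deque())
        self._evict(q, now)
        n = len(q)
        if n >= self.hot_threshold:
            return "hot"      # migrate to a dedicated shard
        if n >= self.warm_threshold:
            return "warming"  # start shadow-copy resource preparation early
        return "normal"
```

Under this two-tier scheme, a fund crossing only the lower threshold triggers resource preparation (shadow copying) rather than an immediate migration, which is what damps migrations caused by transient bursts.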
Referring back to fig. 1, in step S1, the gateway receives a transaction request submitted by a user.
In some embodiments, the transaction request includes an investor identification, a fund product code, a transaction type, and a transaction amount/share. According to one example, the transaction type is fund redemption.
Next, in step S2, the routing decision module queries the global routing table according to the investor identification to determine a corresponding target investor segment, and queries the global routing table according to the fund code to determine a corresponding target fund product segment.
According to one example, the routing decision module first queries the hot fund product sharding map fund_code -> db_hotfund_ according to the fund product code; when there is no hit, it queries the normal fund product sharding map fund_code -> t_fund_{range} according to the fund product code, to determine the target fund product shard corresponding to the transaction.
In step S3, the coordinator sends a pre-commit request for the current transaction in parallel to the target investor shard and the target fund product shard.
The investor initiates a transaction request, and the routing decision module determines the target shards, entering the distributed transaction flow. A distributed transaction is a mechanism that guarantees atomicity and consistency of operations spanning multiple database shards or services. A strong-consistency transaction (XA protocol) is used for real-time transactions (e.g., purchases and redemptions) in the fund registration and transfer scenario, ensuring that the funds deduction and the share change either both succeed or are both rolled back. For example, when investor A buys Fund X, both the fund account (shard 1) and the fund share (shard 2) must be updated at the same time; the XA transaction ensures both complete together.
The coordinator sends the pre-commit request in parallel to all shards involved in the transaction request (the target investor shard and the target fund product shard). The key at this stage is parallelized processing to reduce overall latency.
The coordinator sends pre-commit requests to the target investor shard and the target fund product shard simultaneously through an asynchronous communication mechanism (such as Java's CompletableFuture or Go's goroutines), and sets a timeout mechanism, such as starting a 500 ms timeout timer, to monitor the response status and prevent a single slow shard from blocking the overall flow.
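The parallel pre-commit with a timeout timer can be sketched with Java's CompletableFuture or, as below, Python's asyncio. This is an illustrative sketch: the shard calls, their latencies, and the function names are simulated stand-ins, with the 500 ms timeout taken from the text.

```python
import asyncio

async def pre_commit(shard_name, delay):
    # Stand-in for the network call to one shard; `delay` simulates latency.
    await asyncio.sleep(delay)
    return (shard_name, "PRE_COMMITTED")

async def coordinator_pre_commit(timeout=0.5):
    """Send pre-commit to both shards in parallel; a timeout means the
    coordinator must trigger a global rollback (sketch, not the patent's API)."""
    tasks = [
        pre_commit("investor_shard", 0.01),
        pre_commit("fund_product_shard", 0.02),
    ]
    try:
        results = await asyncio.wait_for(asyncio.gather(*tasks), timeout)
        return {name: status for name, status in results}
    except asyncio.TimeoutError:
        return None  # signal: trigger global rollback

resp = asyncio.run(coordinator_pre_commit())
```

Both shard calls run concurrently, so the overall latency is bounded by the slowest shard rather than the sum of the two.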
According to one example, a coordinator in the present disclosure may employ a distributed coordinator cluster, such as using Etcd or a ZooKeeper cluster to manage transaction states, avoiding single point bottlenecks.
In step S4, the target investor shard allocates a thread from its independent thread pool to execute the transaction, including locking the transaction object row and performing the reservation operation for the balance/share change, and the target fund product shard allocates a thread from its independent thread pool to execute the transaction, including locking the fund product row and performing the reservation operation for the share change.
In some embodiments, the target investor shard's thread locks the transaction object row (e.g., the account row or held-share row) and performs the reservation operation for the balance/share change, while the target fund product shard's thread locks the fund product row and performs the reservation operation for the share change.
Here, each data shard maintains an independent thread pool, avoiding global resource contention. For example, the thread pools of investor shards and fund product shards do not interact, so a single-shard failure in an extreme scenario does not cause global blocking.
Each data shard is configured with a dedicated thread resource pool for independently processing the transaction requests assigned to that shard. The shard selects an idle thread from its local thread pool to handle each new transaction task. Each thread instance is responsible for the full life cycle of a single transaction, including locking the data records, executing the reservation operation, and reporting the resulting state. Multiple threads in the same shard can process transactions of different investors or fund products in parallel, achieving efficient concurrency inside the shard.
In the investor shard scenario, each thread in the thread pool locates the target account according to the received investor identification and performs the reservation operation for funds or shares after applying a row-level lock. For example, when processing a purchase request, the thread shifts the available balance of the investor account into a reserved state, ensuring that other concurrent transactions do not deduct the same funds twice. For the fund product shard, the thread pool likewise ensures strong data consistency through row-level locks. After a thread finishes processing, it returns the operation result and data version information to the coordinator, and the released thread returns to the pool on standby.
The independent design of the thread pool of the slicing level enables the processing capacity of single slices to be dynamically adjusted according to the actual load. When the transaction amount of a certain fragment is increased, the processing throughput can be directly improved by expanding the size of the exclusive thread pool without global resource reallocation. For example, hot-fund slices can configure larger thread pools to cope with bursty traffic, while normal fund slices maintain a basic number of threads to reduce resource consumption. The complete isolation among the slicing thread pools fundamentally avoids the thread resource competition of the cross-slicing transaction and ensures the overall stability of the system.
The thread pool management mechanism remarkably improves the expansibility and the reliability of the distributed transaction system through the combination of resource specialization and processing parallelization. The intra-slice parallel processing enables the throughput of a single slice to linearly increase along with the number of threads, and meanwhile, due to the fault isolation characteristic of an independent thread pool, the abnormality of the single slice cannot be diffused to a global system.
According to one example, for a purchase transaction, the target investor shard allocates a thread from its independent thread pool to execute the transaction, including locking the account row and performing the reservation operation for the balance change, and the target fund product shard allocates a thread from its independent thread pool to execute the transaction, including locking the fund product row and performing the reservation operation for the share change.
In the fund purchase scenario, after the target investor shard receives the pre-commit instruction, it selects one thread from the shard's exclusive thread pool to execute the transaction. The thread locates the corresponding account data row based on the investor identification and locks the row to prevent other transactions from concurrently modifying the account balance. After locking succeeds, the thread checks whether the account's available balance covers the purchase amount. If the check passes, the thread transfers the purchase amount from the available-balance field to the reserved-frozen field; for example, if the account's original available balance is 100,000 yuan and a 50,000 yuan purchase is made, the available balance is updated to 50,000 yuan and the reserved frozen amount increases by 50,000 yuan. This operation generates a pre-commit log locally on the shard, recording the transaction identification, the account change details, and the current transaction state, for restoring data consistency on subsequent commit or rollback. After the process completes, the thread returns a pre-commit success response to the coordinator.
Synchronously, after receiving the pre-commit request, the target fund product shard allocates a thread from its independent thread pool to handle the fund share operation. The thread locks the fund product data row and checks whether the currently available share covers the requested quantity. For example, a fund with a total share of 1,000,000 units has sold 800,000, leaving 200,000 available; if 100,000 units are purchased, the thread adjusts the available share to 100,000 and transfers 100,000 units to the reserved-frozen-share field. This operation also generates a pre-commit log, recording the fund code, the share change value, and the operation timestamp. Persistent storage of the pre-commit log ensures that the transaction state can be reconstructed from the log even if the system fails. After the process completes, the thread returns a pre-commit success response to the coordinator.
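The reservation operation of the pre-commit phase, together with its shard-local pre-commit log, might look like the following sketch. The names Account, PreCommitLog, and reserve are illustrative, and amounts are held in integer cents to avoid floating-point error; this is not the patent's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    available: int        # amounts in cents to avoid float error
    frozen: int = 0       # reserved-frozen field

@dataclass
class PreCommitLog:
    entries: list = field(default_factory=list)

def reserve(account, txn_id, amount, log):
    """Pre-commit: move `amount` from available to the reserved-frozen field
    and record the change in the shard-local pre-commit log (sketch)."""
    if account.available < amount:
        return False  # insufficient balance -> pre-commit fails
    before = (account.available, account.frozen)
    account.available -= amount
    account.frozen += amount
    log.entries.append({
        "txn_id": txn_id,
        "before": before,
        "after": (account.available, account.frozen),
    })
    return True
```

The same reserve-then-log pattern applies on the fund product side, with available shares in place of the account balance.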
According to one example, for a redemption transaction, the target investor shard allocates a thread from its independent thread pool to execute the transaction, including locking the held-share row and performing the share-change reservation operation, and the target fund product shard allocates a thread from its independent thread pool to execute the transaction, including locking the fund product row and performing the share-change reservation operation.
In a fund redemption scenario, the processing thread of the target investor shard first locks the data row for the particular fund shares held by the investor. For example, if an investor holds 50,000 units of a fund, the thread verifies whether the available shares cover the redemption quantity. If 30,000 units are redeemed, the thread reduces the available shares from 50,000 to 20,000 while booking 30,000 units into the reserved-frozen-share field, preventing other redemption requests from deducting them again. The pre-commit log records the investor identification, the fund code, and the share change trail in detail, providing a traceability basis for subsequent operations.
The target fund product shard's handling of a redemption depends on the fund type. For a closed-end fund, the processing thread locks the fund product row and reduces the total share by the corresponding amount; for example, an original total share of 1,000,000 units becomes 970,000 after a 30,000-unit redemption, which is recorded in the pre-commit log and marked with the fund type. An open-end fund does not need its total share adjusted, but its net-value calculation basis is updated. All operations are solidified through the pre-commit log, ensuring the transaction can be precisely rolled back or resumed after an interruption.
The design of independent thread pools within shards completely decouples investor account operations from fund product operations. Each thread accesses only local shard data during processing, and row-level locks enable multiple transactions to execute in parallel within the shard. For example, when multiple purchase requests arrive at an investor shard simultaneously, the thread pool can allocate different threads to handle the account locking and balance reservation of different investors, avoiding the processing delay a single blocked thread would cause. This mechanism ensures transaction isolation and data integrity within a single shard while improving concurrent processing capability.
The core role of the pre-commit log is to build an atomicity guarantee for the transaction operations. The transaction identifier, the pre-change value, the post-change value, and the operation timestamp recorded in the log form a complete operation chain. In the global commit stage, each shard performs the final data persistence according to the log content; if rollback is required, the reserved fields are restored to their original state according to the log. This mechanism allows the system to guarantee eventual data consistency through log replay in the face of node failures or network partitions.
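The commit/rollback semantics driven by the pre-commit log can be sketched with a minimal in-memory model. All names are illustrative, and a real shard would persist the log before acknowledging pre-commit; this only shows how the log entry makes both outcomes reachable.

```python
def apply_reserve(row, txn_id, amount, log):
    """Pre-commit: freeze `amount` and record the pre-change snapshot."""
    log[txn_id] = {"before": dict(row), "amount": amount}
    row["available"] -= amount
    row["frozen"] += amount

def commit(row, txn_id, log):
    """Global commit: the frozen amount becomes a real deduction."""
    entry = log.pop(txn_id)
    row["frozen"] -= entry["amount"]   # leaves only `available` reduced

def rollback(row, txn_id, log):
    """Global rollback: restore the exact pre-transaction values."""
    entry = log.pop(txn_id)
    row.clear()
    row.update(entry["before"])
```

Because the log holds both the pre-change snapshot and the reserved amount, either path can be replayed after a crash: commit finalizes the reservation, rollback restores the snapshot.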
In some embodiments, the target investor shard applies row-level pessimistic locks to transaction object rows (e.g., account rows and held-share rows).
The target fund product shard then distinguishes hot funds from normal funds. If the target fund product shard is a normal fund product shard, the lock on the fund product row adopts an optimistic lock based on version numbers.
In some embodiments, the coordinator sends the pre-commit request in parallel to the target investor shard and the target fund product shard, accompanied by the transaction details and a lock type identifier (hot/cold fund). Accordingly, the investor shard always uses pessimistic locks (funds operations must be strongly consistent), while the fund product shard dynamically selects its locking strategy by fund type: pessimistic locks for hot funds, and optimistic locks with version numbers for normal funds.
While the coordinator sends the pre-commit request to the target shards, the system dynamically selects a locking mechanism according to the type of fund involved in the transaction to ensure data consistency. When constructing the pre-commit request message, the coordinator attaches the fund product's lock type identifier: the investor shard always adopts a pessimistic lock strategy, while the fund product shard switches lock type dynamically according to the identifier. For a fund product marked hot, the coordinator explicitly requires the fund shard to use a pessimistic lock in the request; for a normal fund product, an optimistic lock mechanism based on version numbers is enabled.
After the target investor shard receives the pre-commit request, it executes a unified pessimistic lock flow regardless of the fund type involved. The processing thread locates the account data row based on the investor identification and applies a row-level lock to prevent other transactions from modifying the account balance or held shares. For example, in a purchase transaction, after the thread locks the investor account row, the purchase amount is transferred from the available balance to the reserved-frozen field; in a redemption transaction, the investor's held-share row is locked and the redemption quantity frozen. After the operation completes, the thread records the values before and after the account change, the timestamp, and the transaction identifier in the local pre-commit log.
The processing logic of the target fund product shard adjusts dynamically according to the lock type identifier. For hot funds, the thread locks the fund product data row directly and performs the deduction and reservation of available shares; for example, when handling a hot-fund purchase, the thread reduces the available shares, books them into the reserved-frozen field, and generates a pre-commit log containing the fund code, the share change value, and the lock status. For a normal fund, the thread first reads the current data's version number and checks version consistency at update time: if the versions match, the available shares are deducted and the version number incremented; if the versions conflict, an error is returned, triggering the retry flow. All operations record a pre-commit log with the version number, for transaction recovery and conflict detection.
In the scenario where the target fund product shard is a normal fund product shard, an optimistic locking mechanism based on version numbers is adopted for locking the fund product row. When the pre-commit request involves a share change of a normal fund product, the worker thread first reads the current fund product's total share data and the associated version number from the target fund product shard. The version number is an integer value bound to the fund product data row, initialized to zero when the row is first created and incremented after each successful update. During transaction processing, the read version number is held temporarily in local memory as the basis for subsequent verification.
After receiving the pre-commit instruction, the worker thread executes an atomic compare-and-swap operation, comparing the version number in the request with the version number in the current data row. If the version numbers are inconsistent, a data conflict is judged to exist; the update request is rejected and an error code returned. After the version check fails, the coordinator triggers the transaction rollback flow, releases the locked investor account resources, returns an operation-failure prompt to the client, and recommends that the user initiate the transaction again.
For version number management, the system designs a dedicated field in the fund product shard's database table to store the version information, which is bound to the fund product data row; any update operation must explicitly specify the expected version number. On each data change, the worker thread reads, checks, and increments the version number within the transaction context, ensuring that the version-state change and the business operation are atomic. This mechanism avoids the resource contention caused by explicit locking and achieves concurrency control through lightweight version comparison.
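The version-number check is essentially a compare-and-swap. In the minimal sketch below, the class and method names are illustrative, and an in-process mutex stands in for the atomicity that a database UPDATE with a version predicate would provide.

```python
import threading

class FundRow:
    """Fund product row with a version column; an update succeeds only when
    the caller's expected version matches (optimistic-lock sketch)."""

    def __init__(self, available_shares):
        self.available_shares = available_shares
        self.version = 0
        self._mutex = threading.Lock()  # stands in for the DB's atomic update

    def cas_deduct(self, expected_version, shares):
        with self._mutex:
            if self.version != expected_version:
                return False  # version conflict: caller rolls back and retries
            if self.available_shares < shares:
                return False  # insufficient shares
            self.available_shares -= shares
            self.version += 1
            return True
```

A stale reader fails the version check without ever holding a long-lived lock, which is what keeps contention low for normal fund shards.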
The combination of optimistic locking and version numbers brings several advantages. By shortening the holding time of explicit locks, it markedly reduces the probability of lock contention in cross-shard transactions, so normal product shards maintain stable throughput even under high concurrency. The atomic increment of the version number ensures that, even when concurrent conflicts occur, the system preserves strong data consistency through fast rollback and retry, avoiding the dirty reads or lost updates that can arise in traditional optimistic locking schemes. Combined with the dynamic shard routing strategy, normal product shards and hot product shards can select differentiated concurrency-control models, optimizing overall resource utilization. This differentiated lock strategy for different shard types guarantees transaction consistency while jointly improving system elasticity and processing efficiency, and is especially suited to business scenarios where fund trading volume fluctuates markedly.
The two lock mechanisms complement each other in the transaction flow. Pessimistic locks, by forcing exclusive access, ensure strong data consistency in hot-fund high-frequency trading scenarios and avoid the share oversell problem that concurrent modification would cause. Optimistic locks reduce the processing overhead of normal fund shards through lock-free checking, allowing conflict-free parallel transactions to complete quickly. Unified management of the pre-commit log lets both lock strategies use the same data recovery mechanism in the rollback or commit stage, ensuring transaction atomicity.
The hybrid lock mechanism intelligently adapts to the concurrency demands of different business scenarios, ensuring the consistency of core financial data while markedly improving system throughput. By adopting pessimistic locks for core funds operations to guarantee atomicity and optimistic locks for normal fund shards to reduce contention, the hybrid strategy lets the system adapt to scenarios with different conflict probabilities, markedly lowering the lock conflict rate and raising the transaction success rate.
Further, a multi-level automatic handling scheme is designed for lock timeout and deadlock problems. When a transaction fails to acquire the lock resources it needs within a preset time, the system triggers the lock-timeout protection flow: the locks held by the transaction are automatically released, the data changes of the pre-commit stage are rolled back, and a timeout error code is returned to the coordinator. For example, if an investor shard's processing thread attempts to lock an account and is unsuccessful for 500 milliseconds, a lock timeout is determined; the current transaction is immediately terminated and its resources released. The timeout event is written to a diagnostic log for subsequent root-cause analysis.
For the deadlock scenario, the system periodically scans the lock-wait relationships of all active transactions to build a transaction wait-for graph. When a closed-loop wait path is detected, the transaction with the smallest rollback cost is automatically selected as the victim; for example, a transaction that has not yet entered the pre-commit phase is terminated preferentially. After the victim is selected, the system rolls back all of its operations, releases the associated lock resources, and, through the coordinator, informs the relevant shards to clear the transaction's intermediate state. The deadlock detection result and the handling decision are synchronously recorded to an audit log, ensuring traceability of the operation.
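The closed-loop wait detection can be sketched as cycle detection over a wait-for graph. This is an illustrative sketch assuming each transaction waits on at most one other; victim selection by rollback cost would follow once a cycle is returned.

```python
def find_deadlock_cycle(wait_for):
    """Detect a closed wait loop in a txn -> txn wait-for mapping via DFS.
    Returns one cycle as a list of transaction ids, or None."""
    visited, on_stack = set(), []

    def dfs(node):
        if node in on_stack:
            # Found a back edge: the cycle is the stack suffix from `node`.
            return on_stack[on_stack.index(node):]
        if node in visited:
            return None
        visited.add(node)
        on_stack.append(node)
        nxt = wait_for.get(node)
        cycle = dfs(nxt) if nxt is not None else None
        on_stack.pop()
        return cycle

    for txn in wait_for:
        cycle = dfs(txn)
        if cycle:
            return cycle
    return None
```

When a cycle is found, the scheduler would pick the member with the smallest rollback cost (e.g., one not yet pre-committed) as the victim.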
At the automatic recovery level, the system provides an intelligent retry mechanism for terminated transactions. The coordinator evaluates the feasibility of a retry according to the transaction type and the error cause: a failure caused by lock contention can be resubmitted after a random delay, while an error caused by data inconsistency is marked as requiring manual intervention. During a retry, the system preferentially allocates resources physically close to the original shards, reducing the impact of network latency. All retry operations inherit the original transaction's unique identifier, avoiding duplicate execution or state confusion.
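The "delayed for a random time and resubmitted" behavior can be sketched as exponential backoff with jitter. The base delay, growth factor, attempt count, and jitter bound below are all assumptions for illustration.

```python
import random

def retry_schedule(base_ms=50, factor=2, max_attempts=5,
                   max_jitter_ms=20, seed=None):
    """Delay sequence (in ms) for resubmitting a lock-contention failure:
    exponential backoff plus random jitter, so competing retries spread out
    instead of colliding again. All parameters are illustrative."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_attempts):
        delays.append(base_ms * (factor ** attempt) + rng.randint(0, max_jitter_ms))
    return delays
```

The jitter term is what breaks the symmetry between two transactions that failed on the same lock at the same moment; without it they would retry in lockstep and conflict again.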
This layered, autonomous exception-handling mechanism markedly improves the system's self-repair capability in complex concurrency scenarios. The cooperation of lock timeout and deadlock detection shortens the service interruption caused by abnormal transactions without requiring manual intervention. In particular, the intelligent retry strategy reduces the risk of cascading failures in cross-shard transactions; by dynamically adjusting the retry interval and path, the system effectively avoids a global avalanche effect triggered by a local anomaly.
In step S5, upon receiving the pre-commit completion responses asynchronously returned by the target investor shard and the target fund product shard, the coordinator transmits commit instructions in batch. Next, in step S6, the target investor shard and the target fund product shard complete the data changes for the current transaction.
After confirming that both the target investor shard and the fund product shard of a transaction have returned pre-commit success responses, the coordinator merges the commit instructions of multiple transactions in the same batch into one batch request packet and sends it to all the transactions' relevant shards in a single network communication. Each instruction carries the transaction's unique identifier and an operation type identifier, so the shards can precisely match the data changes reserved in the pre-commit stage. For example, if a batch contains 100 purchase transactions, the coordinator combines them into one commit instruction packet and sends it to the involved investor and fund product shards at once, rather than one by one, significantly reducing the number of network interactions.
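The coordinator's batch packaging can be sketched as grouping per-transaction commit instructions by target shard, so the number of network messages tracks the number of distinct shards rather than the number of transactions. The field names below are illustrative.

```python
from collections import defaultdict

def pack_commit_batch(transactions):
    """Group commit instructions by target shard: one packet per shard per
    batch, each entry carrying the txn id and operation type (sketch)."""
    packets = defaultdict(list)
    for txn in transactions:
        for shard in txn["shards"]:
            packets[shard].append({"txn_id": txn["txn_id"], "op": "COMMIT"})
    return dict(packets)
```

Two transactions touching the same fund product shard thus share one packet to that shard instead of generating two round trips.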
After receiving the commit instruction, the target shard immediately retrieves the corresponding transaction record from its local pre-commit log and verifies the consistency of the transaction state and data version number. For an investor shard, the processing thread moves the amount or shares frozen during pre-commit from the reserved field to the final status field; for example, the frozen funds reserved in a purchase transaction are actually deducted from the account and the total balance updated. For a fund product shard, the thread marks the reserved frozen shares as sold and synchronously updates the fund's total shares or net-value data. All change operations complete within one atomic transaction, ensuring the indivisibility of the commit phase. After the operation completes, the shard cleans up the temporary records in the pre-commit log, releases the associated lock resources, and sends a commit-complete acknowledgement to the coordinator.
In addition, the shards write data changes to the underlying storage through an asynchronous persistence mechanism. The write process adopts a multi-replica confirmation strategy: the primary shard synchronizes the change to at least two replica nodes, and the transaction is marked as finally effective after a majority of nodes confirm the write. If a replica node times out, the system automatically switches to a standby node to resynchronize, ensuring high data availability. For example, a fund share change needs to be synchronized to three storage nodes; once two nodes confirm the write, it is considered successful, and an anomaly on the third node does not affect the transaction's completion status.
The batch commit and atomic execution mechanism markedly improves system throughput by reducing network interactions and resource contention. The coordinator's batch instruction packaging strategy cuts network transmission overhead, and atomic operations within shards guarantee data consistency across transactions. In particular, the collaborative design of asynchronous persistence and multi-replica confirmation improves write efficiency while also strengthening the system's resilience to regional faults: when some storage nodes are unavailable, the system can still solidify data through the remaining nodes, improving overall service availability. This characteristic performs well in cross-region distributed deployments, providing stable and reliable technical support for globalized fund transaction business.
In some embodiments, if any of the shard pre-commit times out or fails, the coordinator triggers a global rollback, all shards undo the reservation operation.
In the distributed transaction flow, if any participating shard returns a timeout or failure response in the pre-commit stage, the coordinator immediately triggers the global rollback flow. The coordinator first sends a rollback instruction to every shard that returned pre-commit success; the instruction carries the transaction's unique identifier and a rollback type flag, ensuring the shards can precisely identify the operations to be undone. For example, when the investor shard's pre-commit succeeds but the fund shard fails due to insufficient shares, the coordinator sends a rollback instruction to the investor shard asking it to withdraw the reserved frozen amount, while informing the fund shard to terminate the current transaction.
Upon receiving the rollback instruction, the target shard immediately retrieves the transaction's detailed operation record from the local pre-commit log. The processing thread reverses the reservation operations recorded in the log: for an investor shard, the reserved frozen amount or shares are released back to the available field; for a fund shard, the reserved frozen shares are returned to the available share pool. All rollback operations complete within one atomic transaction, ensuring the data state is restored to the exact values before the transaction was initiated. For example, the 50,000 yuan frozen in the investor account during the pre-commit stage of a purchase transaction is added back to the available-balance field on rollback, and the 100,000 units reserved on the fund product side are synchronously released.
After completing the data recovery, the shard cleans up the temporary records in the pre-commit log, releases the lock resources occupied during the transaction, and sends a rollback-complete confirmation to the coordinator. The coordinator monitors the rollback state of all shards; if a shard fails to respond in time due to a network fault, an asynchronous retry mechanism starts, repeatedly sending the rollback instruction according to an exponential backoff strategy until it succeeds. Meanwhile, the system records the abnormal transaction to an audit log and marks it as a pending item for manual review, preventing extreme anomalies that the automatic retry mechanism cannot resolve from lingering indefinitely.
The global rollback mechanism tracks reverse operations by transaction identifier and executes them atomically, ensuring that cross-shard data can be restored to a consistent state when any link fails. The coordinator's unified scheduling, combined with the shards' autonomous recovery, makes the rollback process transparent to the business, and the system can finish cleaning up an abnormal transaction within seconds.
In step S7, the coordinator writes the transaction flow record of the current transaction into the time-series database.
According to one example, the coordinator writes the transaction details (time, amount, share, participants) of the current transaction to a time-series database (e.g., InfluxDB), which supports high-frequency writes and real-time queries.
In some embodiments, the fund registration system also performs real-time reconciliation.
According to one example, a Binlog listening tool captures data change events and pushes them to a Kafka cluster in real time; a reconciliation service consumes the data change events from the Kafka cluster and compares the fund shares in the business system with the fund ledgers in the accounting system;
Wherein, the check rules are:
Σ personal shares = product total shares;
funds change = sum of transaction flows.
The fund registration system ensures consistency between business data and financial data through a real-time reconciliation mechanism. The system deploys a database-log monitoring component that captures data change events on the investor fragments and fund fragments in real time. Each time account funds change or fund shares are adjusted, the monitoring component extracts the values before and after the change, the operation time, and the transaction identifier, and encapsulates them into a structured event message. For example, when investor A purchases 50,000 yuan of a fund, the monitoring component captures two associated events, a 50,000-yuan account deduction and a 50,000-share fund-share increase, and attaches the same transaction identifier to mark them as related operations.
Event messages are pushed in real time to a distributed message-queue cluster for temporary storage; the message queue uses partition-level ordering guarantees so that multiple events of the same transaction are stored in the same partition in the order they occurred. The reconciliation service subscribes to the event streams in the message queue and aggregates related operations by transaction identifier. The service maintains two core verification models: a fund-share-sum verification model tracks the accumulated shares held by all investors under each product and compares it in real time with the product's total-share field, and a funds-change verification model aggregates the direction and amount of all transaction flows and checks them one by one against the fund general ledger of the accounting system.
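The per-transaction ordering guarantee above reduces to routing every event of a transaction by the same key. A minimal sketch, assuming CRC32 key hashing (Kafka's default partitioner achieves the same effect when the transaction identifier is used as the message key):

```python
from zlib import crc32

def partition_for(transaction_id: str, num_partitions: int) -> int:
    """Map a transaction identifier to a partition; all events sharing the
    identifier land in the same partition, preserving their relative order."""
    return crc32(transaction_id.encode()) % num_partitions
```

Because partitioning depends only on the transaction identifier, the account-deduction event and the share-increase event of one purchase are always consumed in their original sequence.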
During verification, if the sum of shares deviates from the product's total shares, or the funds change does not match the sum of the transaction flows, the reconciliation service immediately raises an alarm. The system automatically freezes the transaction privileges of the affected investors and starts a difference-repair flow: it locates the specific operation node via the transaction identifier, reconstructs the data-change scene by replaying the pre-commit log, and executes an incremental repair or a reversal transaction. For example, when the total shares of a fund product exceed the sum held by investors by 10,000 shares, the system automatically initiates a reverse transaction to adjust the total shares and traces the complete life cycle of the abnormal transaction.
The real-time reconciliation mechanism achieves full-link transparent monitoring of funds and shares by combining event tracing with stream computing. The ordered-partition design of the message queue preserves causal consistency for cross-fragment transactions, so that even under high concurrency the reconciliation service can accurately reconstruct the transaction timeline. The mechanism strengthens the system's ability to detect hidden data drift: by continuously tracking the accumulated trend of tiny deviations, the system can raise an early warning before a data anomaly reaches the risk threshold, preventing small quantitative deviations from accumulating into a qualitative failure. In long-term operation this feature has identified multiple share-deviation cases caused by rounding errors, eliminating potential risks at an early stage and providing an intelligent data-safety barrier for the stable operation of the financial system.
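The two check rules stated earlier can be sketched directly. The data shapes below (a holdings dict, a list of flow dicts) are illustrative assumptions, not the system's actual data model:

```python
def check_share_sum(holdings, product_total):
    """Rule 1: sum of all investors' personal shares == product total shares."""
    return sum(holdings.values()) == product_total

def check_funds_change(flows, ledger_delta):
    """Rule 2: funds change == signed sum of transaction flows
    (purchases add funds, redemptions subtract)."""
    total = sum(f["amount"] if f["type"] == "buy" else -f["amount"] for f in flows)
    return total == ledger_delta
```

A failing check is exactly the alarm condition: either the share sum drifts from the product total, or the ledger delta disagrees with the flow sum.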
In some embodiments, data anomaly scenarios may also occur in the system, for example:
If data is lost, the missing operations are backfilled by searching the operation log in reverse (e.g., replaying the missing INSERT statements).
If states are inconsistent, a reversal (flushing) transaction is triggered (e.g., a reverse redemption operation is performed).
For data exception scenarios in the transaction system, the following systematic handling mechanisms are provided to ensure data integrity and business continuity:
For the data-loss scenario, the system recovers data through operation-log tracking and an automatic repair mechanism. Each fragment node generates a structured operation log during transaction processing; each log record includes core fields such as the transaction serial number, the operation type, the primary key of the affected data, and the execution timestamp. Log storage uses a double-write mode, writing simultaneously to the local disk and a remote log service. A data-integrity checking service periodically scans the continuity of transaction serial numbers in the time-series database; when it detects a serial-number gap or an abnormal timestamp interval, it automatically triggers a log back-check flow. The back-check flow takes the time window of the missing serial numbers, extracts the operation records of that period from the log service, and reconstructs the missing database operation statements. The backfill uses an idempotent retry strategy that selectively replays INSERT operations, avoiding primary-key conflicts caused by duplicate writes.
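The continuity scan and idempotent backfill can be sketched as follows, assuming (as an illustration) that serial numbers are consecutive integers per fragment; the function names and the `applied` bookkeeping set are assumptions of this sketch:

```python
def find_missing_serials(serials):
    """Return the serial numbers absent from an otherwise consecutive range;
    a non-empty result triggers the log back-check flow."""
    if not serials:
        return []
    present = set(serials)
    return [n for n in range(min(serials), max(serials) + 1) if n not in present]

def replay_missing(missing, log_lookup, apply_op, applied):
    """Idempotent backfill: replay a logged operation only when its serial
    number has not already been applied, avoiding primary-key conflicts
    from duplicate writes."""
    for n in missing:
        if n in applied:
            continue                 # already replayed elsewhere: skip
        op = log_lookup(n)
        if op is not None:
            apply_op(op)
            applied.add(n)
```

Tracking applied serial numbers makes the replay safe to retry: running it twice produces the same database state as running it once.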
For the state-inconsistency scenario, the system repairs state through a transaction state machine and an automatic reversal mechanism. The distributed transaction coordinator continuously monitors the commit state of each fragment during the commit phase and marks a transaction as suspicious when it detects a fragment response timeout or a conflicting verification result. A reversal (flushing) engine periodically scans transactions in the suspicious state and generates reverse operation instructions according to the original transaction type. For example, for an incompletely committed fund-redemption transaction, it automatically creates a share add-back operation and a fund-deduction withdrawal operation of the corresponding amounts. Before executing a reversal, the engine verifies the reversal state of the original transaction and embeds the original transaction's fingerprint into the reversal transaction as an association identifier, preventing repeated or erroneous reversals. Reversal transactions share the same locking mechanism and transaction protocol as normal transactions, ensuring the atomicity and consistency of the reverse operations.
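Generating a reversal with an embedded fingerprint might look like the following sketch. The field names, the SHA-256 fingerprint scheme, and the `already_reversed` set are illustrative assumptions:

```python
import hashlib

def fingerprint(txn):
    """Deterministic fingerprint of the original transaction, used as the
    association identifier embedded in its reversal."""
    raw = f"{txn['txn_id']}|{txn['type']}|{txn['amount']}|{txn['shares']}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def make_reversal(txn, already_reversed):
    """Build the reverse operations for an incompletely committed redemption:
    a share add-back plus a fund-deduction withdrawal. Returns None when the
    transaction's fingerprint was already reversed, preventing double reversal."""
    fp = fingerprint(txn)
    if fp in already_reversed:
        return None
    already_reversed.add(fp)
    return {
        "type": "reversal",
        "orig_fingerprint": fp,      # traces the reversal back to the original
        "ops": [
            {"action": "share_add_back", "shares": txn["shares"]},
            {"action": "fund_deduct_withdraw", "amount": txn["amount"]},
        ],
    }
```

Because the fingerprint is derived only from the original transaction's content, two scans that pick up the same suspicious transaction produce at most one reversal between them.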
Through precise tracing via the operation log and intelligent matching by the reversal mechanism, the system achieves fault self-healing while maintaining high throughput. The dual-storage mode of the operation log solves the problem of recovery failure caused by single-point log loss, and the fingerprint-based reversal-association mechanism effectively solves the repair-delay problem of traditional scheduled reconciliation schemes. In particular, in the fragment dynamic-migration scenario the system can still maintain the repair integrity of cross-fragment transactions; this property keeps financial data consistent during elastic scaling and significantly improves the fault tolerance of the distributed architecture.
It should be noted that embodiments of the present disclosure may be implemented in software and/or a combination of software and hardware, for example, using an Application Specific Integrated Circuit (ASIC), a general purpose computer, or any other similar hardware device. In one embodiment, the software programs of the various embodiments of the present disclosure may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the various embodiments of the present disclosure may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of embodiments of the present disclosure may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Additionally, at least a portion of the various embodiments of the present disclosure may be implemented as a computer program product, e.g., computer program instructions, which when executed by a computing device, may invoke or provide methods and/or techniques in accordance with the various embodiments of the present disclosure by way of operation of the computing device. Program instructions for invoking/providing the methods of the embodiments of the present disclosure may be stored in fixed or removable recording media and/or transmitted via a data stream in a broadcast or other signal bearing medium and/or stored within a working memory of a computing device operating according to the program instructions.
It will be apparent to those skilled in the art that the embodiments of the present disclosure are not limited to the details of the above-described exemplary embodiments, and that the embodiments of the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of embodiments of the disclosure being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the system claims can also be implemented by means of software or hardware by means of one unit or means. The terms first, second, etc. are used to denote a name, but not any particular order.
Claims (10)
1. A method of fund registration by a distributed transaction mechanism, wherein the method comprises the steps of:
receiving a transaction request submitted by a user, wherein the transaction request comprises an investor identifier, a fund product code, a transaction type and a transaction amount/share;
Inquiring a global routing table according to the investor identifier to determine a corresponding target investor fragment, and inquiring the global routing table according to the fund product code to determine a corresponding target fund product fragment, wherein hot fund products are dynamically identified and stored in independent fragments;
Transmitting a pre-commit request for the current transaction to the target investor fragment and the target fund product fragment in parallel;
Allocating a thread from an independent thread pool in each of the target investor fragment and the target fund product fragment to execute the transaction operations of the current transaction, wherein the transaction object row is locked and a reservation operation for the balance/share change is executed;
When receiving pre-commit completion responses asynchronously returned by the target investor fragment and the target fund product fragment, sending commit instructions in batches;
Completing the data changes for the current transaction in the target investor fragment and the target fund product fragment;
writing the transaction flow of the current transaction into a time-series database.
2. The method of claim 1, wherein a base set of hot fund products is determined according to transaction frequency statistics within a sliding time window, and the base set is expanded in combination with a gradient threshold to dynamically identify the hot fund products.
3. The method according to claim 2, wherein the method further comprises the steps of:
and the data of a dynamically identified hot fund product is migrated from its source fragment to a new independent fragment through double-write verification and progressive routing switchover.
4. The method of claim 1, wherein,
Allocating a thread from the independent thread pool of the target investor fragment to execute the transaction operations of the current transaction, including locking the transaction object row and executing a reservation operation for the balance/share change;
And allocating a thread from the independent thread pool of the target fund product fragment to execute the transaction operations of the current transaction, including locking the fund product row and executing a reservation operation for the share change.
5. The method of claim 4, wherein,
The target investor fragment locks the transaction object row with a pessimistic lock;
If the target fund product fragment is a hot fund product fragment, the fund product row is locked with a pessimistic lock; if the target fund product fragment is a common fund product fragment, the fund product row is locked with a version-number-based optimistic lock.
6. The method of claim 1, wherein the method further comprises the steps of:
if the pre-commit of any fragment times out or fails, a global rollback is triggered and all fragments cancel their reservation operations.
7. The method of claim 1, wherein the method further comprises the steps of:
in the target investor fragment and the target fund product fragment, recording all operations of the pre-commit stage in a pre-commit log, so as to ensure accurate rollback or resumed execution after the transaction is interrupted.
8. The method of claim 1, wherein the bulk send commit instruction specifically comprises:
the commit instructions of a plurality of transactions are packaged into a batch commit request package, and the batch commit request package is sent to the investor fragments and the fund product fragments associated with the plurality of transactions.
9. The method of claim 1, wherein the method further comprises the steps of:
Capturing a data change event through a Binlog monitoring tool;
Pushing the data change event to a Kafka cluster in real time;
The reconciliation service consumes the data change events in the Kafka cluster, and compares the fund shares in the business system with the fund ledger in the accounting system;
Wherein, the check rules are:
Σ personal shares = product total shares;
funds change = sum of transaction flows.
10. A distributed transaction mechanism based fund registration system, wherein the system comprises:
A gateway for receiving a transaction request submitted by a user, the transaction request including an investor identification, a fund product code, a transaction type, and a transaction amount/share;
The route decision module is used for inquiring a global routing table according to the investor identifier to determine a corresponding target investor fragment, and inquiring the global routing table according to the fund product code to determine a corresponding target fund product fragment, wherein hot fund products are dynamically identified and stored in independent fragments;
A coordinator for:
Transmitting a pre-commit request for the current transaction to the target investor fragment and the target fund product fragment in parallel;
When receiving pre-commit completion responses returned by the target investor fragment and the target fund product fragment, sending commit instructions in batches;
Writing the transaction flow of the current transaction into a time-series database;
The target investor fragment is configured to:
in response to the pre-commit request, allocate a thread from its own independent thread pool to execute the transaction operations of the current transaction, including locking the transaction object row and executing a reservation operation for the balance/share change;
respond to the commit instruction to complete the data changes for the current transaction;
The target fund product fragment is configured to:
in response to the pre-commit request, allocate a thread from its own independent thread pool to execute the transaction operations of the current transaction, including locking the fund product row and executing a reservation operation for the share change;
and respond to the commit instruction to complete the data changes for the current transaction.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510741335.7A CN120258997A (en) | 2025-06-05 | 2025-06-05 | A fund registration and transfer method and system based on distributed transaction mechanism |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120258997A true CN120258997A (en) | 2025-07-04 |
Family
ID=96197202
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150317349A1 (en) * | 2014-05-02 | 2015-11-05 | Facebook, Inc. | Providing eventual consistency for multi-shard transactions |
| CN110716961A (en) * | 2019-09-18 | 2020-01-21 | 平安银行股份有限公司 | Data processing method and device |
| CN113191906A (en) * | 2021-05-26 | 2021-07-30 | 中国建设银行股份有限公司 | Service data processing method and device, electronic equipment and storage medium |
| CN118760726A (en) * | 2024-09-05 | 2024-10-11 | 杭州乒乓智能技术有限公司 | A method for writing and querying massive account flow data in a database based on Kafka |
| CN119088882A (en) * | 2024-08-28 | 2024-12-06 | 招商银行股份有限公司 | Database splitting method, device, equipment and storage medium |
| CN119904228A (en) * | 2023-10-26 | 2025-04-29 | 腾讯科技(深圳)有限公司 | Transaction processing method, device, computer equipment and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |