HK1075308B - Method and system for validating remote database - Google Patents
- Publication number
- HK1075308B (application HK05107484.3A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- exception
- database
- update
- event
- remote
- Prior art date
Description
CROSS-REFERENCE TO PRIORITY REQUIREMENT/RELATED APPLICATIONS
This non-provisional application claims priority from U.S. provisional patent application 60/330,842 filed on 11/1/2001, which is incorporated herein by reference in its entirety, and U.S. provisional patent application 60/365,169 filed on 3/19/2002, which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the invention relate generally to computer databases. More specifically, the embodiments provide methods and systems for reliably verifying remote database updates.
Background
With the ever increasing size and highly distributed structure of databases, it has become increasingly difficult to ensure that related databases within a network contain the same version of data. If significant changes occur to one database, the other database needs to be updated as quickly as possible to include the changes. Making these updates may involve frequently moving large amounts of update data to multiple databases. The potential complexity of such a process can be significant.
This problem is compounded in systems where communication is unreliable. In such cases, data may be lost during transmission, so the data must be retransmitted and the other databases updated all over again. This duplication greatly reduces system efficiency and the degree to which each database contains up-to-date data.
Drawings
FIG. 1 is a block diagram of a system according to one embodiment of the invention;
FIG. 2 is a block diagram of a system hub according to one embodiment of the present invention;
FIG. 3 illustrates an exemplary transmission of a database update from a local database to a remote database according to one embodiment of the invention;
FIG. 4 illustrates sending a file according to one embodiment of the invention;
FIG. 5 illustrates initializing a sendfile according to one embodiment of the invention;
FIG. 6 is an illustrative timing diagram for sendfile and initializing sendfile generation in accordance with one embodiment of the present invention;
FIG. 7 is a flow diagram of one embodiment of the invention in which an update file for a local database may be generated;
FIG. 8 is a flow diagram of one embodiment of the invention in which a remote database may receive an update file from a local database;
FIG. 9 is a flow diagram of another embodiment of the present invention in which a remote database may receive and validate an update file from a local database;
FIG. 10A is a flow diagram of an embodiment of the invention in which an update file may be verified;
FIG. 10B is a flow diagram of another embodiment of the invention in which an update file may be verified;
FIG. 11 illustrates update file verification according to one embodiment of the invention.
Detailed Description
Embodiments of the present invention provide methods and systems for verifying remote database updates over a network. The local database records and remote database records may be compared and exceptions may be generated, where each exception describes a discrepancy between a remote and a local database record. An exception identifier may be associated with each exception, where the exception identifier may be associated with an identifier of the record. An event identifier may be associated with each event in the update, where the event identifier may be associated with an identifier of the record. The event and exception corresponding to a record may then be compared to determine whether the update is valid.
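The record-keyed comparison of events and exceptions described above can be sketched in a few lines of Python. Everything here is illustrative: the patent does not specify the data structures or the exact comparison rule, so the dictionary representation, the function name, and the "exception identifier at or after the event identifier means invalid" heuristic are all assumptions.

```python
def validate_update(events, exceptions):
    """Sketch of the validation idea: `events` and `exceptions` map a
    record identifier to an event/exception identifier (assumed to be
    comparable values). Returns the set of record identifiers whose
    update is considered invalid under this illustrative rule."""
    invalid = set()
    for record_id, exc_id in exceptions.items():
        event_id = events.get(record_id)
        # An exception with no matching update event, or one raised at or
        # after the event that touched the record, marks the record's
        # update as invalid in this sketch.
        if event_id is None or exc_id >= event_id:
            invalid.add(record_id)
    return invalid
```

A record whose exception predates its update event is treated as repaired by the update; records with later or unmatched exceptions are flagged.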
FIG. 1 is a block diagram illustrating a system according to one embodiment of the invention. Typically, the system 100 may be equipped with a large, memory-resident database that receives search requests and provides search responses over a network. For example, system 100 may be a symmetric multiprocessing (SMP) computer, such as the M80 or S80 manufactured by International Business Machines Corporation of Armonk, N.Y., the Sun Enterprise™ 10000 manufactured by Sun Microsystems, Inc. of Santa Clara, Calif., etc. System 100 may also be a multiprocessor personal computer, such as the Compaq ProLiant™ ML530 manufactured by Hewlett-Packard Company of Palo Alto, Calif. (comprising two Intel® Pentium® III 866 MHz processors). System 100 may also include a multiprocessing operating system, such as IBM AIX®, the Sun Solaris™ 8 Operating Environment, Red Hat Linux®, and so on. The system 100 may receive periodic updates over the network 124, which may be incorporated into the database while searches continue. By incorporating each update into the database without using database locking or access control, embodiments of the present invention can achieve very high database search and update throughput.
In one embodiment, system 100 may include at least one processor 102-1 coupled to a bus 101. Processor 102-1 may include an internal memory cache (e.g., an L1 cache, not shown for clarity). A secondary memory cache 103-1 (e.g., an L2 cache, an L2/L3 cache, etc.) may reside between the processor 102-1 and the bus 101. In a preferred embodiment, the system 100 may include a plurality of processors 102-1, …, 102-P coupled to the bus 101. A plurality of secondary memory caches 103-1, …, 103-P may also reside between the plurality of processors 102-1, …, 102-P and the bus 101 (e.g., a snoop structure), or alternatively at least one secondary memory cache 103-1 may be coupled to the bus 101 (e.g., a backing mode). The system 100 may include a memory 104, such as a random access memory (RAM) or the like, coupled to the bus 101 for storing information and instructions to be executed by the plurality of processors 102-1, …, 102-P.
The memory 104 may store a large database, for example, for converting internet domain names to internet addresses, for converting names or telephone numbers to network addresses, for providing and updating user profile data, for providing and updating current user data, etc. Advantageously, the size of the database and the number of conversions per second can be very large. For example, the memory 104 may include at least 64 GB of RAM and may be loaded with a 500M-record (i.e., 500 × 10^6 records) domain name database, a 500M-record user database, a 450M-record telephone number portability database, and so on.
In an exemplary 64-bit system architecture, for example, including at least one 64-bit processor 102-1 coupled to at least a 64-bit bus 101 and a 64-bit memory 104, an 8-byte pointer value may be written to a memory address on an 8-byte boundary (i.e., a memory address divisible by eight, e.g., 8N) using a single uninterruptible operation. Typically, the presence of the secondary memory cache 103-1 may simply delay the writing of the 8-byte pointer to the memory 104. For example, in one embodiment, the secondary memory cache 103-1 may be a snoop cache operating in a write-through mode, so that a single 8-byte store instruction may move eight bytes of data from the processor 102-1 to the memory 104 without interruption and in only two system clock cycles. In another embodiment, the secondary memory cache 103-1 may be a snoop cache operating in a write-back mode, so that the 8-byte pointer may be written to the secondary memory cache 103-1 first, which may then write the 8-byte pointer to the memory 104, for example, when the cache line in which the 8-byte pointer is stored is written to memory 104 (i.e., when a particular cache line, or the entire secondary memory cache, is "flushed").
In effect, from the perspective of processor 102-1, once the data is latched onto the output leads of processor 102-1, all eight bytes of data are written to memory 104 in one continuous, uninterrupted transfer, delayed, if at all, only by the action of the secondary memory cache 103-1. From the perspective of the processors 102-2, …, 102-P, once the data is latched onto the output leads of processor 102-1, all eight bytes of data are likewise written to memory 104 in one continuous, uninterrupted transfer, enforced by the cache coherency protocol of the secondary memory caches 103-1, …, 103-P, which may delay the write to memory 104, if at all.
However, if an 8-byte pointer value is written to a misaligned location within the memory 104, such as a memory address that crosses an 8-byte boundary, all eight bytes of data cannot be transferred from the processor 102-1 using a single 8-byte store instruction. Instead, the processor 102-1 may issue two separate store instructions. For example, if the memory address begins four bytes before an 8-byte boundary (e.g., 8N-4), the first store instruction transfers the four most significant bytes to the memory 104 (e.g., at 8N-4), while the second store instruction transfers the four least significant bytes to the memory 104 (e.g., at 8N). Importantly, between these two separate store instructions, processor 102-1 may be interrupted, or processor 102-1 may lose control of the bus 101 to another system component (e.g., processor 102-P, etc.). Thus, the pointer value residing within the memory 104 will be invalid until the processor 102-1 completes the second store instruction. If another component performs a single uninterruptible read of this memory location in the interim, an invalid value is returned that may be mistaken for a valid one.
Similarly, a new 4-byte pointer value may be written to a memory address divisible by four (e.g., 4N) using a single uninterruptible operation. Note that in the example discussed above, a single store instruction may be used to write a 4-byte pointer value to an 8N-4 memory location. Of course, if a 4-byte pointer value is written to a location that crosses a 4-byte boundary (e.g., 4N-2), then all four bytes of data cannot be transferred from processor 102-1 using a single store instruction, and the pointer value residing in memory 104 may be invalid for some period of time.
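The alignment arithmetic above can be captured in a short sketch. This is a model of the rule as stated in the text, not of any real processor's memory system; the function name is invented for illustration.

```python
def stores_needed(address, size):
    """Return 1 if a `size`-byte pointer store at `address` can complete
    as a single uninterruptible operation (the address lies on a
    size-byte boundary, i.e., is divisible by size), else 2 (the value
    straddles a boundary and must be split into two store instructions)."""
    return 1 if address % size == 0 else 2

# The 8N-4 case from the text: an 8-byte value at 8N-4 straddles an
# 8-byte boundary (two stores), but a 4-byte value at the same address
# is 4-byte aligned (one store).
```

Between the two stores of the misaligned case, another component reading the location can observe a half-written, invalid pointer, which is exactly the hazard the aligned single-store path avoids.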
System 100 may also include a read-only memory (ROM) 106 or other static storage device coupled to bus 101 for storing static information and instructions for processor 102-1. A storage device 108, such as a magnetic disk or optical disk, may be coupled to bus 101 for storing information and instructions. System 100 may also include a display 110 (e.g., an LCD monitor) and an input device 112 (e.g., a keyboard, mouse, trackball, etc.) coupled to bus 101. The system 100 may include a plurality of network interfaces 114-1, …, 114-O that can send and receive electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. In one embodiment, network interface 114-1 may be coupled to bus 101 and a local area network (LAN) 122, while network interface 114-O may be coupled to bus 101 and a wide area network (WAN) 124. The plurality of network interfaces 114-1, …, 114-O may support various network protocols including, for example, gigabit Ethernet (e.g., IEEE Standard 802.3-2002, published 2002), Fibre Channel (e.g., ANSI Standard X3.230-1994, published 1994), and so on. A plurality of network computers 120-1, …, 120-N may be coupled to the LAN 122 and the WAN 124. In one embodiment, the LAN 122 and the WAN 124 may be physically distinct networks, while in another embodiment they may be connected via a network gateway or router (not shown for clarity). Alternatively, the LAN 122 and WAN 124 may be the same network.
As noted above, system 100 may provide DNS resolution services. In a DNS resolution embodiment, DNS resolution services may generally be divided between network transport and data lookup functions. For example, the system 100 may be a back-end look-up engine (LUE) optimized for data lookup on large data sets, and the plurality of network computers 120-2, …, 120-N may be a plurality of front-end protocol engines (PEs) optimized for network processing and transport. The LUE may be a powerful multiprocessor server that stores the entire set of DNS records in memory 104 to facilitate high-speed, high-throughput searching and updating. In an alternative embodiment, the DNS resolution service may be provided by a group of powerful multiprocessor servers, or LUEs, each storing a subset of the entire DNS record set in memory to facilitate high-speed, high-throughput searching and updating.
Conversely, the multiple PEs may be generic, low-profile, PC-based machines running an efficient multitasking operating system (e.g., Red Hat Linux®) that minimizes the network processing and transport load on the LUE, so as to maximize the resources available for DNS resolution. The PEs may handle the subtleties of the wire-format DNS protocol, respond to invalid DNS queries, and multiplex valid DNS queries to the LUE over LAN 122. In an alternative embodiment that includes multiple LUEs storing subsets of the DNS records, a PE may determine which LUE should receive each valid DNS query and multiplex valid DNS queries to the appropriate LUEs. The number of PEs per LUE may be determined, for example, by the number of DNS queries to be processed per second and the performance characteristics of the particular system. Other metrics may also be used to determine the appropriate mapping ratio and behavior.
In general, other large capacity query-based embodiments may be supported, including, for example, telephone number resolution, SS7 signaling processing, geographic location determination, telephone number-to-user mapping, user location and presence determination, and so forth.
In one embodiment, a central online transaction processing (OLTP) server 140-1 may be coupled to WAN 124 and receive additions, modifications, and deletions (i.e., update traffic) to database 142-1 from a plurality of sources. OLTP server 140-1 may send updates, including a copy of local database 142-1, to system 100 over WAN 124. OLTP server 140-1 may be optimized for handling update traffic in a variety of formats and protocols, including, for example, the Hypertext Transfer Protocol (HTTP), the Registry Registrar Protocol (RRP), the Extensible Provisioning Protocol (EPP), the Service Management System/800 Mechanized Generic Interface (MGI), and other on-line provisioning protocols. A constellation of read-only LUEs may be deployed in a hub-and-spoke architecture to provide high-speed search capability combined with high-volume incremental updates from the OLTP server 140-1.
In an alternative embodiment, the data may be distributed across multiple OLTP servers 140-1, …, 140-S, each of which is coupled to WAN 124. OLTP servers 140-1, …, 140-S may receive additions, modifications, and deletions (i.e., update traffic) to their respective databases 142-1, …, 142-S (not shown for clarity) from various sources. OLTP servers 140-1, …, 140-S may send updates to system 100 over WAN 124, which may include copies of databases 142-1, …, 142-S, other dynamically created data, and so on. For example, in a geo-location embodiment, OLTP servers 140-1, …, 140-S may receive update traffic from groups of remote sensors. In an alternative embodiment, multiple network computers 120-1, …, 120-N may also receive additions, modifications, and deletions (i.e., update traffic) from various sources over the WAN 124 or LAN 122. In this embodiment, multiple network computers 120-1, …, 120-N may send updates, as well as queries, to the system 100.
In one embodiment, system 100 may include a remote database (e.g., remote database 210). New information or update traffic may be received from OLTP server 140-1, for example, over WAN 124. In one embodiment, the new information may include a modification to at least one existing element within the remote database. The system 100 may create a new remote database element based on new information received over the network and, without restricting search access to the remote database, write a pointer to the new element to the remote database using a single uninterruptible operation, such as a store instruction. In one embodiment, the processor 102-1 may comprise a word length of at least n bytes, the memory 104 may comprise a width of at least n bytes, and the store instruction may write n bytes to a remote memory address located on an n-byte boundary. In another embodiment, after writing the pointer to the remote database, the system 100 may physically delete the existing element corresponding to the new element from the memory 104.
In a DNS resolution embodiment, each PE (e.g., each of the plurality of network computers 120-1, …, 120-N) may combine, or multiplex, several DNS query messages received over a wide area network (e.g., WAN 124) into a single request superpacket and send the request superpacket to the LUE (e.g., system 100) over a local area network (e.g., LAN 122). The LUE may combine, or multiplex, several DNS query replies into a single response superpacket and send the response superpacket to the appropriate PE over the local area network. Typically, the maximum length of a request or response superpacket may be limited by the maximum transmission unit (MTU) of the physical network layer (e.g., gigabit Ethernet). For example, typical DNS query and reply message lengths of less than 100 bytes and 200 bytes, respectively, allow more than 30 queries to be multiplexed into a single request superpacket and more than 15 replies to be multiplexed into a single response superpacket. However, a smaller number of queries (e.g., 20 queries) may be included in a single request superpacket to avoid MTU overflow on the corresponding response superpacket. For larger MTU lengths, the number of multiplexed queries and responses may be increased accordingly.
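The sizing arithmetic above reduces to a simple division of the MTU by the per-message length. The helper below is an illustration only: the `header_overhead` parameter and the sample message sizes are assumptions, since the patent does not specify a superpacket header format.

```python
def max_multiplexed(mtu, message_len, header_overhead=0):
    """How many fixed-size messages fit in one superpacket without
    exceeding the MTU, after reserving `header_overhead` bytes for a
    (hypothetical) superpacket header."""
    return (mtu - header_overhead) // message_len

# With a standard 1500-byte Ethernet MTU and the text's ~100-byte
# queries / ~200-byte replies, roughly twice as many queries as replies
# fit per superpacket; replies being larger is why a request superpacket
# may deliberately carry fewer queries than would otherwise fit.
queries_per_superpacket = max_multiplexed(1500, 100)
replies_per_superpacket = max_multiplexed(1500, 200)
```

The figures in the text (more than 30 queries, more than 15 replies) imply an MTU larger than standard Ethernet, such as gigabit Ethernet jumbo frames; the helper accepts any MTU value.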
Each multitasking PE may include an input thread and an output thread to manage DNS queries and responses, respectively. For example, an input thread may unmarshal DNS query components from incoming DNS query packets received over the wide area network and multiplex several milliseconds' worth of queries into a single request superpacket. The input thread may then send the request superpacket to the LUE over the local area network. Conversely, the output thread may receive a response superpacket from the LUE, demultiplex the replies contained therein, and marshal the fields into valid DNS replies, which may then be sent over the wide area network. In general, as noted above, other large-capacity query-based embodiments may be supported.
In one embodiment, the request superpacket may also include state information about each DNS query, such as the source address, protocol type, and so on. The LUE may include this state information, along with the associated DNS replies, in the response superpacket. Each PE may then construct and return valid DNS reply messages using the information sent back from the LUE. Thus, each PE may advantageously operate as a stateless machine, i.e., it may construct a valid DNS reply entirely from the information contained within the response superpacket. In general, the LUE may return a response superpacket to the PE from which the request superpacket originated; however, other variations are clearly possible.
In an alternative embodiment, each PE may maintain the state information related to each DNS query and include a reference, or handle, to that state information within the request superpacket. The LUE may include the state information reference and the associated DNS reply within the response superpacket. Each PE then uses the state information reference sent from the LUE, together with the state information maintained locally, to construct and return valid DNS reply messages. In this embodiment, the LUE returns each response superpacket to the PE from which the corresponding request superpacket originated.
Fig. 2 is a block diagram of a hub and spoke architecture according to one embodiment of the present invention. In general, the system may include a local database 200 (which may be included within central OLTP hub 140) and one or more remote databases 210 (which may be included within LUE 100) connected to local database 200 via any connection mechanism, such as the internet or LAN 122. These databases may send and receive update data.
Referring to FIG. 3, in an embodiment of the present invention, local database 200 sends F sendfiles (sendfiles) 300-1, … …, 300-F and initializing sendfile 310 to remote database 210 to update remote database 210. Update files may be sent individually or in batches, such as multiple sendfiles 300, one sendfile 300 and one initializing sendfile 310, multiple sendfiles 300 and one initializing sendfile 310, a single sendfile 300, or a single initializing sendfile 310.
In one embodiment of the invention, processor 102-1 may receive sendfile 300, which includes update data, and/or initializing sendfile 310 from local database 200. System 100 may receive sendfile 300 and initializing sendfile 310 at remote database 210 via communication interface 118. Processor 102-1 may then compare the update data within sendfile 300 or initializing sendfile 310 with the corresponding data within remote database 210. If the data differs from the data in remote database 210, processor 102-1 may apply sendfile 300 or initializing sendfile 310 to remote database 210. The remote database 210 will then hold updated data that matches the data within the local database 200.
FIG. 4 illustrates sendfile 300 according to one embodiment of the invention. The fields of sendfile 300 may include information such as a file identifier 400, a file generation time 402, the number of transactions N within the file 404, the total length of the file 406, a checksum or other error-check indicator 408, and transactions 410-1, …, 410-N (including transaction identifiers). These sendfile fields are examples used to illustrate embodiments of the present invention and are not intended to limit its scope. Any useful fields may be included in sendfile 300.
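One way to make the FIG. 4 field list concrete is a small binary encoding. The layout below is purely hypothetical: the patent names the logical fields but specifies no field widths, byte order, or checksum algorithm, so little-endian packing and CRC-32 are assumptions for illustration.

```python
import struct
import zlib

# Hypothetical header: file id, generation time, number of transactions N,
# total file length, checksum over the transaction body (fields 400-408).
HEADER = struct.Struct("<I d I I I")

def build_sendfile(file_id, gen_time, transactions):
    """transactions: list of already-encoded transaction byte strings."""
    body = b"".join(transactions)
    checksum = zlib.crc32(body)
    total_len = HEADER.size + len(body)
    return HEADER.pack(file_id, gen_time, len(transactions),
                       total_len, checksum) + body

def verify_sendfile(blob):
    """Check the length and checksum fields against the received bytes."""
    file_id, gen_time, n, total_len, checksum = HEADER.unpack_from(blob)
    body = blob[HEADER.size:]
    return len(blob) == total_len and zlib.crc32(body) == checksum
```

The total-length and checksum fields let the remote side reject a truncated or corrupted sendfile before applying any of its transactions.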
Sendfile 300 includes the changes made to local database 200 between two points in time. These changes may include, for example, the addition of a new identifier (i.e., an identifier of a data record), the deletion of an existing identifier, the modification of one or more data records associated with an identifier, the renaming of an identifier, a no-operation, and so on. One or more of these changes may occur sequentially and may be grouped into transactions. Sendfile 300 may include unique identifiers for these transactions. The transactions may be recorded in sendfile 300 in the order in which they occurred in local database 200. Further, for transactions that include more than one change, the changes may be recorded within the transaction in the order in which they occurred in local database 200.
In general, transaction identifiers may be assigned to transactions in any order. That is, transaction identifiers need not increase monotonically over time. For example, two sequential transactions may have transaction identifier 10004 followed by 10002. Thus, the order in which transactions occur is determined by their position within the current sendfile 300-F, or by their position within a previous sendfile 300-(F-1). Typically, a transaction may not span adjacent sendfiles 300, so that a remote database update completes fully within the application of a single sendfile. This prevents an update from being interrupted by network latency, which could leave erroneous data on remote database 210.
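The ordering rule above (position, not identifier value, determines application order) can be sketched directly. The representation of a sendfile as a list of `(txn_id, changes)` pairs is an assumption for illustration.

```python
def apply_sendfiles(sendfiles, apply_txn):
    """Apply transactions strictly in recorded order. `sendfiles` is a
    list of sendfiles in generation order; each sendfile is a list of
    (txn_id, changes) pairs in the order recorded from the local DB.
    Returns the transaction ids in the order they were applied."""
    applied = []
    for sendfile in sendfiles:
        for txn_id, changes in sendfile:
            # Note: no sorting by txn_id anywhere -- identifiers are
            # opaque labels; position alone determines the order.
            apply_txn(txn_id, changes)
            applied.append(txn_id)
    return applied
```

This mirrors the text's example: a sendfile recording 10004 before 10002 applies 10004 first, even though its identifier is numerically larger.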
FIG. 5 illustrates initializing sendfile 310 according to one embodiment of the invention. The fields of initializing sendfile 310 may include, for example, a file identifier 500, a file generation time 502, the number of transactions N within the file 504, the total length of the file 506, a checksum or other error-check indicator 508, and a copy of the entire local database (data) 516. Initializing sendfile 310 may also include field 510 and field 512, where field 510 is the file identifier 400 of the last sendfile 300 generated prior to the generation of file 310, and field 512 is the identifier of the last transaction committed to local database 200 prior to the generation of initializing sendfile 310. Data in the local and remote databases 200, 210 may be organized into tables resident in the databases 200, 210, and databases 200 and 210 may support any number of tables. Accordingly, when a database has tables, initializing sendfile 310 may include a field for each table indicating the number of records within that table. For example, a domain name database may include a domain table and a name server table; initializing sendfile 310 would then include a field indicating the number of records in the domain table and a field indicating the number of records in the name server table. Each such field may specify, for example, the name of the table, the key used to index the records in the table, and the number of records in the table. In addition, initializing sendfile 310 may include a field indicating the version (typically 1.0) of initializing sendfile 310. These initializing sendfile fields are examples used to illustrate embodiments of the present invention and are not intended to limit its scope. Any useful fields may be included in initializing sendfile 310.
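The FIG. 5 field list, including the per-table record counts and the two boundary fields 510 and 512, can be illustrated with a simple dictionary shape. The field names and the dict representation are assumptions; the patent only enumerates the logical fields.

```python
def make_initializing_sendfile(file_id, gen_time, last_sendfile_id,
                               last_txn_id, tables):
    """Build an illustrative initializing sendfile.
    `tables` maps table name -> (index_key, list_of_records)."""
    return {
        "version": "1.0",                    # format version field
        "file_id": file_id,                  # field 500
        "generation_time": gen_time,         # field 502
        "last_sendfile_id": last_sendfile_id,  # field 510
        "last_committed_txn": last_txn_id,     # field 512
        # One count entry per table: name, index key, and record count.
        "table_counts": [
            {"table": name, "key": key, "records": len(records)}
            for name, (key, records) in tables.items()
        ],
        # Full copy of the local database (field 516).
        "data": {name: records for name, (key, records) in tables.items()},
    }
```

The per-table counts give the remote side a cheap consistency check: after loading, each table's row count can be compared against the count recorded at generation time.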
Initializing sendfile 310 may include, for example, a read-consistent copy of the entire local database 200, as previously described. Initializing sendfile 310 is consistent with local database 200 as of some point in time t between ts, the time at which generation of initializing sendfile 310 begins, and tf, the time at which generation completes. As such, the only operation that may appear within initializing sendfile 310 is an "add" operation. That is, when initializing sendfile 310 is generated, a copy of the entire local database 200 as of time t is recorded within initializing sendfile 310, so an "add" operation suffices to record each record of local database 200 within initializing sendfile 310. The identifiers may be recorded in initializing sendfile 310 in any order. Alternatively, where a foreign (external) identifier is present, the referenced data record may be recorded before the referencing data record.
The addition of fields 510 and 512 provides initializing sendfile 310 with knowledge of the sendfiles 300 that may be generated and applied to remote database 210 while initializing sendfile 310 is being generated. At the same time, the generation of sendfile 300 and initializing sendfile 310 may be decoupled, so that either may be generated independently of the other. Such a structure and process avoids the inefficiency of suspending sendfile generation and application until initializing sendfile generation completes. By continuing to generate and apply sendfiles 300 while initializing sendfile 310 is generated, as in one embodiment of the present invention, strong error checking of sendfile 300 can be performed, and constraints, such as unique constraints or foreign (external) identifier constraints, may be placed on remote database 210. Setting such constraints protects the integrity of the data within remote database 210 by disallowing transactions that violate the relational model of remote database 210. For example, a unique constraint may prevent the same key from being stored more than once within database 210.
FIG. 6 is a schematic timing diagram of sendfile and initializing sendfile generation according to one embodiment of the invention. In FIG. 6, sendfiles 300 (sf-1 through sf-21) are generated at predetermined time intervals. In an alternative embodiment, sendfiles 300 may be generated at irregular time intervals. Typically, the generation of a sendfile does not use the entire time interval. For example, if a file is generated over a 5-minute interval, the entire 5 minutes are not needed to complete the generation of the file. Furthermore, if changes occur in local database 200 while sendfile 300 is being generated, those changes will be captured in the next sendfile 300. For example, if sendfile sf-4 starts being generated at 12:05:00 and completes at 12:05:02, any changes to local database 200 that occur between 12:05:00 and 12:05:02 are captured within sendfile sf-5 (e.g., 300-5), which covers the time period from 12:05:00 to 12:10:00.
Sendfiles 300-5 and 300-19 are illustrated in FIG. 6. These files show, among other fields, a file identifier 601 (sf-5, sf-19), a file generation time 603, and a transaction identifier 605 (e.g., 10002). Note that the transaction identifiers may not be monotonically ordered; as previously described, a transaction identifier may take an arbitrary value. The transactions themselves, however, are recorded in sendfile 300 in the order in which they occurred in local database 200.
Because initializing sendfile 310 generation and sendfile 300 generation may be decoupled, initializing sendfile 310 may be generated at any time. For example, initializing sendfile 310 may be generated before, during, or after generation of sendfile 300. Fig. 6 illustrates an initialization sendfile 310 generated between fourth and fifth sendfiles (e.g., sf-4 and sf-5).
In one embodiment, initializing sendfile 310 includes, among other fields, a file identifier 610 (isf-1), the file identifier 615 of the last sendfile generated prior to initializing sendfile generation, and the transaction identifier 620 of the last transaction committed prior to initializing sendfile generation. In this example, the last sendfile generated is sendfile sf-4 and the last transaction committed is transaction 10001. Initializing sendfile 310 begins generation (611) at 12:07:29. At the moment initializing sendfile 310 begins to be generated, the first half of the transactions within sendfile 300-5 (sf-5), i.e., transactions 10002, 10005, and 10001, have already been committed to local database 200. Accordingly, initializing sendfile 310 is aware of these transactions and captures them within initializing sendfile 310. However, initializing sendfile 310 is not aware of the subsequent transactions 10003 and 10004, which occur after initializing sendfile generation begins.
While initializing sendfile 310 may be generated, sendfiles beginning with sendfile 300-5 may continue to be generated at regular intervals. These files may be sent to remote database 210 and applied.
Initializing sendfile 310 may complete generation at 1:15:29, between the generation of the 18th and 19th sendfiles 300 (sf-18 and sf-19), without affecting the generation of the 19th sendfile 300-19.
After receiving and loading initializing sendfile 310 on remote database 210, remote database 210 may disregard sendfiles generated prior to the generation of initializing sendfile 310. This is because initializing sendfile 310 includes all changes to local database 200 that were recorded within those previous sendfiles 300. In this example, remote database 210 need not consider the first through fourth sendfiles (sf-1 through sf-4); the changes recorded in sendfiles sf-1 through sf-4 are also recorded in initializing sendfile 310. These previous sendfiles (sf-1 through sf-4) may be deleted or, alternatively, archived. Similarly, remote database 210 need not consider transactions, included within a subsequently generated sendfile 300, that were committed before generation of initializing sendfile 310 began; initializing sendfile 310 already includes those transactions. In this example, remote database 210 need not consider the first three transactions (10002, 10005, and 10001) of sendfile sf-5, because those transactions, recorded in sendfile sf-5, are also recorded in initializing sendfile 310. These already-committed transactions may be deleted or, alternatively, archived.
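The skip rule just described can be sketched as a filter over the pending sendfile queue. The tuple representation and function name are assumptions; the logic follows the sf-4 / transaction-10001 example from the text.

```python
def filter_pending(sendfiles, last_sf_id, last_txn_id):
    """Given sendfiles as (sf_id, [txn_id, ...]) tuples in generation
    order, drop everything already covered by the initializing sendfile:
    whole sendfiles generated at or before `last_sf_id`, plus the leading
    transactions (through `last_txn_id`) of the overlapping sendfile."""
    pending = []
    seen_boundary = False
    for sf_id, txns in sendfiles:
        if not seen_boundary:
            if sf_id == last_sf_id:
                seen_boundary = True  # everything up to here is covered
            continue
        if last_txn_id in txns:
            # Overlapping sendfile: skip the already-committed prefix and
            # keep only the transactions the snapshot did not capture.
            txns = txns[txns.index(last_txn_id) + 1:]
        pending.append((sf_id, txns))
        last_txn_id = None  # only the first pending sendfile can overlap
    return pending
```

Applied to the FIG. 6 example, sf-1 through sf-4 are dropped entirely, and sf-5 is trimmed to its last two transactions (10003 and 10004).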
FIG. 7 is a flow diagram of one embodiment of the invention in which an update file for a local database may be generated. The system may generate (705) a plurality of periodic updates based on incremental changes to the local database. Each update may include one or more transactions. The system may then send (710) these periodic updates to the remote database. While generating the periodic updates, the system may generate (715) an initialization update at a start time. The initialization update may include a version of the entire local database. The system may determine (720) the last periodic update generated before the start time and the last transaction committed before the start time. The system may then send (725) the initialization update to the remote database. The initialization update may include an update identifier associated with the last periodic update generated and a transaction identifier associated with the last transaction committed.
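The flow above can be illustrated with a small sketch. This is an illustrative model only, not the patented implementation: the `Sendfile` and `InitSendfile` structures, field names, and the in-memory "database" dict are assumptions introduced here to show how the last-update and last-transaction identifiers travel with the initialization update.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sendfile:
    file_id: str             # e.g. "sf-5"
    transactions: List[int]  # committed transaction identifiers, in commit order

@dataclass
class InitSendfile:
    file_id: str             # e.g. "isf-1" (file identifier 610)
    last_sendfile_id: str    # last periodic update generated before start time (615)
    last_txn_id: int         # last transaction committed before start time (620)
    snapshot: dict           # version of the entire local database

def generate_init_sendfile(local_db: dict, sendfiles: List[Sendfile]) -> InitSendfile:
    """Steps 715-725: snapshot the local database and record which periodic
    update and which transaction the snapshot already covers, so the remote
    side can disregard everything up to that point."""
    last_sf = sendfiles[-1]
    return InitSendfile(
        file_id="isf-1",
        last_sendfile_id=last_sf.file_id,
        last_txn_id=last_sf.transactions[-1],
        snapshot=dict(local_db),  # copy of the entire local database
    )

# Mirroring the running example: sf-4 is the last sendfile generated and
# 10001 the last transaction committed before the start time.
sendfiles = [Sendfile("sf-3", [9998, 9999]), Sendfile("sf-4", [10000, 10001])]
init = generate_init_sendfile({"d1": "n1"}, sendfiles)
print(init.last_sendfile_id, init.last_txn_id)  # sf-4 10001
```

The key design point is that the initialization update is self-describing: the receiver needs no side channel to learn where the snapshot's coverage ends.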
For example, OLTP140 may generate (705) sendfiles 300 at some regular or irregular interval. OLTP140 may send (710) sendfiles 300 to remote database 210. While sendfiles 300 are generated, OLTP140 may begin generating (715) initializing sendfile 310 at start time 611. Initializing sendfile 310 may include a copy of the entire local database 200. OLTP140 may then determine (720) the last sendfile 300 generated before start time 611 for generating initializing sendfile 310 and the last transaction committed before start time 611. OLTP140 may then send (725) initializing sendfile 310 to remote database 210. Initializing sendfile 310 may include sendfile identifier 615 associated with the last sendfile 300 generated and transaction identifier 620 associated with the last transaction committed.
FIG. 8 is a flow diagram of an embodiment of the invention in which a remote database may receive an update file from a local database. The system may receive (805) a plurality of periodic updates. Each update may include one or more transactions. The periodic updates may be received individually or in batches. The system may receive (810) an initialization update at a time. The initialization update may include a version of the entire local database. The system may read (815) a last periodic update identifier and a last transaction identifier from the initialization update. The system can then determine (820) a last periodic update associated with the update identifier and a last transaction associated with the transaction identifier. The periodic update and the transaction may be the last to be generated and committed, respectively, before the generation of the initialization update. The system may apply (825) the uncommitted transactions remaining within the respective periodic update to the remote database. The system may then apply (830) the remaining periodic updates generated after the last periodic update to the remote database. Applying the initialization update advantageously compensates for any previously lost periodic updates.
For example, LUE100 may receive (805) sendfiles 300 at some regular or irregular time interval. Sendfiles 300 may be received individually or in batches. LUE100 may receive (810) initializing sendfile 310 at a certain time. LUE100 may read (815) sendfile identifier 615 and transaction identifier 620 from initializing sendfile 310. LUE100 may then determine (820) the sendfile 300 associated with sendfile identifier 615 and the transaction 605 associated with transaction identifier 620. That sendfile and transaction may be the last sendfile generated and the last transaction committed, respectively, before initializing sendfile 310 was generated. LUE100 may apply (825) the remaining uncommitted transactions 605 within the corresponding sendfile 300 to remote database 210. LUE100 may then apply (830) the remaining sendfiles 300 after the last sendfile sf-4 to remote database 210.
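A minimal sketch of the receive-side logic (steps 815-830), under the same illustrative assumptions as before: simple `Sendfile`/`InitSendfile` dataclasses and dict databases are stand-ins introduced here, and the sketch assumes the pending list still contains the boundary sendfile named by identifier 615.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sendfile:
    file_id: str
    transactions: List[int]  # in commit order, not necessarily numeric order

@dataclass
class InitSendfile:
    file_id: str
    last_sendfile_id: str  # identifier 615
    last_txn_id: int       # identifier 620
    snapshot: dict

def apply_init_update(remote_db: dict, init: InitSendfile,
                      pending: List[Sendfile]) -> List[int]:
    """Load the snapshot, disregard sendfiles fully covered by it, and within
    the next ("boundary") sendfile skip transactions committed before the
    initialization start time. Returns the transactions actually applied."""
    remote_db.clear()
    remote_db.update(init.snapshot)  # remote becomes read-consistent with snapshot
    applied = []
    covered = True  # still inside sf-1 .. last_sendfile_id
    for sf in pending:
        if covered:
            if sf.file_id == init.last_sendfile_id:
                covered = False  # everything up to here is in the snapshot
            continue
        txns = sf.transactions
        if init.last_txn_id in txns:
            # boundary sendfile (sf-5 in the example): drop the already
            # committed leading transactions, keep the rest
            txns = txns[txns.index(init.last_txn_id) + 1:]
        applied.extend(txns)  # stand-in for actually applying each transaction
    return applied

init = InitSendfile("isf-1", "sf-4", 10001, {"d1": "n1"})
pending = [Sendfile("sf-4", [10000, 10001]),
           Sendfile("sf-5", [10002, 10005, 10001, 10003, 10004]),
           Sendfile("sf-6", [10006])]
remote = {}
applied = apply_init_update(remote, init, pending)
print(applied)  # [10003, 10004, 10006]
```

Note how sf-4 is skipped entirely while sf-5 is split at transaction 10001, matching the worked example in the text.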
In an alternative embodiment, for example, LUE100 may discard or archive sendfiles 300 that have not been applied to remote database 210 and/or that have generation time 603 prior to initializing sendfile generation time 611. Sendfile 300 that is discarded or archived may include sendfile sf-4 associated with sendfile identifier 615.
It can be understood that: after initializing sendfile 310 is applied, any subsequent sendfiles 300 that may have been applied to remote database 210 may be lost because remote database 210 may become read consistent with initializing sendfile 310. Accordingly, these subsequent sendfiles 300 may be reapplied.
In one embodiment of the present invention, sendfile 300 and initialization sendfile 310 may be transmitted from local database 200 to remote database 210 without acknowledgement, i.e., without an ACK/NACK signal to indicate that the files were successfully received. This advantageously reduces the overhead that may be generated by the ACK/NACK signal.
In an alternative embodiment, an ACK/NACK signal may be sent from the remote database 210 to indicate successful receipt of the files. In this embodiment, the ACK/NACK signal may be transmitted even within a system whose communication is unreliable.
FIG. 9 is a flow diagram of another embodiment of the present invention in which the system may validate update files sent from a local database and received at a remote database. Here, the system may send (905) a plurality of periodic updates. Each update may include one or more transactions. The periodic updates may be sent individually or in batches. The system may send (910) an initialization update at a time and apply the initialization update to the remote database. The initialization update may include a version of the entire local database. The system may first identify (915) differences between the local and remote databases by comparing the databases. The system may determine (920) whether the differences are valid or erroneous. The system may then apply (925) these periodic updates to the remote database, according to one embodiment of the invention. This embodiment advantageously helps ensure that no errors are introduced into the remote database as updates are received from the local database.
For example, OLTP140 may send (905) sendfiles 300 to remote database 210 at some regular or irregular interval. Sendfiles 300 may be sent individually or in batches. OLTP140 may send (910) initializing sendfile 310 to LUE100 at some time, and LUE100 may apply initializing sendfile 310 to remote database 210. OLTP140 may compare local database 200 with remote database 210 and identify (915) differences therebetween. OLTP140 may then determine (920) whether the differences are valid or erroneous. OLTP140 may then notify LUE100 that sendfile 300 is to be applied (925) to remote database 210, according to one embodiment of the present invention. LUE100 may then apply sendfile 300 to remote database 210.
In an alternative embodiment, the system may apply sendfiles and initialize sendfiles before identifying and verifying discrepancies. Alternatively, the system may apply sendfiles and initialize sendfiles after identifying and verifying the differences.
It can be understood that: before transmitted data is applied at the destination, a validation process may be performed on any data transmitted from a source to a destination over a network.
FIG. 10A is a flow diagram of an embodiment of sendfile and initializing sendfile validation according to the present invention. After sending a plurality of periodic updates and an initialization update to the remote database, the system may validate the updates. Each update may include one or more transactions performed on the local database. Each transaction may include one or more events. An event is a database action, such as an addition, modification, or deletion, performed on data within the database.
First, the system may compare (1000) records in the remote database with corresponding records in the local database. The system may generate (1005) exceptions that describe the differences between the remote and local database records, where one exception may be generated for each difference. A difference may be any difference in at least one data value between different versions of the same record. For example, the data record within the local database may be (12345, xyz.com, 123.234.345), while the corresponding data record within the remote database is (12345, abc.com, 123.234.345). Thus, there is a difference in the second data value of the record, and an embodiment of the present invention may generate an exception describing this difference. Exceptions may describe differences by simply indicating that a difference exists, by specifying the location of the difference, by describing how the two data values differ, etc. A data record in the local database corresponds to a data record in the remote database (and vice versa) if the two records are intended to contain the same data.
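The comparison in steps 1000-1005 can be sketched as a record-by-record diff that classifies each difference into the three exception types discussed later in the text (remote-only, local-only, present in both but different). The dict-of-records representation and the list names are assumptions made for illustration.

```python
def generate_exceptions(local: dict, remote: dict) -> dict:
    """Compare corresponding records and emit one exception per difference.
    list1: record in remote only; list2: record in local only;
    list3: record in both databases but with differing data values."""
    exceptions = {"list1": [], "list2": [], "list3": []}
    for rec_id in remote.keys() - local.keys():
        exceptions["list1"].append(rec_id)   # deleted locally, still remote
    for rec_id in local.keys() - remote.keys():
        exceptions["list2"].append(rec_id)   # added locally, not yet remote
    for rec_id in local.keys() & remote.keys():
        if local[rec_id] != remote[rec_id]:
            exceptions["list3"].append(rec_id)  # modified locally
    return exceptions

# The worked example: the second data value of record d10 differs.
local  = {"d10": (12345, "xyz.com", "123.234.345")}
remote = {"d10": (12345, "abc.com", "123.234.345")}
print(generate_exceptions(local, remote))
# {'list1': [], 'list2': [], 'list3': ['d10']}
```

Here the exception identifier is simply the record identifier (d10), mirroring the association described in the text.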
It can be understood that: a discrepancy may refer to a discrepancy between one or more data values within a record or an entire record.
The system may associate (1010) an exception identifier with each exception, where the exception identifier may be related to an identifier of the record. For example, the data record (12345, xyz.com, 123.234.345) may have an identifier d10. Thus, the exception identifier may also be d10. Each exception may be classified as belonging to one of a plurality of exception (or difference) types. An exception list may be formed to include the exception types and the exception identifiers of the exceptions divided into those types. The exception list and the different exception types will be described in detail later. The system may also associate (1015) an event identifier with each event in the update, where the event identifier may be related to an identifier of the record. For example, the data record (12345, xyz.com, 123.234.345) may include identifier d10. Thus, the event identifier may also be d10. Each event in the update may be discovered from an event history. The event history may be a list of events performed on records within the local database over a period of time, or the like. The event history will be described in detail later.
The system may then determine (1020) whether the recorded update is valid. FIG. 10B is a flow diagram of an embodiment of this validity determination, which may be performed as follows. Each event may be compared (1022) with each exception. If each exception is justified (1024) by an event, the update may be designated (1026) as valid and may be applied to the remote database. Conversely, if any exception is not justified (1024) by an event, the update may be designated (1028) as invalid and the exception may be logged as an error. An exception may be justified when the event identifier corresponds to the exception identifier and the related events correspond to a valid sequence of events for the exception type. The valid sequences will be described in detail later. If an exception is justified, the system may remove the exception identifier from the exception list. A justified exception may indicate that the difference is a valid difference, e.g., the remote database has not yet received the update but will actually match the local database once the update is received.
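The determination of FIG. 10B can be sketched as a loop that tries to justify every exception against the event history. This sketch assumes the exception list is a dict keyed by list name, and takes the per-type valid-sequence rules as caller-supplied predicates, since the text defines those rules separately; the names here are illustrative.

```python
def determine_valid(exceptions: dict, event_history: dict, valid_seq: dict):
    """Steps 1022-1028: an update is valid only if every exception is
    justified by a valid event sequence; justified exceptions are removed
    from the exception list, unjustified ones are logged as errors."""
    errors = []
    for list_name, ids in exceptions.items():
        for exc_id in list(ids):  # iterate over a copy while mutating ids
            events = event_history.get(exc_id, [])
            if valid_seq[list_name](events):
                ids.remove(exc_id)       # justified: drop from exception list
            else:
                errors.append(exc_id)    # would be logged to an error file
    return (not errors), errors

# "(add)" rule for list 2 exceptions: an "add" event must come first.
valid_seq = {"list2": lambda ev: ev[:1] == ["add"]}
exceptions = {"list2": ["d1"]}
history = {"d1": ["add", "mod"]}
print(determine_valid(exceptions, history, valid_seq))  # (True, [])
```

An update whose exception list empties out this way is designated valid (1026); any leftover identifiers mark it invalid (1028).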
During validation, the system may identify potential errors or faults in periodic and initialization updates. The system can help ensure that these updates are structurally and semantically correct, that they can be applied successfully without generating exceptions or otherwise aborting, that comparisons between the local and remote databases accurately detect errors, and that live data is not accidentally deleted. The system may thereby ensure that periodic and initialization updates can be successfully applied to the remote database.
Many errors can advantageously be discovered by attempting to apply updates to the remote database during the validation process. For example, data-centric errors, warnings that an object already exists within the remote database, or warnings of an external identifier violation may be discovered during an application attempt. Thus, after performing the validation process of one embodiment of the present invention, the system may attempt to apply these updates to the remote database. The attempt may fail, which may indicate that there are additional errors in the update that invalidate it; in that case, no further attempts are made to apply the update to the remote database.
In an alternative embodiment, the system may attempt to apply at least one update before performing the validation. If the attempt fails, the validation may be skipped and the update discarded. On the other hand, if the attempt succeeds, validation may be performed and a record of which updates are valid and which are invalid may be maintained.
In an exemplary embodiment, OLTP140 may validate sendfile 300 and initializing sendfile 310 to ensure that sendfile 300 and initializing sendfile 310 may be successfully applied to remote database 210.
In alternative embodiments, the network computer 121, the LUE100, or any combination of these system components may perform the validation.
Referring to FIG. 10A, OLTP140 may compare local database 200 and remote database 210 to determine any exceptions (or differences) therebetween. Exceptions can include three types: data may be in remote database 210 and not in local database 200; data may be in local database 200 but not in remote database 210; alternatively, the corresponding data may be in the local database 200 and the remote database 210, but the data may be different. Of course, the corresponding data may be in local database 200 and remote database 210, and the data may be the same, in which case the data may be considered valid and thus no further processing by OLTP140 is required.
It can be understood that: a discrepancy may refer to one or more data values within a record or the entire record.
Accordingly, OLTP140 may compare (1000) corresponding records within local database 200 and remote database 210. OLTP140 may generate (1005) an exception describing the difference between the records in remote database 210 and the records in local database 200, where an exception may be generated for each difference. OLTP140 may associate (1010) an exception identifier with each exception, where the exception identifier may be associated with an identifier of a record. An exception list may be formed to include an exception type and an exception identifier for an exception belonging to the exception type. In one embodiment, an exception may be designated as a "list 1" exception (or difference) if the exception is of a first exception type, a "list 2" exception if the exception is of a second exception type, or a "list 3" exception if the exception is of a third exception type. Fig. 11 illustrates an exemplary exception list 1140.
It can be understood that: the presence of an exception identifier on the exception list may not imply that sendfile 300 or initializing sendfile 310 is bad, because all three types of exceptions may reasonably occur, for example, due to a time delay between a change to local database 200 and the update being applied to remote database 210. Such delays may be due to, for example, network congestion. Thus, validation may provide a mechanism to separate legitimate differences from erroneous data.
For initializing sendfile 310, OLTP140 may compare local database 200 and remote database 210 by performing a bi-directional full-table scan of both databases 200 and 210. That is, all data in the local database 200 may be compared with all data in the remote database 210. Subsequently, all data in the remote database 210 may be compared with all data in the local database 200. This advantageously provides an exhaustive comparison of databases 200 and 210 to find all differences.
For sendfile 300, OLTP140 may only compare data records in local database 200 and remote database 210 that are recorded in sendfile 300. This advantageously provides a fast query to find target differences.
Alternatively, random sampling of data within initializing sendfile 310 and/or sendfile 300 may be performed. OLTP140 may then compare the randomly sampled data within local database 200 and remote database 210.
Exception list 1140 may correspond to lost events, such as additions (add), modifications (mod), and deletions (del) to local database 200, that are inconsistent with remote database 210. Therefore, to identify these candidate events, OLTP140 may examine the most recent transactions committed to local database 200. Typically, for each transaction committed, an entry may be created in a record table stored in local database 200. This entry may include an identifier of the record being changed, the event that changed the record (e.g., an add, mod, and/or del event), a record sequence number indicating the order of the transaction, and so on.
An exemplary record table 1100 is illustrated in FIG. 11. In this example, sendfile 300 includes transactions 1108-1114 illustrated within record table 1100. The first entry 1101 indicates that data (name servers) n1 and n2 were added to the data (domain) associated with identifier d1 within the first transaction 1108. Thus, the identifier is d1, the event is "add", and the record sequence number is 11526. Similarly, the second entry 1102 indicates that data n8 and n9 were added to the data associated with identifier d2 within the second transaction 1109. The third entry 1103 indicates that the data associated with identifier d3 was deleted within the third transaction 1110. The fourth entry 1104 indicates that the data associated with identifier d1 was modified in the fourth transaction 1111 to add data n5. For the fifth transaction 1112, the fifth entry 1105 indicates that data n6 and n7 were added to the data associated with identifier d3. For the sixth transaction 1113, the sixth entry 1106 indicates that the data associated with identifier d4 was modified to remove data n3. The Rth entry 1107, for the Rth transaction 1114, indicates the deletion of the data associated with identifier d5.
Accordingly, as shown in FIG. 10A, OLTP140 may associate (1015) an event identifier with each event in the update, where the event identifier may be related to an identifier of the record. Each event in the update may be discovered from the event history. An event history, indexed and sorted by event identifier, may be generated from the record table 1100. An exemplary event history 1120 is illustrated in FIG. 11. Here, the first and fourth entries 1101 and 1104 in the record table 1100 indicate changes to the data associated with identifier d1. Thus, event history 1120 includes a d1 identifier 1121 and the two events 1126 performed on the data associated with identifier d1, "add" followed by "mod". The second entry 1102 represents an addition to the data associated with identifier d2. Thus, event history 1120 includes the d2 identifier 1122 and an "add" event 1127. Event history 1120 includes a d3 identifier 1123 and two events 1128, "del" followed by "add", representing the third and fifth entries 1103 and 1105, which include changes to the data associated with identifier d3. The sixth entry 1106 represents a change to the data associated with identifier d4. Thus, event history 1120 includes the d4 identifier 1124 and a "mod" event 1129. The Rth entry 1107 represents the deletion of the data associated with identifier d5, and event history 1120 accordingly includes a d5 identifier 1125 and a "del" event 1130. Identifiers 1121-1125 are sorted from d1 through d5.
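Building the event history from the record table amounts to grouping entries by record identifier in sequence order. In this sketch the tuple layout and all sequence numbers after 11526 are hypothetical (only 11526 is given in the text), and the "del then add" reading of d3's events follows the FIG. 11 discussion of the "list 3" valid sequences.

```python
def build_event_history(record_table):
    """Index and sort record-table entries (identifier, event, sequence
    number) into an event history keyed by record identifier."""
    history = {}
    for rec_id, event, seq in sorted(record_table, key=lambda e: e[2]):
        history.setdefault(rec_id, []).append(event)
    return dict(sorted(history.items()))  # sorted d1 through d5

# Entries 1101-1107 of record table 1100 (sequence numbers after 11526 assumed)
record_table = [
    ("d1", "add", 11526), ("d2", "add", 11527), ("d3", "del", 11528),
    ("d1", "mod", 11529), ("d3", "add", 11530), ("d4", "mod", 11531),
    ("d5", "del", 11532),
]
print(build_event_history(record_table))
# {'d1': ['add', 'mod'], 'd2': ['add'], 'd3': ['del', 'add'],
#  'd4': ['mod'], 'd5': ['del']}
```

The resulting mapping matches event history 1120: each identifier's events appear in the order the transactions changed the record.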
Referring again to FIG. 10A, OLTP140 may determine (1020) whether the update is valid. This determination may be performed, for example, according to the embodiment of FIG. 10B. First, OLTP140 may compare (1022) event identifiers 1121-1125 with the exception identifiers in exception list 1140 to determine which identifiers correspond. For example, in FIG. 11, the d1 event identifier 1121 in event history 1120 corresponds to the d1 exception identifier in "list 2" of exception list 1140. After finding a corresponding event and exception, OLTP140 may determine (1024) whether the event justifies the exception. The justification may be made as follows. For each event identifier 1121-1125 within event history 1120, OLTP140 may determine whether the sequence of events 1126-1130 within event history 1120 is valid. This may be done, for example, by examining exception list 1140 to determine which exception type each exception identifier belongs to, determining what a valid sequence of events for that exception type should be, and then searching event history 1120 for the corresponding event identifier and that identifier's sequence of events. The valid sequence for each exception type will be described in detail below. If an event sequence 1126-1130 in event history 1120 matches a valid sequence, then the corresponding event identifier 1121-1125 has a valid sequence. Thus, the exception associated with the exception identifier may be justified. Moreover, the corresponding transaction 1108-1114 that includes the event identifier is valid and not erroneous. In this case, OLTP140 may delete the exception identifier from exception list 1140.
The valid sequence of events for the "list 1" exception type may be (mod)*(del). This sequence may include zero or more "mod" events followed by a "del" event, which may be followed by any events. The "list 1" exception type may correspond to data that exists within remote database 210 but not within local database 200. In this case, the data may have been recently deleted from local database 200 and the transaction not yet written to a sendfile 300. Thus, that sendfile 300 may not yet have been applied to remote database 210, and the data may still exist within remote database 210. This may be considered a legitimate difference because at some point sendfile 300 is expected to be generated and applied to remote database 210. Thus, if any such sequence 1126-1130 is found in event history 1120 for an exception identifier within list 1 of exception list 1140, the corresponding transaction may be considered valid.
For example, in FIG. 11, the d5 identifier 1125 and its associated data have been deleted from local database 200, as indicated by the Rth entry 1107 of record table 1100 and indexed in event history 1120. At the time of validation, d5 has been deleted from local database 200 but has not yet been deleted from remote database 210. Exception list 1140 therefore includes identifier d5 in list 1. From event history 1120, the event 1130 associated with the d5 identifier 1125 is "del". OLTP140 may compare the valid sequence of the "list 1" exception type, i.e., (mod)*(del), to the d5 events 1130 within event history 1120. Because the "list 1" valid sequence matches event 1130, the delete transaction 1114 associated with identifier d5 may be considered legitimate and not erroneous. Accordingly, identifier d5 may be deleted from exception list 1140.
The valid event sequence for the "list 2" exception type may be (add). This sequence may include an "add" event as the first event, followed by any events. The "list 2" exception type may correspond to data that is present in local database 200 but not present in remote database 210. In this case, the data may have been recently added to local database 200 and the transaction not yet written to a sendfile 300. Thus, that sendfile 300 may not yet have been applied to remote database 210, and the data may not yet exist in remote database 210. This may also be considered a legitimate difference because sendfile 300 is expected to be generated and applied to remote database 210 at some point. Thus, if any such sequence 1126-1130 is found in event history 1120 for an exception identifier within list 2 of exception list 1140, the corresponding transaction may be considered valid.
Referring again to FIG. 11, the d1 and d2 identifiers 1121 and 1122 may be associated with, for example, data initially added to local database 200. Because their event sequences 1126 and 1127 begin with an "add" event, the d1 and d2 identifiers 1121 and 1122 match the valid sequence for the "list 2" exception type. Thus, transactions 1108 and 1109 that include these identifiers may be considered valid, and identifiers d1 and d2 may be removed from exception list 1140. It should be noted that: the d3 identifier 1123 also includes an "add" event in its sequence 1128. However, the "add" event is not the first event in the sequence 1128, so the sequence 1128 does not qualify as a "list 2" type. Furthermore, because d3 is not specified within list 2 of exception list 1140, OLTP140 may not check d3 against the list 2 valid sequence.
A valid sequence of events for the "list 3" exception type may be (del)(add) or (mod). These sequences may include a "del" event followed by an "add" event (followed by any events), or a "mod" event followed by any events. The "list 3" exception type may correspond to data that is present in both databases 200 and 210 but differs between them. In this case, the data may have been recently modified within local database 200 and the transaction not yet written to a sendfile 300. Thus, that sendfile 300 may not yet have been applied to remote database 210, and the data associated with the identifier may not yet have been modified within remote database 210. Again, this may be considered a legitimate difference, as sendfile 300 is expected to be generated and applied to remote database 210 at some point. Thus, if any such sequence 1126-1130 is found within event history 1120 for an exception identifier within list 3 of exception list 1140, the corresponding transaction may be considered valid.
For example, in FIG. 11, the d3 and d4 identifiers 1123 and 1124 may relate to data modified within local database 200. In the case of the d3 identifier 1123, the d3 data is first deleted and then added with new data, so that its sequence of events 1128 includes "del" followed by "add". In the case of the d4 identifier 1124, the d4 data is modified to remove data n3, so that its sequence of events 1129 includes "mod". Because these event sequences 1128 and 1129 match the valid sequences for the "list 3" exception type, their respective transactions 1110, 1112, and 1113 may be considered valid and the identifiers d3 and d4 removed from exception list 1140.
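The three valid sequences read naturally as patterns over event strings. The regular-expression encoding below is an assumption introduced here for illustration (the patent describes the sequences in prose, not as regexes), with events joined by spaces.

```python
import re

# Valid event sequences per exception type, over space-separated event names:
#   list 1: (mod)*(del)        zero or more "mod" events, then "del", then anything
#   list 2: (add)              an "add" event first, then anything
#   list 3: (del)(add) | (mod) "del" then "add", or "mod" first, then anything
VALID_SEQ = {
    "list1": re.compile(r"^(mod )*del( |$)"),
    "list2": re.compile(r"^add( |$)"),
    "list3": re.compile(r"^(del add|mod)( |$)"),
}

def justified(list_name: str, events: list) -> bool:
    """True if the event sequence justifies an exception of the given type."""
    return bool(VALID_SEQ[list_name].match(" ".join(events)))

# The FIG. 11 examples:
print(justified("list1", ["del"]))         # True  -> d5 removed from list 1
print(justified("list2", ["add", "mod"]))  # True  -> d1 removed from list 2
print(justified("list3", ["del", "add"]))  # True  -> d3 removed from list 3
print(justified("list2", ["del", "add"]))  # False -> "add" is not first (cf. d3)
```

Anchoring the patterns at the start of the sequence captures the "followed by any event" wording: trailing events never disqualify a match.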
Referring to fig. 10B, if all exceptions represented by their identifiers in exception list 1140 have been justified (1024) by an event, i.e., if exception list 1140 is empty, OLTP140 may designate (1026) sendfile 300 or initializing sendfile 310 as valid and notify LUE100 to apply sendfile 300 or initializing sendfile 310 to remote database 210. LUE100 may then apply sendfile 300 or initializing sendfile 310 to remote database 210.
Conversely, if all exceptions have not been justified (1024) by an event, i.e., if exception list 1140 is not empty, the remaining exceptions may represent errors within sendfile 300 or initializing sendfile 310. Accordingly, OLTP140 may designate (1028) sendfile 300 or initializing sendfile 310 as invalid and record the errors in an error file.
In an alternative embodiment, for example, if sendfile 300 or initializing sendfile 310 is designated as invalid, OLTP140 may repeat the validation process on invalid sendfile 300 or initializing sendfile 310 after a predetermined period of time to ensure that the difference is indeed erroneous. This predetermined delay allows the network more time to send any slow sendfiles 300 and 310 and allows databases 200 and 210 more time to become read consistent.
In one embodiment of the invention, data within remote database 210 may "lag" data within local database 200 for a long period of time. Thus, to compare databases 200 and 210 and detect errors, databases 200 and 210 should be read-consistent at the same point in time, so that they are exact copies of each other. In general, remote database 210 may be rolled forward to local database 200, whereby the data within remote database 210 is made substantially identical to the data within local database 200.
Thus, to speed up validation, any currently generated initializing sendfile 310 and subsequent sendfiles 300 may be applied to remote database 210 before validation is initiated, which can significantly reduce the number of differences. This batch processing of sendfiles 300 and 310 may be referred to as "chunking". The first and last sendfiles within a block may be referred to as the low and high water marks, respectively. The first block (referred to as an initial block) may include initializing sendfile 310. All following blocks (referred to as terminal blocks) may include only sendfiles 300.
Chunking may provide group validation rather than individual validation. Thus, if an error is detected within a block, the entire block may be designated as invalid, rather than just the sendfile 300 or initializing sendfile 310 in which the error occurred.
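The chunking behavior can be sketched as follows. The `chunk_size` parameter, the string file names, and the injected `validate` callable are assumptions for illustration; the text does not specify how block boundaries are chosen.

```python
def validate_in_chunks(sendfiles, validate, chunk_size=5):
    """Group sendfiles into blocks ("chunks") and validate each block as a
    group: if any file in a block fails validation, the whole block is
    designated invalid, not just the failing file."""
    results = []
    for i in range(0, len(sendfiles), chunk_size):
        block = sendfiles[i:i + chunk_size]
        # block[0] is the low water mark, block[-1] the high water mark
        status = "valid" if all(validate(sf) for sf in block) else "invalid"
        results.append((status, block))
    return results

# Initial block contains the initializing sendfile; a failure in sf-6
# invalidates its entire block, while the terminal block stays valid.
files = ["isf-1", "sf-5", "sf-6", "sf-7", "sf-8", "sf-9", "sf-10"]
res = validate_in_chunks(files, validate=lambda sf: sf != "sf-6", chunk_size=5)
print([status for status, _ in res])  # ['invalid', 'valid']
```

The trade-off shown here is the one the text describes: fewer validation passes at the cost of coarser invalidation granularity.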
The mechanisms and methods of embodiments of the present invention may be implemented using a general purpose microprocessor programmed according to the teachings of the present invention. Thus, embodiments of the invention also include machine-readable media comprising instructions, which may be used to program a processor to perform a method according to an embodiment of the invention. The medium may include, but is not limited to, any type of disk including floppy disks, optical disks, and CD-ROMs.
Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.
Claims (16)
1. A method for verifying an update to a record in a remote database over a network, the update including at least one event, comprising:
comparing the records in the remote database with corresponding records in the local database;
for each difference, generating an exception describing the difference between the remote database record and the local database record;
associating an exception identifier with each exception;
associating an event identifier with each event in the update; and
determining whether an update is valid by comparing each event corresponding to the update with each exception corresponding to the update.
2. The method of claim 1, wherein the update is validated if each exception corresponding to the update is justified by an event corresponding to the update.
3. The method of claim 2, wherein the types of exceptions include:
a first exception type, in which the record is in the remote database, but not in the local database;
a second exception type in which the record is in the local database and not in the remote database; and
a third exception type, wherein the records are in both the local database and the remote database, and the data records in the local database are different from the data records in the remote database.
4. The method of claim 3, wherein a particular exception is evidenced by a particular event if:
the specific event is a deletion of a record from the local database, and the specific exception is a first exception type;
the specific event is an addition of a record to the local database, and the specific exception is a second exception type;
the specific event is a modification of a record within the local database, and the specific exception is a third exception type; or
The specific event is a deletion followed by a record being added to the local database, and the specific exception is a third exception type.
5. The method of claim 1, further comprising:
repeating the method for verifying the update a given number of times if the update is determined to be invalid.
6. The method of claim 1, wherein comparing comprises:
comparing the entire local database with the entire remote database.
7. A system for verifying an update to a record in a remote database over a network, wherein the update includes at least one event, the system comprising:
at least one processor coupled to the network, the at least one processor adapted to implement a method for verifying the update;
a memory coupled to the at least one processor;
a database coupled to the at least one processor; and
a first subsystem adapted to:
compare records in the remote database with corresponding records in a local database;
for each difference, generate an exception describing the difference between the remote database record and the local database record;
associate an exception identifier with each exception;
associate an event identifier with each event in the update; and
determine whether the update is valid by comparing each event corresponding to the update with each exception corresponding to the update.
8. The system of claim 7, wherein the update is valid if each exception corresponding to the update is justified by an event corresponding to the update.
9. The system of claim 8, wherein the types of exceptions include:
a first exception type, in which the record is in the remote database but not in the local database;
a second exception type, in which the record is in the local database but not in the remote database; and
a third exception type, in which the record is in both the local database and the remote database, and the record in the local database differs from the record in the remote database.
10. The system of claim 9, wherein a particular exception is justified by a particular event if:
the particular event is a deletion of a record from the local database, and the particular exception is the first exception type;
the particular event is an addition of a record to the local database, and the particular exception is the second exception type;
the particular event is a modification of a record within the local database, and the particular exception is the third exception type; or
the particular event is a deletion followed by an addition of a record to the local database, and the particular exception is the third exception type.
11. The system of claim 7, wherein, if the update is determined to be invalid, the at least one processor repeats the method for verifying the update a given number of times.
12. The system of claim 8, wherein the at least one processor compares the entire local database to the entire remote database.
13. The system of claim 7, further comprising:
at least one remote processor coupled to the network;
a remote memory coupled to the remote processor;
a remote database coupled to the at least one remote processor; and
a second subsystem adapted to:
create a new element based on new information received over the network from the database; and
write a pointer to the new element to the remote database using a single uninterruptible operation, without restricting search access to the remote database.
14. The system of claim 13, wherein the second subsystem is further adapted to:
physically delete an existing element after the pointer is written to the remote database.
15. The system of claim 13, wherein the single uninterruptible operation is a store instruction.
16. The system of claim 15, wherein the at least one remote processor has a word length of at least n bytes, the remote memory has a width of at least n bytes, and the store instruction writes n bytes to a remote memory address located on an n-byte boundary.
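The exception/event validation recited in claims 1–4 can be sketched in code. This is a minimal illustration, not the patented implementation: the record stores (plain dicts keyed by record key), the exception-type constants, and the event action names are all assumptions introduced here.

```python
# Sketch of the claimed validation: diff the remote database against the
# local one into typed exceptions, then check that every exception is
# justified by an event in the update. All names here are illustrative.
from dataclasses import dataclass

TYPE_1 = 1  # record in the remote database but not in the local database
TYPE_2 = 2  # record in the local database but not in the remote database
TYPE_3 = 3  # record in both databases, but the data differs

@dataclass(frozen=True)
class DbException:
    exc_id: int      # the exception identifier of claim 1
    key: str
    exc_type: int

@dataclass(frozen=True)
class Event:
    event_id: int    # the event identifier of claim 1
    key: str
    action: str      # "delete", "add", or "modify"

def generate_exceptions(local: dict, remote: dict) -> list:
    """Compare corresponding records and emit one exception per difference."""
    exceptions, next_id = [], 1
    for key in sorted(set(local) | set(remote)):
        if key in remote and key not in local:
            exc_type = TYPE_1
        elif key in local and key not in remote:
            exc_type = TYPE_2
        elif local[key] != remote[key]:
            exc_type = TYPE_3
        else:
            continue  # records match; no exception
        exceptions.append(DbException(next_id, key, exc_type))
        next_id += 1
    return exceptions

def justified(exc: DbException, events: list) -> bool:
    """An exception is justified when a matching event explains it."""
    actions = [e.action for e in events if e.key == exc.key]
    if exc.exc_type == TYPE_1:
        return "delete" in actions   # deleted locally, still present remotely
    if exc.exc_type == TYPE_2:
        return "add" in actions      # added locally, not yet remote
    # TYPE_3: a modification, or a deletion followed by an addition
    return "modify" in actions or (
        "delete" in actions and "add" in actions
        and actions.index("delete") < actions.index("add"))

def update_is_valid(local: dict, remote: dict, events: list) -> bool:
    return all(justified(x, events) for x in generate_exceptions(local, remote))
```

Per claim 2, an update whose every exception is explained by an event validates; a difference with no matching event leaves `update_is_valid` false, which under claim 5 would trigger a bounded number of retries.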
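Claims 13–16 describe publishing a new element by writing a single pointer, so readers are never locked out and never observe a half-built element. A minimal sketch, assuming a simple in-memory table: the `RemoteTable` class and its methods are hypothetical, and in CPython the single dict assignment that publishes the pointer is atomic, standing in for the claimed uninterruptible store instruction.

```python
class Element:
    """A fully built database element; constructed before it is published."""
    def __init__(self, data):
        self.data = data

class RemoteTable:
    """Illustrative stand-in for the remote database of claim 13."""
    def __init__(self):
        self._slots = {}  # key -> Element; these entries act as the pointers

    def lookup(self, key):
        # Readers follow the pointer without taking any lock.
        elem = self._slots.get(key)
        return elem.data if elem is not None else None

    def replace(self, key, new_data):
        # Build the new element completely off to the side first.
        new_elem = Element(new_data)
        # Publish it with one pointer write: a concurrent reader sees
        # either the old element or the new one, never a partial mix.
        self._slots[key] = new_elem
        # The old element is now unreachable; its later reclamation is
        # the deferred "physical delete" of claim 14.
```

The design point is ordering: all construction work happens before the single publishing write, so that write is the only step readers can race with.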
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US33084201P | 2001-11-01 | 2001-11-01 | |
| US60/330,842 | 2001-11-01 | ||
| US36516902P | 2002-03-19 | 2002-03-19 | |
| US60/365,169 | 2002-03-19 | ||
| PCT/US2002/035081 WO2003038653A1 (en) | 2001-11-01 | 2002-11-01 | Method and system for validating remote database |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1075308A1 HK1075308A1 (en) | 2005-12-09 |
| HK1075308B true HK1075308B (en) | 2010-06-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP4897196B2 (en) | | Method and system for verifying a remote database |
| AU2002350104A1 (en) | | Method and system for validating remote database |
| AU2002356885A1 (en) | | Method and system for updating a remote database |
| US7149759B2 (en) | 2006-12-12 | Method and system for detecting conflicts in replicated data in a database network |
| US8135763B1 (en) | | Apparatus and method for maintaining a file system index |
| US20030115202A1 (en) | | System and method for processing a request using multiple database units |
| HK1075308B (en) | | Method and system for validating remote database |
| Rabinovich et al. | | Scalable Update Propagation in Epidemic Replicated Databases |
| UA79943C2 (en) | | Method and system for updating a remote data base (variants) |