CN118170843A - Non-invasive data acquisition and synchronization method, system and equipment - Google Patents
Non-invasive data acquisition and synchronization method, system and equipment
- Publication number
- CN118170843A CN118170843A CN202410278074.5A CN202410278074A CN118170843A CN 118170843 A CN118170843 A CN 118170843A CN 202410278074 A CN202410278074 A CN 202410278074A CN 118170843 A CN118170843 A CN 118170843A
- Authority
- CN
- China
- Prior art keywords
- data
- debezium
- database
- kafka
- collecting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/25—Integrating or interfacing systems involving database management systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a non-invasive data acquisition and synchronization method, system and equipment, relating to the field of data acquisition and synchronization. The method comprises: obtaining the database logs of multiple types of databases by means of the Debezium plug-in; registering Debezium collection connectors according to the multiple database types; temporarily storing the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors; and synchronizing the collected JSON data into MongoDB using Kettle. The invention can reduce the load on the database side.
Description
Technical Field
The present invention relates to the field of data acquisition and synchronization, and in particular to a non-invasive data acquisition and synchronization method, system and apparatus.
Background
With the continuous development of medical informatization and the urgent demand for data utilization, many hospitals now operate databases of various types (Oracle, SQL Server, MySQL and the like) as a result of business expansion. When a hospital needs to operate on or exploit the data in these heterogeneous databases, the differences between them prevent the data from being used well; if the data in the various databases could be synchronized in real time into a single database of a fixed type, data utilization and operation would be greatly facilitated. Moreover, a hospital's database servers must maintain very strong stability at all times in order to supply data to the various medical systems and other applications. A non-invasive acquisition and synchronization technique that reduces the load on the database side is therefore necessary.
Disclosure of Invention
The invention aims to provide a non-invasive data acquisition and synchronization method, system and equipment that can reduce the load on the database side.
In order to achieve the above object, the present invention provides the following solutions:
A non-invasive data acquisition and synchronization method comprises the following steps:
obtaining the database logs of multiple types of databases by means of the Debezium plug-in;
registering Debezium collection connectors according to the multiple database types;
temporarily storing the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors;
synchronizing the collected JSON data into MongoDB using Kettle.
Optionally, the multiple types of databases include: Oracle, SQL Server and MySQL.
Optionally, temporarily storing the database logs in JSON form in the Kafka message queue by means of the corresponding Debezium collection connectors further includes:
deploying the Zookeeper, Kafka and Kafka Connect components on a Linux system; Debezium relies on Kafka's own Kafka Connect component to implement its functionality, and Kafka relies on Zookeeper to coordinate and manage data.
Optionally, registering the Debezium collection connectors according to the multiple database types specifically includes:
determining an acquisition range;
determining the change position of the acquisition range in the current database;
determining the snapshot data at the change position.
Optionally, registering the Debezium collection connectors according to the multiple database types further includes:
after a Debezium collection connector is registered successfully, three topics are generated in Kafka; the three topics store, respectively, the state, the configuration and the acquisition start position of the Debezium collection connector.
A non-invasive data acquisition and synchronization system comprises:
a database log obtaining module, configured to obtain the database logs of multiple types of databases by means of the Debezium plug-in;
a collection connector registration module, configured to register Debezium collection connectors according to the multiple database types;
a storage module, configured to temporarily store the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors;
and a synchronization module, configured to synchronize the collected JSON data into MongoDB using Kettle.
A non-invasive data acquisition and synchronization device comprises: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the processor, implement the above method.
Optionally, the memory is a computer readable storage medium.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
According to the non-invasive data acquisition and synchronization method, system and equipment provided by the invention, the Debezium plug-in is used to obtain the database logs of multiple types of databases, and change events are captured from the database logs rather than by operating on the data directly; this non-invasive approach further reduces the load on the databases, and the collected data are real-time and accurate. Furthermore, Kafka, used as the message queue for temporarily storing the data, has high throughput and is well suited to large-data-volume environments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of the non-invasive data acquisition and synchronization method according to the present invention;
Fig. 2 is a schematic structural diagram of the non-invasive data acquisition and synchronization method according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The invention aims to provide a non-invasive data acquisition and synchronization method, system and equipment that can reduce the load on the database side.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
As shown in fig. 1, the non-invasive data acquisition and synchronization method provided by the present invention includes:
S101, obtaining the database logs of multiple types of databases by means of the Debezium plug-in; the multiple types of databases include: Oracle, SQL Server and MySQL.
Debezium's collection strategy differs for each database type, but the principle is the same: change events are captured from the database logs. Because only the logs need to be monitored, rather than the databases queried directly, the load on the databases is effectively reduced; each type of database must grant the corresponding privileges so that data can be collected non-invasively.
The Oracle collection principle is to parse the transaction operations recorded in the redo log and archive log using the built-in LogMiner (a log analysis tool shipped by Oracle since release 8i). Debezium reads and parses the database's redo or archive logs through LogMiner, obtaining an ordered series of change operations of the database. The Oracle database must have archive logging (the log recording database operations) and supplemental logging (also called additional logging; it adds extra information to the database log so that LogMiner can parse it) enabled, and a database collection user with the appropriate privileges must be created.
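For reference, a minimal sketch of these Oracle-side prerequisites, assuming SYSDBA access on the database host; the user name, password and privilege set are illustrative and abbreviated:
sqlplus / as sysdba <<'EOF'
-- Enable archive logging (the database must be restarted into MOUNT state first)
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Enable supplemental (additional) logging so that LogMiner can resolve the changes
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- Create a collection user and grant it log-mining privileges
CREATE USER dbzuser IDENTIFIED BY dbz;
GRANT CREATE SESSION, SELECT ANY TRANSACTION, LOGMINING TO dbzuser;
EOF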
The SQL Server collection principle is that, once CDC is enabled, the database generates, per table, a change table mirroring the source table's structure; when insert, update or delete operations (DML operations) are performed on the source table, the transaction log records the related operations, and Debezium obtains an ordered series of change operations by reading and parsing these change tables. The SQL Server database must have the SQL Server Agent service started, then database-level CDC and table-level CDC enabled in turn, and a database collection user with the appropriate privileges created.
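A corresponding sketch of the SQL Server-side prerequisites, assuming the SQL Server Agent service is already running; the database, schema and table names follow the sample connector shown later:
sqlcmd -S localhost -U sa -P password -d test <<'EOF'
-- Enable CDC at the database level, then at the table level
EXEC sys.sp_cdc_enable_db;
GO
EXEC sys.sp_cdc_enable_table @source_schema = N'dbo', @source_name = N'test', @role_name = NULL;
GO
EOF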
The MySQL collection principle is to obtain database changes through MySQL's built-in binlog (a binary log recording all of MySQL's DML and DDL operations). Debezium obtains an ordered series of change operations by reading and parsing the database's binlog. MySQL must have binlog enabled, and a database collection user with the appropriate privileges must be created.
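A sketch of the MySQL-side prerequisites; the my.cnf excerpt and the account are illustrative, and the grants follow the usual Debezium MySQL requirements:
# In my.cnf, enable row-format binlog and restart mysqld:
#   [mysqld]
#   server-id     = 1
#   log-bin       = mysql-bin
#   binlog_format = ROW
mysql -uroot -p <<'EOF'
CREATE USER 'dbzuser'@'%' IDENTIFIED BY 'dbz';
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'dbzuser'@'%';
EOF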
S101 is the prerequisite for data collection: it provides Debezium with the database logs required for collecting from the corresponding database, so that Debezium can capture database changes from the logs of each type of database.
Debezium provides a corresponding collection plug-in (jar package) for each database type. Multiple plug-ins can be placed under the same directory; only the plug-in path needs to be specified in the Kafka Connect configuration file, and Kafka Connect loads the corresponding plug-ins from that path when the components (Zookeeper, Kafka, Kafka Connect) are started. When a Debezium collection connector is registered, the corresponding class (a Java class) is configured according to the type of data to be collected; for Oracle collection, for example, it is "io.debezium.connector.oracle.OracleConnector".
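The plug-in path itself is a single worker property; an illustrative excerpt of the Kafka Connect configuration file, with an assumed directory:
# config/connect-distributed.properties
plugin.path=/opt/kafka/plugins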
S102, registering Debezium collection connectors according to the multiple database types.
The Zookeeper, Kafka and Kafka Connect components are deployed on a Linux system; Debezium relies on Kafka's own Kafka Connect component to implement its functionality, and Kafka relies on Zookeeper to coordinate and manage data. Component installation and start order: Zookeeper, Kafka, Kafka Connect.
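A start-order sketch, assuming a standard Kafka distribution unpacked under /opt/kafka with the configuration files already in place:
cd /opt/kafka
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties           # 1. Zookeeper
bin/kafka-server-start.sh -daemon config/server.properties                  # 2. Kafka
bin/connect-distributed.sh -daemon config/connect-distributed.properties    # 3. Kafka Connect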
Using the REST API exposed by Kafka Connect, a collection connector is registered with Kafka Connect; the connector connects to the source database, invokes the corresponding database collection method, and obtains the database logs.
S102 specifically comprises the following steps:
Determining the acquisition range: Debezium provides flexible and configurable acquisition-range settings, which can be specified through black/white lists, regular expressions, multi-level configuration (library name, user name, table name, column name), and the like.
Determining the change position of the acquisition range in the current database: Oracle marks the position by reading the system SCN (system change number), SQL Server marks it by reading the LSN (log sequence number) in the transaction log, and MySQL directly reads the binlog at the current point in time. The aim is to obtain the starting coordinate of the acquisition, i.e. from which position and time collection starts. This coordinate has two main uses. First, a Debezium connector performs a series of initialization operations during registration; because the database keeps changing in a production environment, the first steps of initialization (determining the tables to collect, locking them, and so on) need a fixed coordinate, and after the remaining steps (obtaining the table structure, releasing the locks, and so on) complete, collection resumes from the data after that coordinate. Second, collection can be restarted from an earlier moment by modifying the coordinate, provided the coordinate is still within the range covered by the database log. In summary, the coordinates of the various database types are an important guarantee that data are collected in real time and in order (illustrative coordinate queries are sketched after this list).
Determining the snapshot data at the change position.
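The coordinate of each database type can be read with queries of the following kind (a sketch; connection details are assumptions):
echo 'SELECT CURRENT_SCN FROM V$DATABASE;' | sqlplus -s / as sysdba     # Oracle: current SCN
sqlcmd -S localhost -Q "SELECT sys.fn_cdc_get_max_lsn();"               # SQL Server: maximum LSN
mysql -uroot -p -e "SHOW MASTER STATUS;"                                # MySQL: current binlog position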
According to the actual situation, the various connector acquisition configuration items are screened and modified, and the finalized collection connector registration is submitted; data collection then proceeds according to the chosen configuration, and multiple connectors can be registered. After a Debezium collection connector is registered successfully, three topics are generated in Kafka; they store, respectively, the state, the configuration and the acquisition start position of the collection connector. Debezium connectors have a parameter "snapshot.mode" that lets the user configure the acquisition mode as needed; each connector implements exactly one acquisition mode. According to the snapshot mode, collection falls roughly into the following three types:
initial_only (full-mode collection): only the historical data at a given moment are collected; subsequent data changes are not collected.
schema_only (incremental-mode collection): only the data changes after a given moment are collected; historical data are not collected.
initial (full + incremental collection): the historical data at a given moment are collected first; once the historical collection finishes, subsequent changes are collected continuously from the snapshot point of the historical collection, ensuring that no data are missed.
A sample collection-connector registration request, with a partial parameter description:
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" http://192.168.33.8:8085/connectors/ -d '
{
"name":"yangli",
"config":{
"connector.class":"io.debezium.connector.oracle.OracleConnector",
"tasks.max":"1",
"database.server.name":"yangli",
"database.hostname":"192.168.33.8",
"database.port":"1521",
"database.user":"user",
"database.password":"password",
"database.dbname":"yangli",
"snapshot.mode":"schema_only",
"database.history.kafka.bootstrap.servers":"192.168.33.8:9092",
"database.history.kafka.topic":"schema-changes.inventory",
"table.include.list":"test.test"
}
}'
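Once the registration request has been accepted, the running state of the connector can be checked through the same REST interface; host and port follow the sample above:
curl -s http://192.168.33.8:8085/connectors/yangli/status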
The parameters above are explained in Table 1:
TABLE 1
| Parameter | Description |
|---|---|
| name | Name of the collection connector |
| connector.class | Java class of the collection plug-in (here, the Oracle connector) |
| tasks.max | Maximum number of collection tasks |
| database.server.name | Logical server name, used as the prefix of generated topic names |
| database.hostname | Address of the source database |
| database.port | Port of the source database |
| database.user | Collection user name |
| database.password | Collection user password |
| database.dbname | Name of the database to collect |
| snapshot.mode | Acquisition mode, as described above |
| database.history.kafka.bootstrap.servers | Kafka address where the database history is stored |
| database.history.kafka.topic | Kafka topic where the database history is stored |
| table.include.list | White list of tables to collect |
Topics are generated per table according to a fixed naming rule (servername.schema.tablename). The data are stored in JSON-serialized form in the topics of the Kafka Broker (Kafka server), and within each Partition (message partition) the data are stored in order of their Offset.
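The generated topics can be inspected with the standard Kafka command-line tools (a sketch, assuming a recent Kafka distribution; Oracle stores schema and table names in upper case, so the sample table appears as yangli.TEST.TEST):
bin/kafka-topics.sh --bootstrap-server 192.168.33.8:9092 --list
bin/kafka-console-consumer.sh --bootstrap-server 192.168.33.8:9092 --topic yangli.TEST.TEST --from-beginning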
S103, using the corresponding Debezium collection connectors to temporarily store the database logs in JSON form in the Kafka message queue.
S104, synchronizing the collected JSON data into MongoDB using Kettle.
S104 is divided into two modules. The first is the data reading module, which reads the collected data using Kettle's built-in Kafka Consumer component. The second is the data writing module, which connects to the target library (MongoDB) through JDBC in Kettle and writes the data after reading, consuming the data in order according to the Offset value in Kafka.
The Kafka Consumer component in the reading module must also be configured with the following parameters:
1. Bootstrap servers (Kafka service address): the configured Kafka address; the port number is 9092.
2. Topic (topic name): the topic names to consume; multiple topics can be configured.
3. Consumer group (consumer group name): a custom consumer group name.
4. Duration (batch processing interval): limits the time spent reading data; when the time limit is reached, reading is suspended, and the next batch is read after the current batch has been processed.
5. Number of records (batch size): the number of records handed to subsequent processing. When a batch enters the subsequent processing flow, the maximum Offset within the batch is recorded and used as the starting read position of the next batch. When the configured count is reached, reading is likewise suspended, and the next batch is read after the current batch has been processed. This setting and Duration jointly limit the amount of data read; as soon as either condition is met, the reading module pauses and hands over to the next processing module.
6. Maximum concurrent batches: supports highly concurrent consumption.
7. Message prefetch limit: a limit on the amount of data read ahead. Because the read speed tends to exceed the processing speed, this value prevents Kettle from crashing due to an excessive volume of read data.
8. Offset management: two options set the offset commit mode; either the consumed offset is committed when the data are read, or the offset is committed after the batch has been processed.
The write module and its processing logic are described as follows:
Data are stored in Kafka as key-value pairs. Each record in Kafka has the following four fields: Offset, Key (data identifier), Value (data content) and Timestamp.
Offset: each record in Kafka carries a consecutive sequence number that uniquely identifies it within its Partition. Since Kafka is a message queue, the Offset value increases record by record in the order in which the data enter the queue.
Key: this field holds the primary key or unique index value of the data collected by Debezium.
Value: this field holds the JSON-serialized data content collected by Debezium. It contains six objects: before (state before the operation), after (state after the operation), source (data source), op (operation type), ts_ms (timestamp) and transaction; the write module processes the four objects after, source, op and ts_ms. The op object takes one of four values: c (create), u (update), d (delete) and r (read; this operation type occurs only during full collection).
The write module handles the four operation types uniformly. A collection is determined from the db, schema and table attributes of the source object in the Value; the primary key or index provided in the Key is used to find the existing record in that collection and delete it, and the record is then re-inserted, the inserted content being taken from the after object (a mongosh sketch of this delete-then-insert logic is given after the field descriptions below). The write flow is as follows: a MongoDB driver package is imported into Kettle, and the MongoDB library is connected through JDBC. First, the value in the Key is modified (only the primary-key information is retained) and the Key and Value messages are converted into Document objects; processing then branches on the op value (r, c, d, u). For all four operation types, the collection is located first, the record is looked up by its unique index, MongoDB.getCollection().deleteOne() is called to delete it from the library, and MongoDB.getCollection().insertOne() is then called to write the record to MongoDB, with the corresponding op_ and op_ts_ms (Timestamp) fields from Kafka appended after the data. The d operation differs from the other three only in what is written: the other three write the after value, whereas the d operation writes the before value into a separate collection (named db_schema_table_delete, preserving a trace of the operation). However, data come in many varieties and different inputs require different handling, so this step outputs different results for different data; for example, records without a primary key or index are not processed, and records whose value is null are likewise not processed. Which MongoDB collection a record is written into is determined by the db, schema and table attributes of the source object; when the target does not yet exist in the MongoDB library, the write module creates the collection and its index from the object in the data Key and the attributes of the source object, the collection name being the concatenation db_schema_table. MySQL is a special case: its source object has no schema attribute, so the collection is named db_db_table. While writing the after value, the write module also appends two fields, op_ and op_ts_ms, to every record to indicate the operation type and the timestamp at which Debezium processed the data.
Timestamp: this field holds the time at which Debezium collected the record, i.e. the time at which it entered the Kafka message queue.
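The delete-then-insert logic referred to above can be pictured in mongosh terms as follows (a sketch only: the library, collection, key and field values are illustrative, and Kettle performs the equivalent calls through the MongoDB driver):
mongosh --quiet <<'EOF'
// Collection name spliced as db_schema_table from the source object
const coll = db.getSiblingDB("sync").getCollection("yangli_TEST_TEST");
// Remove the previous state of the record by its unique key, then insert the new state,
// appending the op_ and op_ts_ms trace fields described above
coll.deleteOne({ ID: 1 });
coll.insertOne({ ID: 1, NAME: "test", op_: "u", op_ts_ms: 1710230400000 });
EOF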
Sample Value message body:
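An illustrative sketch of the message body; the field values are assumptions consistent with the sample connector above, and real messages carry additional source metadata:
{
"before": null,
"after": { "ID": 1, "NAME": "test" },
"source": { "db": "yangli", "schema": "TEST", "table": "TEST" },
"op": "c",
"ts_ms": 1710230400000,
"transaction": null
}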
As shown in fig. 2, the architecture can broadly be divided into a production side and a consumption side.
The production side consists of the source database and the data acquisition module; as the structural diagram shows, the three components Kafka, Kafka Connect and Zookeeper together form the data acquisition module. Kafka Connect uses Debezium to collect the logs of the source database, Kafka temporarily stores the data collected by Debezium, and Zookeeper is responsible for management, for example: configuration information of Kafka nodes joining and leaving; creation, deletion and partitioning of topics; and management and internal coordination of consumer groups.
The consumption side adopts a consumer-subscription model. A consumer group may contain multiple consumers, which increases consumption throughput, but consumers within the same group cannot consume the same Partition of the same Topic, which avoids the problem of duplicate consumption. Under the subscription model, if several consumers subscribe to a Topic, then whenever the Topic produces data, the messages are pushed to all subscribed consumers.
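As a sketch, two console consumers started with the same group name split the partitions of a topic between them, while consumers in different groups each receive the full stream; addresses and names follow the samples above:
bin/kafka-console-consumer.sh --bootstrap-server 192.168.33.8:9092 --topic yangli.TEST.TEST --group kettle-sync
bin/kafka-console-consumer.sh --bootstrap-server 192.168.33.8:9092 --topic yangli.TEST.TEST --group kettle-sync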
Corresponding to the above method, the invention also provides a non-invasive data acquisition and synchronization system, which comprises:
a database log obtaining module, configured to obtain the database logs of multiple types of databases by means of the Debezium plug-in;
a collection connector registration module, configured to register Debezium collection connectors according to the multiple database types;
a storage module, configured to temporarily store the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors;
and a synchronization module, configured to synchronize the collected JSON data into MongoDB using Kettle.
In order to execute the method corresponding to the above embodiment and achieve the corresponding functions and technical effects, the present invention further provides a non-invasive data acquisition and synchronization device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the processor, implement the above method.
The memory is a computer-readable storage medium.
Based on the above description, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk or an optical disk.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description is intended only to assist in understanding the method of the present invention and its core ideas. Those of ordinary skill in the art may, in accordance with the ideas of the present invention, make modifications to the specific embodiments and the scope of application. In view of the foregoing, this description should not be construed as limiting the invention.
Claims (8)
1. A non-invasive data acquisition and synchronization method, comprising:
obtaining the database logs of multiple types of databases by means of the Debezium plug-in;
registering Debezium collection connectors according to the multiple database types;
temporarily storing the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors;
synchronizing the collected JSON data into MongoDB using Kettle.
2. The non-invasive data acquisition and synchronization method according to claim 1, wherein the multiple types of databases comprise: Oracle, SQL Server and MySQL.
3. The non-invasive data acquisition and synchronization method according to claim 1, wherein temporarily storing the database logs in JSON form in the Kafka message queue by means of the corresponding Debezium collection connectors further comprises:
deploying the Zookeeper, Kafka and Kafka Connect components on a Linux system; Debezium relies on Kafka's own Kafka Connect component to implement its functionality, and Kafka relies on Zookeeper to coordinate and manage data.
4. The non-invasive data acquisition and synchronization method according to claim 3, wherein registering the Debezium collection connectors according to the multiple database types specifically comprises:
determining an acquisition range;
determining the change position of the acquisition range in the current database;
determining the snapshot data at the change position.
5. The non-invasive data acquisition and synchronization method according to claim 1, wherein registering the Debezium collection connectors according to the multiple database types further comprises:
after a Debezium collection connector is registered successfully, three topics are generated in Kafka; the three topics store, respectively, the state, the configuration and the acquisition start position of the Debezium collection connector.
6. A non-invasive data acquisition and synchronization system, comprising:
a database log obtaining module, configured to obtain the database logs of multiple types of databases by means of the Debezium plug-in;
a collection connector registration module, configured to register Debezium collection connectors according to the multiple database types;
a storage module, configured to temporarily store the database logs in JSON form in a Kafka message queue by means of the corresponding Debezium collection connectors;
and a synchronization module, configured to synchronize the collected JSON data into MongoDB using Kettle.
7. A non-invasive data acquisition and synchronization device, comprising: at least one processor, at least one memory, and computer program instructions stored in the memory which, when executed by the processor, implement the method of any one of claims 1-5.
8. The non-invasive data acquisition and synchronization device of claim 7, wherein the memory is a computer-readable storage medium.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410278074.5A CN118170843A (en) | 2024-03-12 | 2024-03-12 | Non-invasive data acquisition and synchronization method, system and equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410278074.5A CN118170843A (en) | 2024-03-12 | 2024-03-12 | Non-invasive data acquisition and synchronization method, system and equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118170843A true CN118170843A (en) | 2024-06-11 |
Family
ID=91355803
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410278074.5A Pending CN118170843A (en) | Non-invasive data acquisition and synchronization method, system and equipment | 2024-03-12 | 2024-03-12 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN118170843A (en) |
-
2024
- 2024-03-12 CN CN202410278074.5A patent/CN118170843A/en active Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109800222B (en) | HBase secondary index self-adaptive optimization method and system | |
| JP6266630B2 (en) | Managing continuous queries with archived relations | |
| CN111324610A (en) | Data synchronization method and device | |
| US11544229B1 (en) | Enhanced tracking of data flows | |
| CN112883125A (en) | Entity data processing method, device, equipment and storage medium | |
| CN111752920A (en) | Method, system and storage medium for managing metadata | |
| CN110457279B (en) | Data offline scanning method and device, server and readable storage medium | |
| CN116501700B (en) | APP formatted file offline storage method, device, equipment and storage medium | |
| CN114547206A (en) | Data synchronization method and data synchronization system | |
| CN114661823A (en) | Data synchronization method, apparatus, electronic device and readable storage medium | |
| CN114138247A (en) | Interface management method and system suitable for multiple systems | |
| CN110389939A (en) | An IoT storage system based on NoSQL and distributed file system | |
| CN111061802B (en) | Power data management processing method, device and storage medium | |
| CN116186082A (en) | Data summarizing method based on distribution, first server and electronic equipment | |
| CN115952142A (en) | System, method, device, processor and storage medium for realizing transaction log storage and message information extraction and summarization in trusted environment | |
| CN118170843A (en) | Non-invasive data acquisition and synchronization method, system and equipment | |
| CN118069750A (en) | Data processing method and device | |
| US11914655B2 (en) | Mutation-responsive documentation generation based on knowledge base | |
| CN116955492A (en) | A data synchronization method, system, device, equipment and storage medium | |
| CN113553320B (en) | Data quality monitoring method and device | |
| CN117407463A (en) | DDL monitoring processing-based data synchronization method and system | |
| CN116820874A (en) | Enterprise-level big data component and method for monitoring and alarming application | |
| CN115982231A (en) | Distributed real-time search system and method | |
| CN115051981B (en) | Zookeeper-based asynchronous downloading method and device | |
| US20250202992A1 (en) | Zero-byte filename-based telemetry |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |