HK40009099A - Method and system for double anonymization of data
- Publication number: HK40009099A
- Authority: HK (Hong Kong)
- Prior art keywords: computing system, data, identifier, anonymization, hash
Description
Cross Reference to Related Applications
This application claims the benefit of and priority to U.S. Patent Application No. 62/397,828, filed on September 21, 2016. The entire disclosure of the above application is incorporated herein by reference.
Technical Field
The present disclosure relates to the double anonymization of data, and in particular to the use of multiple anonymization processes, performed on data by separate and distinct computing systems, to ensure the highest level of privacy while maintaining the availability of the anonymized data.
Background
Data is collected on individuals and other entities on a daily basis. In many cases, individuals may not want their associated data to be provided to other entities or used in particular ways. To address this, systems have been developed to anonymize data, so that personally identifiable data, which may identify the individual concerned, is removed, encrypted, hashed, or otherwise obscured, and the non-personally identifiable data can be used freely without sacrificing the privacy of the individual. One such method for anonymizing data to preserve personal privacy and security is described in U.S. Patent No. 9,123,054, entitled "Method and System for Preserving Privacy in Scoring of Consumer Spending Behavior," to Curtis Villars et al., which is incorporated herein by reference in its entirety.
While these methods may be useful for anonymizing various data for various purposes, in some cases, these existing processes may not be sufficient. For example, some government agencies and other entities may require greater privacy and security than is currently available. In addition, individuals and other entities are very interested in ensuring that their data is as anonymous as possible to ensure their personal safety and security. Therefore, there is a need for a technical solution to provide greater anonymization of data such that the data does not contain any personally identifiable information and yet maintains a high level of security and privacy without sacrificing usability.
Disclosure of Invention
The present disclosure provides a description of systems and methods for the dual anonymization of data. The systems and methods discussed herein use multiple computing systems that are independent of and distinct from one another, each of which performs an anonymization process, to ensure that data is sufficiently anonymized while also ensuring that no single computing system retains, or in some cases even comes into contact with, personally identifiable information. In some cases, the dual-anonymized data may even be stored and hosted by a third computing system, further increasing the privacy and security provided by the systems and methods discussed herein.
A method for dual anonymization of data includes: receiving, by a receiving device of a first computing system, a plurality of first data sets, each first data set including at least a set identifier and personally identifiable information; anonymizing, by the first computing system, each first data set, wherein anonymizing includes at least replacing the set identifier included in each first data set with a hashed identifier and de-identifying the personally identifiable information, wherein the hashed identifier is generated by applying one or more hashing algorithms to at least the corresponding set identifier; electronically transmitting, by a transmitting device of the first computing system, the plurality of anonymized first data sets to a receiving device of a second computing system, wherein the second computing system is distinct and separate from the first computing system; anonymizing, by the second computing system, each anonymized first data set, wherein anonymizing includes at least replacing the hashed identifier with a double-hashed identifier, the double-hashed identifier being generated by applying one or more hashing algorithms to at least the corresponding hashed identifier; and storing the plurality of double-anonymized first data sets in the second computing system or in a separate and distinct third computing system.
A system for dual anonymization of data includes a first computing system and a second computing system, wherein the first computing system is configured to: receive, by a receiving device of the first computing system, a plurality of first data sets, each first data set including at least a set identifier and personally identifiable information; anonymize each first data set, wherein anonymizing includes at least replacing the set identifier included in each first data set with a hashed identifier and de-identifying the personally identifiable information, wherein the hashed identifier is generated by applying one or more hashing algorithms to at least the corresponding set identifier; and transmit, by a transmitting device of the first computing system, the plurality of anonymized first data sets to a receiving device of the second computing system, wherein the second computing system is distinct and separate from the first computing system, and the second computing system is configured to anonymize each anonymized first data set, wherein anonymizing includes at least replacing the hashed identifier with a double-hashed identifier generated by applying one or more hashing algorithms to at least the corresponding hashed identifier, and wherein the second computing system or a separate and distinct third computing system is configured to store the plurality of double-anonymized first data sets.
Drawings
The scope of the present disclosure is best understood from the following detailed description of exemplary embodiments when read in conjunction with the accompanying drawings. Included in the drawings are the following figures:
FIG. 1 is a block diagram illustrating a high-level system architecture for data double anonymization in accordance with the illustrative embodiments.
Fig. 2 is a block diagram illustrating a processing server configured to perform the functions of the first and second anonymization systems of the system of fig. 1, according to an exemplary embodiment.
Fig. 3 is a flow diagram illustrating a process of double anonymizing and hosting data in the system of fig. 1, according to an example embodiment.
Fig. 4 is a flowchart illustrating a process of using double anonymized data using the system of fig. 1 according to an example embodiment.
Fig. 5 is a flowchart illustrating an exemplary method for data double anonymization according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating a computer system architecture in accordance with an illustrative embodiment.
Further areas of applicability of the present disclosure will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description of the exemplary embodiments is for purposes of illustration only and is not intended to necessarily limit the scope of the disclosure.
Detailed Description
Summary of terms
Personally identifiable information (PII) - PII includes information that may be used, alone or in combination with other sources, to uniquely identify an individual. What constitutes personally identifiable information may be defined by a third party, such as a governmental agency (e.g., the U.S. Federal Trade Commission, the European Commission, etc.), a non-governmental organization (e.g., the Electronic Frontier Foundation), industry practice, consumers (e.g., through consumer surveys, contracts, etc.), codified laws, regulations, or statutes, and so forth. The present disclosure provides methods and systems in which the first anonymization system 102 and the second anonymization system 104 do not possess any personally identifiable information. Systems and methods for anonymizing potentially personally identifiable information, such as bucketing, may be used, as will be apparent to persons having skill in the relevant art. Bucketing may include aggregating information that may otherwise be personally identifiable (e.g., age, income, etc.) into a bucket (e.g., a grouping) so that the information is no longer personally identifiable. For example, a 26-year-old consumer who earns $65,000 may be unique in a particular environment, but may instead be represented by an age bucket of 21-30 years and an income bucket of $50,000 to $74,999, both of which may also represent many other consumers, such that the bucketed information is no longer personally identifiable to that consumer. In other embodiments, encryption may be used. For example, potentially personally identifiable information (e.g., an account number) may be encrypted (e.g., using one-way encryption) such that the first anonymization system 102 and the second anonymization system 104 do not possess the PII and are unable to decrypt the encrypted PII. Additional information on PII anonymization may be found in the United Kingdom Information Commissioner's Office's Anonymisation: Managing Data Protection Risk Code of Practice, which is incorporated herein by reference in its entirety.
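A minimal sketch of the bucketing technique described above, in Python; the bucket boundaries follow the example in this paragraph, and the function names and exact ranges are assumptions rather than part of the disclosure:

```python
# Illustrative sketch of bucketing: aggregate exact values into ranges so
# that they are no longer personally identifiable. Boundaries follow the
# example above (ages 21-30, income $50,000-$74,999) and are assumptions.
def bucket_age(age: int) -> str:
    for low, high in [(0, 20), (21, 30), (31, 40), (41, 50), (51, 120)]:
        if low <= age <= high:
            return f"{low}-{high}"
    return "unknown"

def bucket_income(income: int) -> str:
    for low, high in [(0, 24_999), (25_000, 49_999), (50_000, 74_999), (75_000, 10**9)]:
        if low <= income <= high:
            return f"${low:,}-${high:,}"
    return "unknown"

# A 26-year-old earning $65,000 becomes indistinguishable from many others.
print(bucket_age(26), bucket_income(65_000))  # "21-30" "$50,000-$74,999"
```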
Payment network - a system or network used for the transfer of money, via the use of cash substitutes, for thousands, millions, or even billions of transactions within a given period of time. Payment networks may use a variety of different protocols and procedures in order to process the transfer of money for various types of transactions. Transactions that may be performed via a payment network may include product or service purchases, credit purchases, debit transactions, funds transfers, account withdrawals, and the like. The payment network may be configured to perform transactions via cash substitutes, which may include payment cards, credit cards, checks, transaction accounts, and the like. Examples of networks or systems configured to implement a payment network include those operated by major card networks. As used herein, the term "payment network" may refer to the payment network as an entity as well as to the physical payment network, such as the devices, hardware, and software that make up the payment network.
Payment rails - infrastructure associated with a payment network, used in the processing of payment transactions and the communication of transaction messages and other similar data between the payment network and other entities interconnected with the payment network, that handles thousands, millions, or even billions of transactions in a given period of time. The payment rails may include the hardware used to establish the payment network and the interconnections between the payment network and other related entities (e.g., financial institutions, gateway processors, etc.). In some cases, the payment rails may also be affected by software, for example through special programming of the communication hardware and devices that make up the payment rails. For example, the payment rails may include specially configured computing devices specifically configured for routing transaction messages, which may be specially formatted data messages that are electronically transmitted via the payment rails, as discussed in more detail below.
System for data double anonymization
FIG. 1 illustrates a system 100 for the dual anonymization of data, which provides a significantly higher level of privacy and security to the associated individuals while maintaining the usability of the data.
The system 100 may include a first anonymization system 102. The first anonymization system 102 may include one or more processing servers, such as the processing server 200 discussed in more detail below, and may be configured to perform a first anonymization process on received data. The first anonymization system 102 may receive multiple data sets to be anonymized, where each data set may include a set identifier and personally identifiable information. The first anonymization process may include de-identifying each data set to remove or obscure the personally identifiable information, and replacing the set identifier with a hash value.
The set identifier may be a unique value associated with a data set that is used to identify the data set. For example, the set identifier may be a user identification number, an email address, a username, a telephone number, a registration number, or another suitable value. Replacing the set identifier with the hash value may involve the first anonymization system 102 hashing the set identifier by applying one or more hashing algorithms to it. In an exemplary embodiment, the one or more hashing algorithms may include a collision-resistant algorithm, such as Secure Hash Algorithm 2 (SHA-2), such that two different inputs do not produce the same hash value. In some embodiments, the first anonymization system 102 may be configured to discard the set identifier when the corresponding hashed identifier is generated. For example, the first anonymization system 102 may be configured to hash the set identifier immediately upon receipt, such that the set identifier, once received, is never stored or otherwise retained.
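A minimal sketch of this hash-on-receipt behavior, assuming Python and SHA-256 (a member of the SHA-2 family); the disclosure does not mandate a specific algorithm, encoding, or record format, so those choices are assumptions:

```python
import hashlib

# Illustrative sketch: the first anonymization system replaces each set
# identifier with a collision-resistant hash as soon as it is received,
# so the raw identifier is never retained. Hex encoding and the function
# name are assumptions.
def hash_identifier(set_identifier: str) -> str:
    return hashlib.sha256(set_identifier.encode("utf-8")).hexdigest()

record = {"set_identifier": "user@example.com", "age_group": "B"}
record["hashed_identifier"] = hash_identifier(record.pop("set_identifier"))
print(record)  # the original email address is no longer present
```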
De-identification of the personally identifiable information may involve any process intended to remove, disguise, or otherwise obscure the personally identifiable information contained in the data set. In a first example, the first anonymization system 102 may be configured to replace personal identification data values in the data set with variables in order to disguise the affected data values. For example, if the data set includes demographic characteristics of individuals, including an age group for each individual, each age group may be replaced by a corresponding variable (e.g., "A" for under 25 years old, "B" for 26-35, "C" for 36-45, etc.). The first anonymization system 102 may be configured to disguise all personally identifiable information in each data set and, in some cases, all data values in the data set. In some embodiments, replacing the set identifier with the hashed identifier may accomplish the de-identification of the personally identifiable information of the data set. In some cases, the first anonymization system 102 may enable individuals to opt out of the use of their data in the methods discussed herein.
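A minimal sketch of replacing personal data values with variables, using the age-group mapping from the example above; the mapping table and field names are assumptions:

```python
# Illustrative sketch of de-identification by recoding: personal data
# values are replaced by opaque variables so the processed values no
# longer identify an individual.
AGE_GROUP_CODES = {"under 25": "A", "26-35": "B", "36-45": "C"}

def de_identify(data_set: dict) -> dict:
    out = dict(data_set)
    if "age_group" in out:
        out["age_group"] = AGE_GROUP_CODES.get(out["age_group"], "X")
    return out

print(de_identify({"hashed_identifier": "ab12...", "age_group": "26-35"}))
```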
In some cases, the first anonymization system 102 may include multiple different processing servers, which may be organized into separate computing environments for additional privacy and security in the de-identification and anonymization of data sets. Methods for de-identifying data sets using multiple computing environments in a single system are discussed in more detail in U.S. Patent No. 9,123,054, entitled "Method and System for Preserving Privacy in Scoring of Consumer Spending Behavior," to Curtis Villars et al., which is incorporated herein by reference in its entirety. In some cases, the first anonymization system 102 may utilize a hardware security module in the performance of one or more of the functions discussed herein.
Once the first anonymization system 102 has anonymized the data sets, the first anonymization system 102 may electronically transmit the anonymized data sets to the second anonymization system 104 for a second anonymization process to be performed thereon. The second anonymization system 104 may be a distinct and independent computing system, which may include one or more processing servers, such as the processing server 200 discussed in more detail below. In an exemplary embodiment, the second anonymization system 104 may be controlled by an entity separate from the entity controlling the first anonymization system 102, e.g., with the two systems operated by different business entities. In such embodiments, an entity controlling one anonymization system is therefore unable to control the other anonymization system. In other embodiments, the entities controlling the anonymization systems may be partners or have any other suitable type of agreement under which the anonymization systems cooperate, but the anonymization systems still operate independently. In some cases, the operating infrastructure of the first anonymization system 102 and the second anonymization system 104 may be based on, and comply with, the applicable rules and regulations of the jurisdictions in which the systems are located. This may include physically separating the first anonymization system 102 from the second anonymization system 104 and/or separating them into separate and independent legal entities. Independence has specific legal definitions within the relevant jurisdictions, particularly under laws governing individual privacy and personal data. This independence may be achieved via minority-owned subsidiaries, vendor hosting companies, vendor licensees, and/or third party companies. Another model is for the second anonymization system 104 to be held by a trust, which would allow the entity holding the first anonymization system 102 to be a beneficiary of that trust. A trust structure would have the benefit of allowing multiple first anonymization systems 102, each held by a different trust beneficiary, to provide data to a second anonymization system 104 held by the trust. In some embodiments, the first anonymization system 102 may discard the data values included in each data set after transmission to the second anonymization system 104.
The second anonymization system 104 may receive the anonymized data set and may be configured to perform a second anonymization process on the data set for dual anonymization of the data. The second anonymization process may include at least replacing the hashed identifier with a double hashed identifier. The double hash identifier may be generated by applying one or more hash algorithms to the hash identifiers included in each respective data set. In some embodiments, one or more hashing algorithms identical to those that generate the hashed identifiers may be used. In other embodiments, the second anonymization system 104 may use at least one different hashing algorithm. In some cases, the one or more hashing algorithms may be collision resistant.
The second hash of the set identifier used to generate the double-hashed identifier may ensure that the first anonymization system 102 cannot match the identifier back to the original data because the first anonymization system 102 cannot match the double-hashed identifier to the hashed identifier. Similarly, the second anonymization system 104, which never receives any personally identifiable information or original set identifier, cannot match the hashed or doubly hashed identifier with the set identifier or personally identifiable information. After generating the double-hashed identifier by the second anonymization system 104, the second anonymization system 104 may electronically send the double-anonymized data set to the hosting entity 106. The hosting entity 106 may be a separate and distinct entity from the entities controlling the first anonymization system 102 and the second anonymization system 104.
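A minimal sketch of the second anonymization step, assuming the same SHA-256 choice as in the earlier sketch (as noted above, the second anonymization system may in practice use the same or a different algorithm); function and field names are assumptions:

```python
import hashlib

# Illustrative sketch of the double-hash step performed by the second
# anonymization system: the hashed identifier received from the first
# system is hashed again, so the record that moves on carries only the
# double-hashed identifier and de-identified data values.
def double_hash(hashed_identifier: str) -> str:
    return hashlib.sha256(hashed_identifier.encode("utf-8")).hexdigest()

anonymized = {"hashed_identifier": "9f86d081884c7d65...", "age_group": "B"}
anonymized["double_hashed_identifier"] = double_hash(
    anonymized.pop("hashed_identifier")
)
print(anonymized)  # ready to be sent to the hosting entity
```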
The hosting entity 106 may receive the double-anonymized data sets, which include the double-hashed identifiers and other data values. The hosting entity 106 may then store the data sets in one or more databases included therein or accessible thereby. In some embodiments, the second anonymization system 104 may discard the data values included in each data set and, in some cases, may also discard the hashed identifiers and the double-hashed identifiers. In such embodiments, the only entity that possesses the de-identified data values is the hosting entity 106, which may not receive or possess any set identifiers, hashed identifiers, or personally identifiable information.
In some embodiments, the second anonymization system 104 may operate as the hosting entity 106, such that the double-hashed identifiers and the corresponding data are stored by the second anonymization system 104. In such embodiments, the second anonymization system 104 may be controlled by an entity separate from the entity controlling the first anonymization system. Also in such embodiments, the generation of the double-hashed identifiers may be performed automatically upon receipt of the anonymized data sets (e.g., by the second anonymization system 104 or by a third-party computing system located between the first anonymization system 102 and the second anonymization system 104), such that the second anonymization system 104 does not retain, or in some cases never receives, the hashed identifiers. In such an automated process, the anonymization process may be established in a manner unknown to any party, for example by using a randomized salt during hashing.
To access the data, the requesting entity 108 may submit a data request to the first anonymization system 102 or the second anonymization system 104. The data request may include one or more set identifiers or hashed identifiers and may indicate one or more data values requested in association therewith. For example, in the example above, the data request may be a list of twenty set identifiers for which all available demographics are requested. If applicable, the first anonymization system 102 may generate the hashed identifiers corresponding to the set identifiers, and may forward the hashed identifiers and the requested data values to the second anonymization system 104. The second anonymization system 104 may generate a double-hashed identifier for each hashed identifier for internal use within the second anonymization system 104, and may query the hosting entity 106 for the corresponding data values. In some embodiments, the hosting entity 106 may provide an application programming interface (API) through which the second anonymization system 104 may query for data values. The second anonymization system 104 may receive the data values and provide to the requesting entity 108 those data values that satisfy the anonymization criteria enforced by the control layer, discussed below. In some embodiments, to ensure the anonymity of the data, the second anonymization system 104 may not provide the requesting entity 108 or the first anonymization system 102 with any set identifiers or personally identifiable information. In some cases, the first anonymization system 102 and/or the second anonymization system 104 may use one or more different hashing algorithms for each data request received from the requesting entity 108.
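A simplified end-to-end sketch of this request flow, under the assumption that the hosting entity exposes a simple lookup keyed by the double-hashed identifier; the in-memory store, the function names, and the API shape are all hypothetical:

```python
import hashlib

def sha256(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hypothetical stand-in for the hosting entity's store and API:
# double_hashed_identifier -> de-identified data values.
HOSTED_DATA = {}

def first_system_request(set_identifiers):
    # First anonymization system: hash the set identifiers and forward them.
    return second_system_request([sha256(s) for s in set_identifiers])

def second_system_request(hashed_identifiers):
    # Second anonymization system: double-hash and query the hosting entity.
    return [HOSTED_DATA.get(sha256(h)) for h in hashed_identifiers]

# Example: seed the store and run one request end to end.
HOSTED_DATA[sha256(sha256("user@example.com"))] = {"age_group": "B"}
print(first_system_request(["user@example.com"]))  # [{'age_group': 'B'}]
```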
In some cases, where re-identification of anonymous individuals is permitted, the requesting entity 108 may be configured to match the data value with a set identifier. In this case, the requesting entity 108 may use a hashing algorithm used by the first anonymization system 102 and the second anonymization system 104 to generate a double hashed identifier for each set identifier for which data is requested. The requesting entity 108 may receive the data value and its corresponding double-hashed identifier from the second anonymization system 104, and may then use the double-hashed identifier to match the data value to the set identifier. As a result, the requesting entity 108 may identify a data value for a set identifier, which may include the use or possession of personally identifiable information, without any of the first anonymizing system 102, the second anonymizing system 104, or the hosting entity 106 processing personally identifiable information.
In some embodiments, the hosting entity 106 (or the second anonymization system 104, if used to store the data values) may include a control layer. The control layer may be configured to enforce compliance with applicable rules, standards, regulations, etc., regarding the anonymization of data and its availability. For example, the control layer may be configured with a plurality of rules regarding the aggregation, anonymization, and availability of the data provided to the first anonymization system 102. For instance, the control layer may have a rule ensuring that data values obtained from the hosting entity 106 are identified from at least a predetermined number of data sets (e.g., demographics from at least 25 people in a given zip code), which may also be aggregated to ensure the privacy of the relevant individuals. In some cases, the control layer may apply different rules to the identified data values depending on the number of data values and/or data sets for which the requesting entity 108 requests data (e.g., a government agency may be subject to different rules than a private person, and different data types may be subject to different rules). In some cases, the control layer may be configured to perform additional processing on the identified data values. For example, the control layer may match (e.g., explicitly or by inference) the data to publicly or privately available external data (e.g., match transaction data associated with a geographic location to public census data for that geographic location), may provide analysis of the identified data values, may provide a visual representation of the identified data values (e.g., a map or graph, a visual comparison, a report, etc.), or may generate a model of the identified data values, e.g., for predicting future data values.
The control layer may be configured to remove unique outliers in the data sets when the data is loaded, including assessing whether the data needs to be scrambled to satisfy K-anonymity (e.g., segments of individual records that are sufficiently large, e.g., ensuring that at least five people transacted at a particular merchant on a particular day) and L-diversity (e.g., segments that are not too unique, e.g., comprising at least 120 people) and, if scrambling is needed, scrambling the data to meet the level required for the specified segment size and segment commonality. The control layer may also be configured to apply other data modification techniques to the loaded data, such as data re-encoding (e.g., de-identifying columns in a table), rounding data values to limit their uniqueness, and creating micro-clusters (e.g., micro-segments) based on the uniqueness of the data, then creating a cluster set identifier and deleting the unique identifiers of all records in the cluster. The control layer may also test the sufficiency of the anonymization measures through differential privacy analysis, including setting a differential privacy tolerance for each data set.
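A minimal sketch of one such control-layer rule, a minimum segment size below which results are suppressed; the threshold of 25 follows the zip-code example above and is otherwise an assumption:

```python
from collections import Counter

# Illustrative sketch of a minimum-segment-size (k) rule: only release an
# aggregate when it is backed by at least k underlying data sets.
def k_anonymous_counts(records, key, k=25):
    counts = Counter(r[key] for r in records)
    return {value: n for value, n in counts.items() if n >= k}

records = [{"zip": "10001"}] * 30 + [{"zip": "99999"}] * 3
print(k_anonymous_counts(records, "zip"))  # {'10001': 30}; '99999' suppressed
```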
In some embodiments, the data that is double anonymized in the system 100 may be transaction data. The transaction data may be data related to electronic payment transactions involving one or more individuals or entities and may include, for example, a primary account number, a transaction amount, a transaction time, a transaction date, a currency type, a geographic location, a merchant identifier, a merchant name, a merchant category code, product data, merchant data, consumer data, offer data, reward data, loyalty data, and the like. The transaction data may be captured by the payment network 110, which may be configured to process payment transactions using conventional methods and systems. The payment network 110 may capture the data and may then electronically transmit the data to the first anonymization system 102 for double anonymization. In some embodiments, the payment network 110 may electronically transmit the transaction data via payment rails associated therewith. In some cases, the first anonymization system 102 may be part of the payment network 110.
In some cases, the transaction data may be transmitted electronically in transaction messages. A transaction message may be a specially formatted data message formatted according to one or more standards governing the exchange of financial transaction messages, such as the International Organization for Standardization's ISO 8583 or ISO 20022 standards. The transaction message may include a message type indicator indicating the message type, such as an authorization request or an authorization response, and a plurality of data elements, wherein each data element is configured to store transaction data for the related payment transaction. In some embodiments, the transaction message may further include one or more bitmaps, each bitmap configured to indicate the data elements included in the transaction message and the data stored therein.
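A highly simplified sketch of this message structure (a message type indicator, a bitmap of present data elements, and the data elements themselves); this is illustrative only and not a conforming ISO 8583 implementation, and the field numbers shown follow common ISO 8583 usage rather than anything specified here:

```python
from dataclasses import dataclass, field

@dataclass
class TransactionMessage:
    mti: str                                   # e.g. "0100" authorization request
    data_elements: dict = field(default_factory=dict)

    def bitmap(self) -> set:
        # The bitmap indicates which data elements are present.
        return set(self.data_elements)

# Field 2 is commonly the primary account number, field 4 the amount.
msg = TransactionMessage("0100", {2: "5555000011112222", 4: "000000012500"})
print(msg.bitmap())  # {2, 4}
```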
In such embodiments, the first anonymization system 102 may receive transaction data in which the primary account number in each transaction serves as the set identifier and constitutes personally identifiable information. The first anonymization system 102 may hash the primary account number and provide the transaction data with the hashed primary account number to the second anonymization system 104 for double hashing. The double-hashed identifiers and corresponding transaction data may then be sent to the hosting entity 106 for storage. In such a case, the requesting entity 108 (e.g., a financial institution that requires more information about an individual) may provide the primary account number of the transaction account to the first anonymization system 102. The first anonymization system 102 may hash the primary account number and provide the hashed number to the second anonymization system 104, and the second anonymization system 104 may generate and provide the double-hashed identifier to the hosting entity 106 (e.g., through the API) and receive the transaction data in return. In cases where anonymity may be less important, the second anonymization system 104 may provide the transaction data and the double-hashed identifier to the requesting entity 108, and the requesting entity 108 may verify that the double-hashed identifier matches the primary account number (e.g., by performing both hashes itself).
In some embodiments, the first anonymization system 102 and/or the second anonymization system 104 may be configured to use a salt when performing the respective hash. The salt may be a value that is combined with the value being hashed in order to further obfuscate it. For example, if the hashing algorithm or algorithms used by the first anonymization system 102 and the second anonymization system 104 are the same, the first anonymization system 102 would otherwise be able to reproduce the double-hashed identifiers itself, which may compromise compliance with privacy and security regulations. However, the use of a salt by the second anonymization system 104 of which the first anonymization system 102 is unaware may ensure that the first anonymization system 102 cannot recognize the double-hashed identifiers. In such embodiments, a requesting entity 108 that is authorized to match data values with personally identifiable information may be provided with the salts used by the first anonymization system 102 and the second anonymization system 104 so that it can identify the double-hashed identifiers for matching.
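A minimal sketch of salting as described above, assuming the second anonymization system holds a salt that the first system never receives; the salt value and names are assumptions:

```python
import hashlib

# Illustrative sketch: if the second anonymization system combines its own
# salt (unknown to the first system) with the hashed identifier before
# hashing, the first system cannot reproduce or recognize the double-hashed
# identifiers even when both systems use the same algorithm.
SECOND_SYSTEM_SALT = b"held-only-by-the-second-system"

def salted_double_hash(hashed_identifier: str) -> str:
    return hashlib.sha256(
        SECOND_SYSTEM_SALT + hashed_identifier.encode("utf-8")
    ).hexdigest()

# A requesting entity authorized to re-identify data would be given both
# salts so it can recompute the double-hashed identifier itself.
print(salted_double_hash("9f86d081884c7d65..."))
```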
In some embodiments, the hosting entity 106 (or the second anonymization system 104, if applicable) may be configured to aggregate data values received from different anonymization systems and/or data sources. For example, the hosting entity 106 may receive double-anonymized data sets from different second anonymization systems 104, or may receive multiple double-anonymized data sets from a single second anonymization system 104, such as where the first anonymization system 102 receives data from multiple sources. In such embodiments, the hosting entity 106 may aggregate the double-anonymized data sets such that data from multiple data sets may be used in responding to data requests from the requesting entity 108. For example, the hosting entity 106 may receive a double-anonymized data set consisting of transaction data for payment transactions and a double-anonymized data set consisting of demographic characteristics of individuals, where both data sets include a geographic location as one of the data values. In such an example, the requesting entity 108 may obtain data that includes both transaction data and demographic characteristics by matching on geographic location. In some cases, the control layer may apply different rules to different aggregated data sets, such that data values from one data set may be restricted for the requesting entity 108, or data may be aggregated before being combined.
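A minimal sketch of combining two double-anonymized data sets on a shared geographic-location value; the field names and values are hypothetical:

```python
# Illustrative sketch of aggregation by the hosting entity: two
# double-anonymized data sets that share a geographic-location value can
# be joined on that value without any identifiers being present.
transactions = [{"geo": "10001", "avg_spend": 120.0}]
demographics = [{"geo": "10001", "median_age_bucket": "31-40"}]

combined = [
    {**t, **d}
    for t in transactions
    for d in demographics
    if t["geo"] == d["geo"]
]
print(combined)  # [{'geo': '10001', 'avg_spend': 120.0, 'median_age_bucket': '31-40'}]
```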
In some instances where data sets may be aggregated, the hosting entity 106 (or the second anonymization system 104, if applicable) may be configured to establish rules regarding revenue for the data sources whose data sets are aggregated. For example, in the example above, the data source of the transaction data may be provided a different revenue share than the data source of the demographic data. In such cases, the revenue share may be determined according to agreements with the data sources, the number of data sets contributed, the value of the contributed data sets, the access rate of the contributed data sets, and the like. For example, if the transaction data is accessed far more often than the demographic data, the data source of the transaction data may receive a higher revenue share.
The methods and systems discussed herein use double anonymization to provide a higher level of privacy and security in the storage and identification of data than is provided by conventional systems. By using two separate and distinct entities to perform separate anonymization processes, and then storing the data with a third separate and distinct entity, it is ensured that neither the second anonymization system 104 nor the hosting entity 106 possesses, or is able to match, any data with personally identifiable information. In addition, the anonymization process used by the first anonymization system 102 ensures that the data it obtains, and may retain, cannot be matched with personally identifiable information, such that, once the data has been received and anonymized, it cannot be matched to personally identifiable data except by an authorized third party (e.g., the requesting entity 108).
Processing server
Fig. 2 illustrates an embodiment of a processing server 200 in the system 100. It will be apparent to those skilled in the relevant art that the embodiment of processing server 200 shown in FIG. 2 is provided by way of illustration only and may not be exhaustive of all possible configurations of processing server 200 suitable for performing the functions described herein. For example, the computer system 600 shown in fig. 6 and discussed in more detail below may be a suitable configuration of the processing server 200. The processing server 200 may be included in the first anonymization system 102 and/or the second anonymization system 104, and configured to perform functions associated with the anonymization systems. In some cases, the first anonymization system 102 and/or the second anonymization system 104 may include multiple processing servers 200, which in some cases may be separated into separate and distinct computing environments to provide greater privacy and security through each entity's respective anonymization process.
The processing server 200 may include a receiving device 202. Receiving device 202 may be configured to receive data over one or more networks via one or more network protocols. In some instances, the receiving device 202 may be configured to receive data from the first anonymization system 102, the second anonymization system 104, the hosting entity 106, the requesting entity 108, the payment network 110, the other processing servers 200, and other systems and entities via one or more communication methods, such as a cellular communication network, radio frequency, internet, local area network, and the like. In some embodiments, the receiving device 202 may include multiple devices, such as different receiving devices for receiving data over different networks, such as a first receiving device for receiving data over a local area network and a second receiving device for receiving data over the internet. The receiving device 202 may receive the electronically transmitted data signal, where the data may be superimposed or encoded on the data signal and decoded, parsed, read, or otherwise obtained by the receiving device 202 receiving the data signal. In some cases, the receiving device 202 may include a parsing module to parse the received data signal to obtain data superimposed thereon. For example, the receiving device 202 may include a parser program configured to receive data signals and convert the received data signals into usable input for execution by a processing device to implement the functions of the methods and systems described herein.
The receiving device 202 may be configured to receive a data signal electronically transmitted by a data providing entity (e.g., the payment network 110), which may be overlaid or otherwise encoded with a plurality of data sets, wherein each data set includes at least a set identifier, which may contain or be accompanied by information that may identify an individual, and an additional data value. The receiving device 202 may also be configured to receive data signals electronically transmitted by other processing servers 200, which other processing servers 200 may be part of separate and distinct computing systems (e.g., the first anonymizing system 102 for processing server 200, which processing server 200 is part of the second anonymizing system 104), which data signals may be superimposed with or otherwise encoded with an anonymous data set, which may not include any personally identifiable information, and may include a hash identifier instead of a set identifier. The receiving device 202 may also be configured to receive a data signal electronically transmitted by the requesting entity 108, which may be superimposed with or otherwise encoded with a data request, which may include a set identifier or a hash identifier. The receiving device 202 may also be further configured to receive a data signal electronically transmitted by the hosting entity 106, which may be superimposed or otherwise encoded with a data value and a corresponding double hash identifier.
The processing server 200 may also include a communication module 204. The communication module 204 may be configured to transmit data between modules, engines, databases, memories, and other components of the processing server 200 for performing the functions discussed herein. The communication module 204 may include one or more communication types and utilize various communication methods for communication within the computing device. For example, the communication module 204 may include a bus, contact pin connectors, wires, and the like. In some embodiments, the communication module 204 may also be configured to communicate between internal components of the processing server 200 and external components of the processing server 200, such as externally connected databases, display devices, input devices, and the like. The processing server 200 may also include a processing device. The processing device may be configured to perform the functions of the processing server 200 discussed herein, as will be apparent to one of ordinary skill in the relevant art, such as a processor configured to execute a smart contract, such as by executing an executable script associated therewith. In some embodiments, the processing device may include and/or be comprised of multiple engines and/or modules specifically configured to perform one or more functions of the processing device, such as the query module 208, the hashing module 210, the generation module 212, and the like. As used herein, the term "module" may be software or hardware specifically programmed to receive an input, perform one or more processes using the input, and provide an output. The inputs, outputs, and processes performed by the various modules will be apparent to those skilled in the art based on the present disclosure.
The processing server 200 may also include a memory 206. The memory 206 may be configured to store data for use by the processing server 200 in performing the functions discussed herein, such as public and private keys, symmetric keys, and so forth. The memory 206 may be configured to store data using suitable data formatting methods and schemes, and may be any suitable type of memory such as read-only memory, random access memory, or the like. Memory 206 may include program code such as encryption keys and algorithms, communication protocols and standards, data formatting standards and protocols, modules of processing devices and applications, and other data that processing server 200 may be adapted to use in performing the functions disclosed herein, as will be apparent to those having skill in the relevant arts. In some embodiments, the memory 206 may be comprised of or may include a relational database that is stored, identified, modified, updated using a structured query language, accesses a structured data set stored therein, and the like.
The processing server 200 may include a query module 208. The query module 208 may be configured to perform queries on the database to identify and perform other actions related to the information. The query module 208 may receive one or more data values or query strings and may perform the query strings on the indicated database (e.g., the memory 206) based thereon to identify, modify, insert, update, etc., the information stored therein. The query module 208 may output the identified information to the appropriate engine or module of the processing server 200 as needed. The query module 208 may execute a query, e.g., on the memory 206, to identify a hashing algorithm and, if applicable, a salt value for a hash set identifier or hash identifier as part of the processing of the first anonymization system 102 and the second anonymization system 104.
The processing server 200 may also include a hashing module 210. The hashing module 210 may be configured to hash data for the processing server 200 to generate hash values. The hashing module 210 may receive data to be hashed as input, may generate a hash value by applying one or more hashing algorithms thereto, and may output the resulting hash value to another module or engine of the processing server 200. In some embodiments, the input may include the one or more hashing algorithms or indications thereof. In other embodiments, the hashing module 210 may be configured to identify the hashing algorithm to be used (e.g., in the memory 206 of the processing server 200). For example, the hashing module 210 may be configured to generate a hashed identifier by applying one or more hashing algorithms to the set identifier (e.g., and a salt, if applicable), and to generate a double-hashed identifier by applying one or more hashing algorithms to the hashed identifier (e.g., and a salt, if applicable).
The processing server 200 may also include a generation module 212. The generation module 212 may be configured to generate data for use in the processing server 200 to perform the functions discussed herein. The generation module 212 may receive instructions as input, may generate data as indicated, and may output the generated data to another module or engine of the processing server 200. In some cases, the generation module 212 may be configured to use rules, algorithms, criteria, or other data in generating data according to a request. In this case, the data may be included in the input provided to the generation module 212 or identified by the generation module 212, for example by instructing the query module 208 to query the memory 206 for the data. The generation module 212 may be configured to, for example, generate a data message including a hashed identifier, a double hashed identifier, and/or a data value for electronic transmission to other computing systems and entities in the system 100.
The processing server 200 may also include a sending device 214. The transmitting device 214 may be configured to transmit data over one or more networks via one or more network protocols. In some cases, the sending device 214 may be configured to send data to the first anonymization system 102, the second anonymization system 104, the hosting entity 106, the requesting entity 108, the payment network 110, the other processing servers 200, and other entities via one or more communication methods, such as a cellular communication network, radio frequency, internet, local area network, and so forth. In some embodiments, the sending device 214 may include multiple devices, e.g., different sending devices for sending data over different networks, such as a first sending device for sending data over a local area network and a second sending device for sending data over the internet. The sending device 214 may electronically send a data signal superimposed with data that may be interpreted by the receiving computing device. In some cases, the transmitting device 214 may include one or more modules for superimposing, encoding, or otherwise formatting data into a data signal suitable for transmission.
The transmitting device 214 may be configured to electronically transmit data signals, which may be superimposed or otherwise encoded with data values and the corresponding hashed identifiers, to other processing servers 200 in separate and distinct computing systems. The transmitting device 214 may also be configured to electronically transmit data signals to the hosting entity 106, which may be superimposed or otherwise encoded with the double-hashed identifiers and the corresponding data values, or with a data request including the double-hashed identifiers, and which may be sent through an API hosted by or otherwise associated with the hosting entity 106. The transmitting device 214 may also be configured to electronically transmit data signals to the requesting entity 108, such as in response to a received data request, superimposed or otherwise encoded with the double-hashed identifiers and the corresponding data values.
Process for data double anonymization and hosting
Fig. 3 illustrates a process for the double anonymization of transaction data using the separate and distinct first anonymization system 102 and second anonymization system 104, and the hosting of the resulting data by the separate and distinct hosting entity 106.
In step 302, the receiving device 202 of the first anonymization system 102 receives a plurality of transaction data sets from the payment network 110, wherein each transaction data set includes data related to an electronic payment transaction, including at least a primary account number serving as the set identifier and other transaction data. In step 304, the hashing module 210 of the first anonymization system 102 performs a first anonymization process. The first anonymization process includes de-identifying personally identifiable information and hashing the set identifier. In the example above, hashing the set identifier may accomplish the de-identification of the personally identifiable information. The hashing module 210 may apply one or more hashing algorithms to the set identifier (e.g., and a salt, if applicable) for each transaction data set. In some cases, the first anonymization process may include discarding the set identifier.
In step 306, the transmitting device 214 of the first anonymization system 102 may electronically transmit to the second anonymization system 104 an anonymized transaction data set that does not include personally identifiable information and that has a hashed identifier instead of the set identifier. In some embodiments, step 306 may include discarding the transaction data set or the transaction data included therein. In step 308, the receiving device 202 of the second anonymization system 104 receives the anonymized transaction data set from the first anonymization system 102. In step 310, the hashing module 210 of the second anonymization system 104 may perform a second anonymization process. The second anonymization process includes replacing the hashed identifiers with double-hashed identifiers that are generated by applying, for each anonymous transaction data set, one or more hashing algorithms to the hashed identifiers (e.g., and salt, if applicable) included in the anonymous transaction data set.
In step 312, the sending device 214 of the second anonymization system 104 may electronically send each dual-anonymized transaction data set to the hosting entity 106 via a suitable communication network and method. In some embodiments, step 312 may include discarding the anonymized transaction data set or the transaction data included therein. In step 314, the hosting entity 106 may receive a double anonymized transaction data set. In step 316, the hosting entity 106 may host the double anonymized data, where the transaction data is stored with the corresponding double hashed identifier.
Process for using double anonymized data
Fig. 4 illustrates a process of using dual anonymized data in the system 100, where the dual anonymized data is hosted by the hosting entity 106 and accessed via the first anonymization system 102 using the second anonymization system 104.
In step 402, the receiving device 202 of the first anonymization system 102 may receive a data request from the requesting entity 108. The data request may include at least one or more set identifiers, hash identifiers, and/or data values that are requested. For example, the data request may include a set identifier or a hash identifier for which the requested stored data value is intended, or the data request may include one or more data values for which a corresponding double hash identifier is requested. In step 404, the sending device 214 of the first anonymization system 102 may request the relevant data from the second anonymization system 104. Where the data request includes a set identifier, the hashing module 210 of the first anonymization system 102 may first hash the set identifier and replace the set identifier with the hashed identifier.
In step 406, the receiving device 202 of the second anonymization system 104 may receive the request from the first anonymization system 102 along with the hashed identifiers and/or the requested data values. In step 408, the sending device 214 of the second anonymization system 104 may initiate a call through the API of the hosting entity 106 to request the data values corresponding to the double-hashed identifiers (e.g., generated by the hashing module 210, if applicable), or to request the double-hashed identifiers of the specified data values provided in the data request. In step 410, the hosting entity 106 may receive the API call requesting the double-hashed identifiers or data values.
In step 412, the hosting entity 106 may identify the requested data, such as identifying a data value for the provided double hash identifier or identifying a double hash identifier corresponding to the data value requested via the API call. In some embodiments, the control layer of the hosting entity 106 may filter, aggregate, or otherwise process the data values as part of identifying them, e.g., based on rules applicable to the requested data values, corresponding double anonymized data sets, data sources, requesting entities, and so forth. At step 414, the hosting entity 106 may send the identified data (e.g., aggregated, filtered, or otherwise processed) to the second anonymization system 104. In step 416, the receiving device 202 of the second anonymization system 104 may receive the identified data. In step 418, the sending device 214 of the second anonymization system 104 may transmit the identified data to the first anonymization system 102 for receipt by its receiving device 202 in step 420. In step 422, the sending device 214 of the first anonymization system 102 may send the identified data to the requesting entity 108 in response to the received data request. In some embodiments, step 422 may be performed by the second anonymization system 104, and steps 418 and 420 may not be performed.
Exemplary method for data double anonymization
FIG. 5 illustrates a method 500 for double anonymization of data using separate and distinct computing systems, each performing a separate anonymization process on the data.
In step 502, a plurality of first data sets may be received by a receiving device (e.g., receiving device 202) of a first computing system (e.g., first anonymization system 102), wherein each first data set includes at least a set identifier and contains information that may identify an individual. In step 504, each of the first data sets may be anonymized by the first computing system, wherein anonymizing includes at least replacing the set identifiers included in each first data set with hash identifiers generated by applying one or more hashing algorithms to at least the corresponding set identifiers and de-identifying the personally identifiable information.
In step 506, the plurality of anonymized first data sets may be electronically transmitted by a transmitting device (e.g., transmitting device 214) of the first computing system to a receiving device of a second computing system (e.g., second anonymization system 104), wherein the second computing system is distinct and separate from the first computing system and controlled by a separate entity. In step 508, each anonymized first data set may be anonymized by the second computing system, wherein anonymizing includes at least replacing the hashed identifiers with double-hashed identifiers, the double-hashed identifiers being generated by one or more hashing algorithms applied to at least the corresponding hashed identifiers. In step 510, the multiple doubly anonymized first data sets may be electronically transmitted by the transmitting device of the second computing system to a different and independent third-party system (e.g., the hosting entity 106) for storage.
In one embodiment, the third party system may be controlled by another entity, separate from those controlling the first computing system and the second computing system. In some embodiments, the second computing system may not receive or possess any personally identifiable information. In one embodiment, the first computing system may be configured to discard the personally identifiable information after de-identification. In some embodiments, the one or more hashing algorithms used by the first computing system and the second computing system may include at least one different hashing algorithm. In one embodiment, each first data set may include data related to an electronic payment transaction, the set identifier may be a primary account number used in the related electronic payment transaction, and the personally identifiable information may include the primary account number.
In some embodiments, the hashed identifier may be generated by applying the one or more hashing algorithms to the corresponding set identifier and a first salt value. In another embodiment, the double-hashed identifier may be generated by applying the one or more hashing algorithms to the corresponding hashed identifier and a second salt value. In another embodiment, the one or more hashing algorithms used by the first computing system and the second computing system may be the same. In yet another embodiment, the first computing system may not receive or possess the second salt value, and the second computing system may not receive or possess the first salt value.
Computer system architecture
Fig. 6 illustrates a computer system 600 in which embodiments of the disclosure, or portions thereof, may be implemented as computer-readable code. For example, the processing server 200 and/or computing system of fig. 2, including the first anonymization system 102 and/or the second anonymization system 104, can be implemented in the computer system 600 using hardware, software, firmware, non-transitory computer-readable media having instructions stored thereon, or a combination thereof, and can be implemented in one or more computer systems or other processing systems. Hardware, software, or any combination thereof may be implemented as modules and components for implementing the methods of fig. 3-5.
If programmable logic is used, such logic may be implemented on a commercially available processing platform configured by executable software code to become a special purpose computer or a special purpose device (e.g., a programmable logic array, an application specific integrated circuit, etc.). Those skilled in the art will appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multi-core multiprocessor systems, minicomputers, mainframe computers, computers linked or clustered with distributed functions, as well as pervasive or miniature computers that may be embedded into virtually any device. For example, at least one processor device and a memory may be used to implement the above-described embodiments.
The processor units or devices discussed herein may be a single processor, multiple processors, or a combination thereof. A processor device may have one or more processor "cores." The terms "computer program medium," "non-transitory computer-readable medium," and "computer usable medium" discussed herein are generally used to refer to tangible media, such as removable storage unit 618, removable storage unit 622, and a hard disk installed in hard disk drive 612.
Various embodiments of the present disclosure are described in terms of this example computer system 600. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the disclosure using other computer systems and/or computer architectures. Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally or remotely for access by single or multi-processor machines. Additionally, in some embodiments, the order of the operations may be rearranged without departing from the spirit of the disclosed subject matter.
Processor device 604 may be a special purpose or general-purpose processor device specially configured to perform the functions discussed herein. The processor device 604 may be connected to a communication infrastructure 606, such as a bus, message queue, network, multi-core messaging scheme, and so forth. The network may be any network suitable for performing the functions disclosed herein and may include a Local Area Network (LAN), a Wide Area Network (WAN), a wireless network (e.g., WiFi), a mobile communications network, a satellite network, the internet, fiber optics, coaxial cable, infrared, Radio Frequency (RF), or any combination thereof. Other suitable network types and configurations will be apparent to those skilled in the relevant arts. Computer system 600 may also include a main memory 608 (e.g., random access memory, read only memory, etc.), and may also include a secondary memory 610. Secondary memory 610 may include a hard disk drive 612 and a removable storage drive 614, such as a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, etc.
Removable storage drive 614 may read from and/or write to removable storage unit 618 in a well known manner. Removable storage unit 618 may include a removable storage medium that is readable and writable by removable storage drive 614. For example, if the removable storage drive 614 is a floppy disk drive or a universal serial bus port, the removable storage unit 618 may be a floppy disk or a portable flash drive, respectively. In one embodiment, the removable storage unit 618 may be a non-transitory computer-readable recording medium.
In some embodiments, secondary memory 610 may include alternative means for allowing computer programs or other instructions to be loaded into computer system 600, such as a removable storage unit 622 and an interface 620. Examples of such means may include a program cartridge and cartridge interface (e.g., as seen in video game systems), a removable memory chip (e.g., EEPROM, PROM, etc.) and associated socket, and other removable storage units 622 and interfaces 620, as will be apparent to those skilled in the relevant art.
Data stored in computer system 600 (e.g., in main memory 608 and/or secondary memory 610) may be stored on any type of suitable computer-readable medium, such as optical storage (e.g., compact disc, digital versatile disc, blu-ray disc, etc.) or tape storage (e.g., hard disk drive). The data may be configured in any type of suitable database configuration, such as a relational database, Structured Query Language (SQL) database, distributed database, object database, and the like. Suitable configurations and storage types will be apparent to those skilled in the relevant art.
Computer system 600 may also include a communications interface 624. Communication interface 624 may be configured to allow software and data to be transferred between computer system 600 and external devices. Exemplary communication interfaces 624 can include a modem, a network interface (e.g., an ethernet card), a communications port, a PCMCIA slot and card, and the like. Software and data transferred via communications interface 624 may be in the form of signals, which may be electronic, electromagnetic, optical or other signals as will be apparent to those skilled in the relevant art. The signals may propagate via a communication path 626, which communication path 626 may be configured to carry signals and may be implemented using wires, cables, optical fibers, telephone lines, cellular telephone links, radio frequency links, etc.
Computer system 600 may also include a display interface 602. The display interface 602 may be configured to allow data to be transferred between the computer system 600 and an external display 630. Exemplary display interfaces 602 may include High Definition Multimedia Interface (HDMI), Digital Video Interface (DVI), Video Graphics Array (VGA), and the like. Display 630 can be any suitable type of display for displaying data transmitted through display interface 602 of computer system 600, including a Cathode Ray Tube (CRT) display, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, a capacitive touch display, a Thin Film Transistor (TFT) display, and the like.
Computer program medium and computer usable medium may refer to memories, such as main memory 608 and secondary memory 610, which may be memory semiconductors (e.g., DRAMs, etc.). These computer program products may be means for providing software to computer system 600. Computer programs (e.g., computer control logic) may be stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communications interface 624. Such computer programs, when executed, may enable computer system 600 to implement the present methods discussed herein. In particular, the computer programs, when executed, may enable the processor device 604 to implement the methods illustrated in fig. 3-5 as discussed herein. Accordingly, such computer programs may represent controllers of the computer system 600. Where the disclosure is implemented using software, the software may be stored in a computer program product and loaded into computer system 600 using removable storage drive 614, interface 620, hard disk drive 612, or communications interface 624.
Processor device 604 may include one or more modules or engines configured to perform the functions of computer system 600. Each module or engine may be implemented using hardware and, in some cases, may also utilize software, e.g., corresponding to program code and/or programs stored in main memory 608 or secondary memory 610. In such a case, the program code may be compiled by the processor device 604 (e.g., via a compilation module or engine) prior to execution by the hardware of the computer system 600. For example, program code may be source code written in a programming language that is translated into a lower level language, such as assembly language or machine code, for execution by processor device 604 and/or any other hardware component of computer system 600. The compilation process may include the use of lexical analysis, preprocessing, parsing, semantic analysis, syntax-directed translation, code generation, code optimization, and any other technique suitable for translating program code into a lower-level language suitable for controlling the computer system 600 to perform the functions disclosed herein. It will be apparent to those skilled in the relevant art that such a process results in the computer system 600 being a specially configured computer system 600 uniquely programmed to perform the functions described above.
Among other features, techniques consistent with the present disclosure provide systems and methods for dual anonymization of data. While various exemplary embodiments of the disclosed systems and methods have been described above, it should be understood that they have been presented by way of example only, and not limitation. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosure without departing from the breadth or scope.
Claims (20)
1. A method for dual anonymization of data, comprising:
receiving, by a receiving device of a first computing system, a plurality of first data sets, each first data set including at least a set identifier and personally identifiable information;
anonymizing, by the first computing system, each of the first data sets, wherein anonymizing includes at least replacing the set identifier included in each first data set with a hash identifier and de-identifying the personally identifiable information, wherein the hash identifier is generated by applying one or more hash algorithms to at least the corresponding set identifier;
electronically transmitting, by a transmitting device of the first computing system, the plurality of anonymized first data sets to a receiving device of a second computing system, wherein the second computing system is distinct and separate from the first computing system;
anonymizing, by the second computing system, each anonymized first data set, wherein anonymizing includes at least replacing the hash identifier with a double hash identifier, the double hash identifier generated by applying one or more hash algorithms to at least the corresponding hash identifier; and
storing the plurality of doubly anonymized first data sets in the second computing system or a separate and distinct third computing system.
2. The method of claim 1, wherein,
if the doubly anonymized data set is stored in the third computing system, the third computing system is controlled by an entity separate from the first computing system and the second computing system, and
if the double-anonymized data set is stored in the second computing system, the second computing system is controlled by an entity separate from the first computing system.
3. The method of claim 1, wherein the second computing system does not receive or possess information that identifies individuals.
4. The method of claim 1, wherein the first computing system is configured to discard the personally identifiable information after the de-identification.
5. The method of claim 1, wherein the one or more hashing algorithms used by the first computing system and the second computing system include at least one different hashing algorithm.
6. The method of claim 1, wherein the hash identifier is generated by applying one or more hashing algorithms to the corresponding set identifier and a first salt value.
7. The method of claim 6, wherein the double hash identifier is generated by applying one or more hashing algorithms to the corresponding hash identifier and a second salt value.
8. The method of claim 7, wherein the one or more hashing algorithms used by the first computing system and the second computing system are equivalent.
9. The method of claim 7, wherein,
the first computing system does not receive or possess the second salt value, and
the second computing system does not receive or possess the first salt value.
10. The method of claim 1, wherein,
each first data set comprises data relating to an electronic payment transaction,
the set identifier is a primary account number used in a related electronic payment transaction, and
the personally identifiable information includes the primary account number.
11. A system for dual anonymization of data, comprising:
a first computing system; and
a second computing system, wherein
The first computing system is configured to
receive, by a receiving device of the first computing system, a plurality of first data sets, each first data set including at least a set identifier and personally identifiable information,
anonymize each of the first data sets, wherein anonymizing comprises at least replacing the set identifier included in each first data set with a hash identifier and de-identifying the personally identifiable information, wherein the hash identifier is generated by applying one or more hash algorithms to at least the respective set identifier, and
electronically transmit, by a transmitting device of the first computing system, the plurality of anonymized first data sets to a receiving device of the second computing system, wherein the second computing system is distinct and separate from the first computing system, and
the second computing system is configured to anonymize each anonymized first data set, wherein anonymizing includes at least replacing the hash identifier with a double hash identifier, the double hash identifier being generated by applying one or more hash algorithms to at least the corresponding hash identifier, wherein
The plurality of doubly anonymized first data sets are stored in the second computing system or in a different and independent third computing system.
12. The system of claim 11, wherein,
if the doubly anonymized data set is stored in the third computing system, the third computing system is controlled by an entity separate from the first and second computing systems, and
if the double-anonymized data set is stored in the second computing system, the second computing system is controlled by an entity separate from the first computing system.
13. The system of claim 11, wherein the second computing system does not receive or possess information that identifies individuals.
14. The system of claim 11, wherein the first computing system is configured to discard the personally identifiable information after the de-identification.
15. The system of claim 11, wherein the one or more hashing algorithms used by the first computing system and the second computing system include at least one different hashing algorithm.
16. The system of claim 11, wherein the hash identifier is generated by applying one or more hashing algorithms to the corresponding set identifier and a first salt value.
17. The system of claim 16, wherein the double hash identifier is generated by applying one or more hashing algorithms to the corresponding hash identifier and a second salt value.
18. The system of claim 17, wherein the one or more hashing algorithms used by the first computing system and the second computing system are equivalent.
19. The system of claim 17, wherein,
the first computing system does not receive or possess the second salt value, and
the second computing system does not receive or possess the first salt value.
20. The system of claim 11, wherein,
each first data set comprises data relating to an electronic payment transaction,
the set identifier is a primary account number used in a related electronic payment transaction, and
the personally identifiable information includes the primary account number.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US62/397,828 | 2016-09-21 | 2016-09-21 | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK40009099A true HK40009099A (en) | 2020-06-19 |
| HK40009099B HK40009099B (en) | 2023-06-16 |