
CN116092098B - Model training method and terminal equipment

Info

Publication number: CN116092098B
Application number: CN202210978950.6A
Authority: CN (China)
Prior art keywords: text, user, application, information, model
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Other versions: CN116092098A
Other languages: Chinese (zh)
Inventors: 薛姣, 张云柯, 宋新超
Current and original assignee: Honor Device Co Ltd
Events: application filed by Honor Device Co Ltd; priority to CN202210978950.6A; publication of CN116092098A; application granted; publication of CN116092098B

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a model training method and a terminal device. The model training method includes: detecting a user's input operation on a preset input box in a preset application of the terminal device; acquiring input information in response to the input operation; and training a recognition model using the input information as a training sample, where the recognition model is used to recognize the text type of text information, and the service corresponding to the text type is a target service provided to the user by the terminal device based on the input information. In the method provided by the application, information pasted or typed by the user into the preset input box of the preset application is collected in a targeted manner and used as a training sample to train the model. This guarantees the usability of the training samples, improves the recognition accuracy of the model, and makes the model fit the user better, so that services can be provided to the user accurately.

Description

Model training method and terminal equipment
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a model training method and a terminal device.
Background
With the development of terminal technology, terminal devices have gradually shifted from a "people find services" model to a "services find people" model. For example, when a terminal device detects a user operation, it can perceive the user's operation intention and proactively push service options in the form of a service card or a capsule, so that the user can take the next step without having to search for the service.
After detecting a user operation of copying text, the terminal device, in response to the operation, perceives the user's operation intention and pushes service options to the user in the form of a service card or a capsule; the user does not need to open another application to search for the service, so services can be provided conveniently and efficiently. To accurately perceive the intention behind copying the text, the terminal device can recognize the copied text through a model, determine its text type, and then provide services to the user. For example, when the user copies the text "the coastal large road in the south mountain area", the terminal device detects the copy operation and, in response, recognizes the text through the model, determines that its text type is an address, and can provide services such as opening a map, copying, and sharing to the user in the form of a service card.
At present, one method of obtaining a model for recognizing text types is to obtain text annotated with text types from the internet and train a neural network model with the text and its annotated types to obtain the text type recognition model.
In this implementation, a large amount of text content annotated with text types generally needs to be acquired to meet the demands of different users. However, the text content acquired from the internet matches poorly with the text content that users copy on terminal devices when they need a target service; that is, there is a large difference between the two. As a result, when a model trained in this way is used to recognize the type of text copied by a user on a terminal device, the recognition results are inaccurate, which in turn reduces the accuracy of the services that the terminal device provides to the user based on the text type recognized by the model.
Disclosure of Invention
The application provides a model training method and a terminal device, which can improve model recognition accuracy and thus provide services to users accurately.
According to a first aspect, a model training method is provided. The model training method includes: detecting a user's input operation on a preset input box in a preset application of a terminal device; acquiring input information in response to the input operation; and training a recognition model using the input information as a training sample, where the recognition model is used to recognize the text type of text information, and the service corresponding to the text type is a target service provided to the user by the terminal device based on the input information.
The preset application may also be referred to as a preset application program, a specific application, or a specific application program; the application does not limit this. The preset application program includes a plurality of input boxes, among which are the preset input boxes, and the terminal device may detect input operations only on the preset input boxes. The application does not limit the specific numbers of preset application programs and preset input boxes. The input information may be pasted or typed manually by the user, which the application does not limit. A preset input box may allow one or more types of text to be input, which the application also does not limit.
The preset applications may include a taxi-hailing application, a map application, a shopping application, a phone application, a conference application, a messaging application, a search application, a news application, a video application, an express-delivery application, a ticket-purchasing application, a ticket-checking application, and the like.
Through the taxi-hailing, map, and shopping applications, the terminal device can collect address data as training samples. Through the phone, conference, and messaging applications, it can collect number data as training samples. Through the search, news, and video applications, it can collect link data as training samples. Through the express-delivery application, it can collect express tracking numbers as training samples. Through the ticket-purchasing and ticket-checking applications, it can collect ticket schedule data as training samples.
Optionally, if the preset input box allows only one type of text to be input, the terminal device may use the input information together with the text type corresponding to the preset input box as the training sample, so that no additional annotation of the input information is required, which helps speed up model training.
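Because a preset input box that accepts only one text type already determines the label, the collection step can be sketched as a lookup from box identifier to text type. The identifiers and type names below are hypothetical, not taken from the application:

```python
# Hypothetical sketch: each preset input box allows one text type, so the
# box identifier itself supplies the label and no manual annotation is needed.
PRESET_INPUT_BOXES = {
    "map_app.destination_box": "address",
    "phone_app.number_box": "phone_number",
    "express_app.tracking_box": "tracking_number",
    "browser.url_box": "web_address",
}

def collect_training_sample(box_id: str, input_text: str):
    """Turn one detected input operation into a labeled training sample."""
    text_type = PRESET_INPUT_BOXES.get(box_id)
    if text_type is None:
        return None  # not a preset input box; ignore the operation
    return {"text": input_text, "label": text_type}
```

Input into a non-preset box simply returns no sample, matching the text's point that the terminal device may detect operations only on the preset input boxes.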
According to the model training method provided by the application, information pasted or typed by the user into the preset input box of the preset application is collected in a targeted manner and used as a training sample to train the model. This guarantees the usability of the training samples, improves the recognition accuracy of the model, and makes the model fit the user better, so that services can be provided to the user accurately.
With reference to the first aspect, in some implementations of the first aspect, training the recognition model using the input information as a training sample includes sending the input information to a server, and the server trains the recognition model based on the input information annotated with the text type to obtain a trained recognition model.
The terminal device may send the input information to the server, and the server performs model training based on it. The text type of the input information may be annotated manually by a user or automatically using an existing annotation method; the application does not limit this.
The server may use the input information as the input of the recognition model and the annotated text type as its output, train the parameters of the recognition model, and obtain the trained recognition model.
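The application describes a neural recognition model; as a stand-in, the sketch below trains a toy character-bigram scorer on (text, text type) pairs, purely to show the input/output relationship of the server-side training step. The model choice is illustrative only:

```python
from collections import Counter, defaultdict

def bigrams(text):
    """Character bigrams, a crude stand-in for learned text features."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

def train(samples):
    """samples: list of (text, text_type) pairs uploaded by terminal devices."""
    model = defaultdict(Counter)
    for text, text_type in samples:
        model[text_type].update(bigrams(text))
    return model

def predict(model, text):
    """Score each text type by how often its training bigrams appear in text."""
    grams = bigrams(text)
    scores = {t: sum(c[g] for g in grams) for t, c in model.items()}
    return max(scores, key=scores.get)
```

A real deployment would train a neural network as the text describes; the point here is only that the annotated text type serves as the supervised target.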
Optionally, if the terminal device sends both the input information and its text type (i.e., the text type the preset input box allows) to the server, the server may train the recognition model based on the input information and its text type to obtain the trained recognition model.
According to the model training method provided by the application, the terminal device sends the input information to the server, and the server performs model training based on it. Training the recognition model can thus be completed by the server, so the terminal device does not need to train the model itself, which saves the terminal device's computing capacity and reduces its power consumption.
With reference to the first aspect, in some implementations of the first aspect, the sending the input information to the server includes encrypting the input information to obtain encrypted input information, and sending the encrypted input information to the server.
Before sending the input information to the server, the terminal device may encrypt it by adding differential-privacy noise, obtain the encrypted input information, and send the encrypted input information to the server.
According to the model training method provided by the application, sending encrypted input information to the server prevents the input information, and thus the user's privacy, from being leaked, achieving secure transmission.
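The application does not specify the differential-privacy mechanism, so the sketch below uses randomized response, a standard local-DP primitive, applied bit by bit to an encoded input; all names are illustrative:

```python
import math
import random

def randomized_response_bits(bits, eps, rng=random.Random(0)):
    """Report each bit truthfully with probability p = e^eps / (e^eps + 1),
    flipped otherwise. Larger eps means less noise and weaker privacy."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return [b if rng.random() < p else 1 - b for b in bits]
```

With a small eps the server sees heavily noised bits from any single device but can still estimate population statistics, which is the usual motivation for local differential privacy in this setting.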
With reference to the first aspect, in some implementations of the first aspect, the method further includes sending a first request message to the server, the first request message being for requesting a trained recognition model, and receiving the trained recognition model from the server.
The terminal device may request the trained recognition model from the server to update its pre-training recognition model. After sending the input information to the server, the terminal device may send the first request message when an application requirement arises, or it may obtain the recognition model periodically; the application does not limit this.
Optionally, the terminal device may receive the trained recognition model actively issued by the server, that is, the terminal device does not need to send the first request message, which may save signaling overhead.
According to the model training method provided by the application, when an application requirement exists, the first request message is sent to the server to obtain the trained recognition model for text type recognition, giving the terminal device more initiative.
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes detecting a copy operation of the first text by the user, inputting the first text into the trained recognition model in response to the copy operation of the first text by the user, determining a text type of the first text according to output information of the trained recognition model, and determining a target service of the first text according to the text type of the first text.
The terminal device may perform text type recognition based on the trained recognition model. The recognition model is triggered when the terminal device detects a user operation of copying text.
If the terminal device detects a copy operation on the first text, it may, in response to the operation, input the first text into the trained recognition model; the output information of the trained recognition model is the text type of the first text, and a target service can be provided to the user according to that text type. The target service may be displayed in the form of a capsule or a service card, which the application does not limit. The target service may be understood as a service in a service recommendation list.
It can be appreciated that there is a correspondence between text types and services, and the terminal device may determine the service based on the text type.
According to the model training method, in response to a user operation of copying the first text, the first text can be recognized based on the trained recognition model, which improves recognition accuracy and helps provide services to the user accurately.
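The text-type-to-service correspondence described above can be sketched as a simple lookup; the service names and type labels here are illustrative, not taken from the application:

```python
# Illustrative correspondence between text types and recommendable services.
SERVICES_BY_TYPE = {
    "address": ["open in maps", "copy", "share"],
    "phone_number": ["call", "save contact", "copy"],
    "web_address": ["open in browser", "copy"],
    "tracking_number": ["track package", "copy"],
}

def target_services(text_type):
    """Services to show on the service card for a recognized text type;
    fall back to plain copy for unknown types."""
    return SERVICES_BY_TYPE.get(text_type, ["copy"])
```

For the address example in the background section, the recognized type "address" would yield map, copy, and share services on the card.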
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes obtaining a second text in the first picture, where the second text is obtained in response to a triggering operation of the text recognition icon by the user, inputting the second text into the trained recognition model, determining a text type of the second text according to output information of the trained recognition model, and determining a target service of the second text according to the text type of the second text.
In response to the user triggering the character recognition icon, the terminal device can acquire the second text in the first picture and trigger the recognition model to recognize it. If recognition succeeds, one or more text types exist in the second text, and the terminal device can provide target services to the user according to the recognized text types.
Note that the first text and the second text may be the same or different, and the present application is not limited thereto.
The model training method provided by the application can recognize text in a picture through the trained recognition model and provide services to the user according to the recognized text type, which improves the recognition accuracy of the recognition model and helps provide services to the user accurately.
With reference to the first aspect, in some implementations of the first aspect, the trained recognition model includes a standard address rule parsing model and a non-standard address parsing model. Inputting the input information into the trained recognition model includes: inputting the input information into the standard address rule parsing model; and, if the input information does not conform to the rules of the standard address rule parsing model, inputting the input information into the non-standard address parsing model. Determining the text type of the input information according to the output information of the trained recognition model includes determining the text type of the input information according to the output information of the non-standard address parsing model.
The standard address rule parsing model is used to recognize text of a standard address type, for example, "the coastal large road in the south mountain area". The non-standard address parsing model is used to recognize text of a non-standard address type. The terminal device may first input the acquired input information into the standard address rule parsing model and determine whether it conforms to that model's rules.
If the input information does not conform to the rules of the standard address rule parsing model, it is input into the non-standard address parsing model, and the text type of the input information is determined from the output of the non-standard address parsing model.
According to the model training method provided by the application, both the standard address rule parsing model and the non-standard address parsing model recognize addresses; whether the input information is an address can be judged comprehensively based on the output information of both models, which reduces the probability of misjudgment.
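A minimal sketch of the two-stage address check described above, assuming a regular-expression rule stands in for the standard address rule parsing model and a stub function stands in for the non-standard address parsing model (both are illustrative; the application does not specify either):

```python
import re

# Illustrative rule for "standard" addresses containing a common road keyword.
STANDARD_ADDRESS_RULE = re.compile(r".+(district|road|avenue|street)", re.I)

def parse_address(text, nonstandard_model):
    """Try the standard rule first; fall back to the learned model."""
    if STANDARD_ADDRESS_RULE.search(text):
        return "address"              # rule matched: standard address
    return nonstandard_model(text)    # rule failed: defer to the model

def dummy_model(text):
    """Stand-in non-standard address parsing model for demonstration."""
    return "address" if "building" in text.lower() else "not_address"
```

Only text that fails the cheap rule pays the cost of the learned model, which matches the efficiency argument made for the standard rule path below.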
With reference to the first aspect, in some implementations of the first aspect, if the input information conforms to the rules of the standard address rule parsing model, determining the text type of the input information according to the output information of the trained recognition model includes determining the text type of the input information according to the output information of the standard address rule parsing model.
According to the model training method provided by the application, if the input information conforms to the rules of the standard address rule parsing model, no judgment by the non-standard address parsing model is needed, which improves recognition efficiency.
With reference to the first aspect, in some implementations of the first aspect, inputting the input information into the trained recognition model includes: if the text type of the input information is at least one of an address, a web address, or a mailbox, judging whether the input information contains an error; if it does, correcting the input information to obtain corrected input information; and outputting the output information of the trained recognition model, where the output information includes the text type of the input information and the corrected input information.
For example, if the user's input information is www.baidui.com, the terminal device determines that it contains an error and may correct it to obtain the corrected input information www.baidu.com.
The model training method provided by the application can correct text of the address, web address, or mailbox type and output the corrected information, which helps remind the user of incorrect input and can improve the user experience.
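One way such a correction could work is nearest-match lookup by edit distance against a list of known values; the application does not specify the correction algorithm, so this is only an illustrative sketch with a made-up domain list:

```python
# Illustrative list of known-good domains to correct against.
KNOWN_DOMAINS = ["www.baidu.com", "www.qq.com", "www.taobao.com"]

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(url):
    """Return the closest known domain if it is within edit distance 1,
    otherwise leave the input unchanged."""
    best = min(KNOWN_DOMAINS, key=lambda d: edit_distance(url, d))
    return best if edit_distance(url, best) <= 1 else url
```

This reproduces the www.baidui.com example: the spurious "i" is one edit away from www.baidu.com, so the corrected form is returned.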
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes updating a dictionary with the input information as a training sample, to obtain an updated dictionary, where the dictionary includes information input by a user in a history.
The name of the dictionary is not limited by the application. The dictionary includes information historically input by the user; after the terminal device obtains the input information, it can add the input information to the dictionary, that is, update the dictionary to obtain an updated dictionary.
The model training method provided by the application can store the information input by the user in a dictionary form so as to facilitate the subsequent use.
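A minimal sketch of the dictionary update, assuming the dictionary maps text types to the user's historical inputs (this structure is an assumption; the application does not specify one):

```python
def update_dictionary(dictionary, input_text, text_type):
    """Append a new input under its text type, skipping duplicates."""
    entries = dictionary.setdefault(text_type, [])
    if input_text not in entries:
        entries.append(input_text)
    return dictionary
```

Keying by text type makes later lookups (e.g., checking whether copied text was previously input as an address) a simple membership test.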
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes sending the updated dictionary to the server under a preset condition, where the preset condition is used to indicate that the terminal device is idle.
The terminal device may send the updated dictionary to the server when idle (preset condition).
According to the model training method provided by the application, the updated dictionary can be sent to the server for storage when the terminal device is idle, which saves the storage space of the terminal device without increasing its burden.
With reference to the first aspect, in some implementations of the first aspect, the updated dictionary is a first dictionary, and sending the updated dictionary to the server includes encrypting the first dictionary to obtain an encrypted first dictionary, and sending the encrypted first dictionary to the server.
According to the model training method provided by the application, the encrypted first dictionary is sent to the server, so that the leakage of the first dictionary can be prevented, the privacy of a user can be prevented from being leaked, and the purpose of safe transmission is achieved.
With reference to the first aspect, in some implementations of the first aspect, the method further includes sending a second request message to the server, where the second request message is used to request an updated dictionary, and receiving the updated dictionary from the server, where the updated dictionary is determined based on the dictionaries of the plurality of terminal devices.
The server may integrate the dictionaries sent by a plurality of terminal devices and send the integrated dictionary to the terminal device based on the second request message.
According to the model training method provided by the application, the terminal device can obtain historical input information from a plurality of users, which makes it easier to match the information a user inputs and to provide services to the user conveniently.
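The server-side integration step could be sketched as a per-type union of the uploaded dictionaries; this structure is an assumption, not specified by the application:

```python
def merge_dictionaries(dicts):
    """Union the per-text-type entries of dictionaries uploaded by many
    terminal devices, preserving order and dropping duplicates."""
    merged = {}
    for d in dicts:
        for text_type, entries in d.items():
            bucket = merged.setdefault(text_type, [])
            for entry in entries:
                if entry not in bucket:
                    bucket.append(entry)
    return merged
```

A device that later downloads the merged dictionary thus benefits from inputs it has never seen itself, which is the multi-user matching advantage described above.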
With reference to the first aspect, in some implementation manners of the first aspect, the method further includes detecting a copy operation of the third text by the user, and in response to the copy operation of the third text by the user, determining whether the updated dictionary includes the third text, and if the updated dictionary includes the third text, determining a target service of the third text.
The third text may be the same as or different from the first text, and the present application is not limited thereto.
If the updated dictionary includes the third text and the text type of the information in the updated dictionary is known, the terminal device may determine a target service for the third text. The text type of the information in the updated dictionary may be annotated by the server, annotated manually, or recognized by the recognition model; the application does not limit this.
According to the model training method provided by the application, if the text copied by the user is text the user has input before, the historically recommended service can be determined as the target service.
With reference to the first aspect, in some implementations of the first aspect, the method further includes obtaining a fourth text in the second picture, where the fourth text is obtained in response to a triggering operation of the user on the word recognition icon, determining whether the updated dictionary includes the fourth text, and if the updated dictionary includes the fourth text, determining a target service of the fourth text.
The fourth text may be the same as or different from the third text, and the present application is not limited thereto.
In response to the user triggering the character recognition icon, the terminal device can acquire the fourth text in the second picture and trigger the recognition model to recognize it. If recognition succeeds, one or more text types exist in the fourth text, and the terminal device can provide target services to the user according to the recognized text types.
It will be appreciated that not all fourth text is necessarily included in the updated dictionary.
The model training method provided by the application can recognize text in a picture and provide services to the user when the updated dictionary includes that text, which improves the recognition accuracy of the recognition model and helps provide services to the user accurately.
With reference to the first aspect, in some implementations of the first aspect, the preset application includes at least one of a taxi-hailing application, a map application, a shopping application, a phone application, a conference application, a messaging application, a search application, a news application, a video application, an express-delivery application, a ticket-purchasing application, or a ticket-checking application.
With reference to the first aspect, in certain implementations of the first aspect, the text type includes at least one of a web address, a flight number, a mobile phone number, a seat number, an express tracking number, a mailbox, an attraction, a restaurant, a hospital, an office building, a shop, or a bus stop.
In a second aspect, a terminal device is provided, which includes a processing module and an acquisition module. The processing module is used to detect a user's input operation on a preset input box in a preset application of the terminal device; the acquisition module is used to acquire input information in response to the input operation; the processing module is further used to train a recognition model using the input information as a training sample, where the recognition model is used to recognize the text type of text information, and the service corresponding to the text type is a target service provided to the user by the terminal device based on the input information.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a transceiver module. The transceiver module is used to send the input information to a server, and the server trains the recognition model based on the input information annotated with the text type to obtain the trained recognition model.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a transceiver module. The processing module is further used to encrypt the input information to obtain encrypted input information, and the transceiver module is used to send the encrypted input information to the server.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a transceiver module. The transceiver module is used to send a first request message to the server, where the first request message is used to request the trained recognition model, and to receive the trained recognition model from the server.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to detect a copy operation of the first text by the user, input the first text to the trained recognition model in response to the copy operation of the first text by the user, determine a text type of the first text according to output information of the trained recognition model, and determine a target service of the first text according to the text type of the first text.
With reference to the second aspect, in some implementations of the second aspect, the above-mentioned obtaining module is further configured to obtain a second text in the first picture, where the second text is obtained in response to a triggering operation of the user on the word recognition icon, and the processing module is further configured to input the second text into the trained recognition model, determine a text type of the second text according to output information of the trained recognition model, and determine a target service of the second text according to the text type of the second text.
With reference to the second aspect, in some implementations of the second aspect, the trained recognition model includes a standard address rule analysis model and a non-standard address analysis model, and the processing module is further configured to input the input information to the standard address rule analysis model, input the input information to the non-standard address analysis model if the input information does not conform to a rule of the standard address rule analysis model, and determine a text type of the input information according to output information of the non-standard address analysis model.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to determine a text type of the input information according to output information of the standard address rule parsing model if the input information meets a rule of the standard address rule parsing model.
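For illustration only, the two-stage flow of the implementations above (the standard address rule parsing model first, with fallback to the non-standard address analysis model) could be sketched as follows; the rule pattern and the fallback classifier are hypothetical stand-ins, not the models disclosed in the embodiment:

```python
import re

# Illustrative rule for a "standard" address (an assumption; the embodiment
# does not disclose the actual rule set of the standard address rule
# parsing model).
STANDARD_ADDRESS_RULE = re.compile(r".+(District|Road|Boulevard|Street)$")

def non_standard_model(text: str) -> str:
    # Stand-in for the non-standard address analysis model; in practice a
    # trained model would be used here.
    return "address" if "park" in text.lower() else "other"

def recognize(text: str) -> str:
    # Stage 1: input the information to the standard address rule parsing model.
    if STANDARD_ADDRESS_RULE.match(text):
        return "address"
    # Stage 2: the input does not conform to the rule, so input it to the
    # non-standard address analysis model instead.
    return non_standard_model(text)

print(recognize("Nanshan District Coastal Boulevard"))  # address (rule hit)
print(recognize("the small park behind building 3"))    # address (fallback)
```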
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to: if the text type of the input information is at least one of an address, a web address, or a mailbox, determine whether the input information contains error information; if the input information contains error information, correct the input information to obtain corrected input information; and output the output information of the trained recognition model, where the output information includes the text type of the input information and the corrected input information.
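As an illustration of this correction step, the sketch below fixes a few hypothetical input errors before outputting the text type together with the corrected input; the substitution table is assumed, since the embodiment does not specify which corrections are applied:

```python
def correct_input(text: str, text_type: str) -> str:
    """Correct obvious errors in recognized text before output.

    The substitutions below are hypothetical examples (e.g. a Chinese comma
    or full-width dot typed in place of '.'), not the embodiment's rules.
    """
    if text_type in ("web address", "mailbox"):
        text = text.replace(",", ".").replace("。", ".")
        text = text.replace(" ", "")  # drop stray spaces inside a URL/mailbox
    return text

# The model's output pairs the text type with the corrected input.
output = {"type": "web address",
          "text": correct_input("www,sousuo。com", "web address")}
print(output["text"])  # www.sousuo.com
```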
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to update a dictionary using the input information as a training sample, to obtain an updated dictionary, where the dictionary includes information input by a user history.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a transceiver module, and the transceiver module is further configured to send the updated dictionary to the server under a preset condition, where the preset condition is used to indicate that the terminal device is idle.
With reference to the second aspect, in some implementations of the second aspect, the updated dictionary is a first dictionary, and the terminal device further includes a transceiver module. The processing module is also used for encrypting the first dictionary to obtain an encrypted first dictionary, and the receiving-transmitting module is used for transmitting the encrypted first dictionary to the server.
With reference to the second aspect, in some implementations of the second aspect, the terminal device further includes a transceiver module. The transceiver module is used for sending a second request message to the server, wherein the second request message is used for requesting an updated dictionary, and receiving the updated dictionary from the server, and the updated dictionary is determined based on the dictionaries of the plurality of terminal devices.
With reference to the second aspect, in some implementations of the second aspect, the processing module is further configured to detect a copy operation of the third text by the user, determine, in response to the copy operation of the third text by the user, whether the updated dictionary includes the third text, and determine a target service of the third text if the updated dictionary includes the third text.
With reference to the second aspect, in some implementations of the second aspect, the acquiring module is further configured to acquire a fourth text in the second picture, where the fourth text is acquired in response to a triggering operation of the user on the word recognition icon, and the processing module is further configured to determine whether the updated dictionary includes the fourth text, and if the updated dictionary includes the fourth text, determine a target service of the fourth text.
With reference to the second aspect, in some implementations of the second aspect, the preset application includes at least one of a ride-hailing application, a map application, a shopping application, a phone application, a conference application, a messaging application, a search application, a news application, a video application, an express application, a ticket purchase application, or a ticket check application.
With reference to the second aspect, in some implementations of the second aspect, the text type includes at least one of a web address, a flight number, a mobile phone number, a seat number, an express tracking number, a mailbox, an attraction, a restaurant, a hospital, an office building, a shop, or a bus stop.
In a third aspect, the present application provides a terminal device comprising a processor coupled to a memory, operable to execute instructions in the memory to implement a method according to any one of the possible implementations of the first aspect. Optionally, the terminal device further comprises a memory. Optionally, the terminal device further comprises a transceiver, and the processor is coupled to the transceiver.
In a fourth aspect, the present application provides a processor comprising an input circuit, an output circuit, and a processing circuit. The processing circuitry is configured to receive signals via the input circuitry and to transmit signals via the output circuitry such that the processor performs the method of any one of the possible implementations of the first aspect described above.
In a specific implementation process, the processor may be a chip, the input circuit may be an input pin, the output circuit may be an output pin, and the processing circuit may be a transistor, a gate circuit, a flip-flop, various logic circuits, or the like. The input signal received by the input circuit may be, for example but without limitation, received and input by a receiver; the signal output by the output circuit may be, for example but without limitation, output to and transmitted by a transmitter; and the input circuit and the output circuit may be the same circuit, which serves as the input circuit and the output circuit at different times. The embodiment of the present application does not limit the specific implementations of the processor and the various circuits.
In a fifth aspect, the present application provides a processing apparatus comprising a processor and a memory. The processor is configured to read instructions stored in the memory and to receive signals via the receiver and to transmit signals via the transmitter to perform the method of any one of the possible implementations of the first aspect.
Optionally, the processor is one or more and the memory is one or more.
Alternatively, the memory may be integrated with the processor or the memory may be separate from the processor.
In a specific implementation process, the memory may be a non-transient (non-transitory) memory, for example, a Read Only Memory (ROM), which may be integrated on the same chip as the processor, or may be separately disposed on different chips.
It should be understood that a related data interaction process, for example, sending the indication information, may be a process of outputting the indication information from the processor, and receiving the capability information may be a process of the processor receiving the input capability information. Specifically, the data output by the processor may be output to the transmitter, and the input data received by the processor may come from the receiver. The transmitter and the receiver may be collectively referred to as a transceiver.
The processing apparatus in the fifth aspect may be a chip. The processor may be implemented by hardware or by software. When implemented by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when implemented by software, the processor may be a general-purpose processor implemented by reading software code stored in a memory, and the memory may be integrated in the processor, located outside the processor, or exist independently.
In a sixth aspect, the present application provides a computer readable storage medium storing a computer program (which may also be referred to as code, or instructions) which, when run on a computer, causes the computer to perform the method of any one of the possible implementations of the first aspect.
In a seventh aspect, the application provides a computer program product comprising a computer program (which may also be referred to as code, or instructions) which, when executed, causes a computer to perform the method of any one of the possible implementations of the first aspect.
Drawings
FIG. 1 is an interface diagram of a "services find people" mode;
FIG. 2 is an interface diagram of another "services find people" mode;
FIG. 3 is a system framework diagram of a terminal device according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of a method of providing a service according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of another method of providing a service according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an interface for copying text according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an interface for training sample collection according to an embodiment of the present application;
FIG. 8 is a comparison of a training sample before and after encryption according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of a model training method according to an embodiment of the present application;
FIG. 11 is a block diagram of an entity recognition model according to an embodiment of the present application;
FIG. 12 is a schematic block diagram of a terminal device according to an embodiment of the present application;
FIG. 13 is a schematic block diagram of another terminal device according to an embodiment of the present application.
Detailed Description
The technical scheme of the application will be described below with reference to the accompanying drawings.
With the development of terminal technology, terminal devices are gradually shifting from a "people find services" mode to a "services find people" mode. For example, when a terminal device detects an operation of a user, it can perceive the user's operation intention and actively push service options in the form of a service card or a capsule, so that the user can perform the next operation without having to search for the service.
At present, after detecting a user's operation of copying text, the terminal device perceives the user's operation intention in response to the operation and pushes service options for the user in the form of a service card or a capsule; the user does not need to open another application to search for the service, so the service can be provided conveniently and efficiently.
Illustratively, the terminal device may be a mobile phone, and the text may be an address. The user may copy the address in a chat interface, and the mobile phone detects the user's operation of copying the address and provides a service to the user in the form of a pushed capsule. FIG. 1 shows an interface diagram of a "services find people" mode. As shown in interface a in FIG. 1, the user copies the address "Nanshan District Coastal Boulevard" in a chat interface with a fitness coach, and the mobile phone detects the user's operation of copying "Nanshan District Coastal Boulevard" and, in response to the operation, displays interface b in FIG. 1. It should be noted that the chat content between the user and the fitness coach, the user's operation of selecting the address, and the user's triggering operations on the copy control and the select-all control are not the focus of the embodiment of the present application, and the embodiment of the present application does not limit them in any way.

As shown in interface b in FIG. 1, in response to the user's operation of copying "Nanshan District Coastal Boulevard", the mobile phone may display the service in the form of a floating capsule, and the displayed service may be the first item in the service recommendation list, that is, the "navigate to" option. It can be understood that the service recommendation list includes other service options in addition to the "navigate to" option. The floating capsule includes an expansion control 101 to facilitate the user viewing the other service options. The "services find people" mode provided by the mobile phone in the embodiment of the present application may be a function provided by the YOYO suggestion, so the YOYO icon 102 may be displayed on the floating capsule.

The display of the YOYO icon 102 may include an animation effect, and interface c in FIG. 1 may be displayed after the mobile phone detects that the animation of the YOYO icon 102 has finished. As shown in interface c in FIG. 1, the mobile phone displays an icon 103 of the "navigate to" option at the position of the YOYO icon. When the mobile phone detects the user's triggering operation on the expansion control 101, interface d in FIG. 1 may be displayed. As shown in interface d in FIG. 1, the mobile phone displays a preview area, address information, a service recommendation list, and a collapse control 104. The preview area is a route map, which includes a route from the user's current position to "Nanshan District Coastal Boulevard". The address information includes the address, an address picture, a driving time to the address of 20 minutes (min), and a distance of 14 kilometers (km). The service recommendation list includes a "navigate to" option, an "open in map" option, and a "share" option, each with its corresponding icon. When the mobile phone detects that the user clicks the collapse control 104, the mobile phone displays interface c in FIG. 1.

In interface c in FIG. 1, if the mobile phone does not detect any user operation on the floating capsule within 5 seconds, the floating capsule is no longer displayed, that is, the floating capsule disappears.
The terminal device can also recognize text information in the picture, and recommend services to the user according to the text information.
Illustratively, the terminal device may be a mobile phone, and the text may be an address. The mobile phone may recognize address information in a photograph and add an underline under the address information. When the mobile phone detects the user's triggering operation on the underline, the mobile phone provides a service to the user in the form of a service card. FIG. 2 shows an interface diagram of a "services find people" mode. As shown in interface a in FIG. 2, the mobile phone displays a picture of a map in the gallery application; the shooting date of the picture is June 30, 2021, and the mobile phone may display a share option, a favorite option, an edit option, a delete option, and a more option for the user to operate on the picture. The mobile phone may also display a character recognition icon 201, and when the mobile phone detects the user's triggering operation on the character recognition icon 201, interface b in FIG. 2 may be displayed. As shown in interface b in FIG. 2, the mobile phone may circle the characters in the picture, and the characters may include Line 8, the Beijing-Hong Kong-Macao Expressway, Beidou Boulevard, Nanshan District, Park A, 4.6 km from Nanshan District, Nanshan District Coastal Boulevard, No. 7 parking lot, No. 6 parking lot, No. 3 parking lot, and the like. The mobile phone may automatically recognize the address information in the characters and add an underline under the address information. When the mobile phone detects the user's triggering operation on the underline, interface c in FIG. 2 may be displayed.

As shown in interface c in FIG. 2, the mobile phone displays a preview area, address information, and a service recommendation list. The preview area is a route map, which includes a route from the user's current position to "Nanshan District Coastal Boulevard". The address information includes the address, a driving time to the address of 20 min, and a distance of 14 km. The service recommendation list includes a "navigate to" option, an "open in map" option, a "copy" option, and a "share" option, each with its corresponding icon.

The "services find people" mode provided by the mobile phone in the embodiment of the present application may be a function provided by the YOYO suggestion, so the YOYO icon 202 may be displayed on the icon corresponding to the address information. The display duration of the YOYO icon 202 may be 1 second, and when the mobile phone detects that the display duration of the YOYO icon 202 has elapsed, the address picture 203 may be displayed.
As shown in interface d in FIG. 2, when the mobile phone detects the user's triggering operation on a blank area, interface b in FIG. 2 may be displayed.
In the examples shown in FIG. 1 and FIG. 2, the text is an address, but the text in the embodiment of the present application is not limited thereto, and may also be information such as a mobile phone number, an express tracking number, a flight number, a shop name, or an attraction.
In order to implement the functions shown in FIG. 1 and FIG. 2, a specific implementation manner is provided in the embodiment of the present application. Before describing the manner provided in the embodiment of the present application, the system framework provided in the embodiment of the present application is described.

FIG. 3 shows a system framework diagram of a terminal device. As shown in FIG. 3, the terminal device includes applications such as a map, an address book, a schedule, a gallery, and a camera. The terminal device further includes a YOYO suggestion, a computing engine, a perception module, and an artificial intelligence module. The terminal device may implement the functions shown in FIG. 1 and FIG. 2 through these modules.

The perception module includes an application fence and a clipboard fence. The application fence may be used to detect the user's operations on applications, for example, the user's triggering operation on the gallery or the camera. The clipboard fence is used to detect the user's operation of copying text; for example, in interface a in FIG. 1, the clipboard fence detects the user's operation of copying the address "Nanshan District Coastal Boulevard".
The artificial intelligence module includes an entity recognition model for recognizing the type of an entity. The entity may be the text mentioned above, and the type of the entity is the type of the text. Text types may include a mobile phone number, an identity card number, an address, an express tracking number, a web address, a flight number, and the like; the embodiment of the present application does not limit the text type. The entity recognition model may also be referred to as a recognition model or as a model for recognizing text types, which is not limited in the embodiment of the present application.
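As an illustration of what the entity recognition model outputs, the sketch below uses hypothetical regular-expression rules as a stand-in; the embodiment's model is trained rather than hand-written, and every pattern here is an assumption:

```python
import re

# Hypothetical patterns for a few text types, for illustration only; the
# actual entity recognition model in the artificial intelligence module
# is a learned model, not this rule table.
PATTERNS = {
    "mobile phone number": re.compile(r"^1\d{10}$"),
    "express tracking number": re.compile(r"^[A-Z]{2}\d{13}$"),
    "web address": re.compile(r"^(https?://)?www\.[\w.-]+\.\w+$"),
    "flight number": re.compile(r"^[A-Z]{2}\d{3,4}$"),
}

def recognize_entity(text: str) -> str:
    for text_type, pattern in PATTERNS.items():
        if pattern.match(text):
            return text_type
    return "unknown"

print(recognize_entity("YT1234567890173"))  # express tracking number
print(recognize_entity("www.sousuo.com"))   # web address
```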
The computing engine includes a processing module and an intent ordering module. The processing module may receive, from the clipboard fence in the perception module, the user's operation of copying text, invoke the entity recognition model in the artificial intelligence module in response to the operation, recognize the copied text, determine the type of the text, and provide a service recommendation list according to the type of the text. For example, in interface a in FIG. 1, the clipboard fence detects the user's operation of copying the address "Nanshan District Coastal Boulevard" and sends the operation to the processing module; the processing module invokes the entity recognition model in the artificial intelligence module to recognize the copied text, determines that the type of the text is an address, and provides the corresponding service for the user according to the address. The intent ordering module may order the services in the service recommendation list. For example, in interface c in FIG. 2 described above, the order of the options in the service recommendation list is determined by the intent ordering module.

The YOYO suggestion includes an interface display module and a third-party query jump module. The interface display module may display the service provided by the computing engine in the form of a floating capsule or a card, for example, the floating capsule in interface b in FIG. 1 or the card in interface c in FIG. 2, and may also provide an interface display service for the map, schedule, and address book applications. The third-party query jump module is used to jump to the interface of the corresponding application when the user triggers a service in the service recommendation list; for example, in interface c in FIG. 2 described above, the third-party query jump module detects the user's triggering operation on the "navigate to" option and jumps to the interface of the map application.
Based on the system framework diagram shown in FIG. 3, the embodiment of the present application describes in detail the specific implementations of the functions shown in FIG. 1 and FIG. 2.
Illustratively, FIG. 4 shows a schematic flow chart of a method 400 of providing a service. The method 400 may be performed by a terminal device, such as a cell phone. The system architecture diagram of the terminal device may be as shown in fig. 3, but the embodiment of the present application is not limited thereto. The method 400 may be used to implement the functionality described above and shown in fig. 1.
As shown in fig. 4, the method 400 may include the steps of:
s401, detecting the text copying operation of a user by the perception module through the cutting board fence.
In the interface a shown in fig. 1, the sensing module detects the operation of copying the text by the user through the clipboard fence, wherein the text is "the coastal large road in the south mountain area".
S402, the perception module sends the user's operation of copying the text to the computing engine, and correspondingly, the computing engine receives the operation.

Alternatively, the perception module may send an instruction to the computing engine, where the instruction indicates that the perception module has detected the user's operation of copying text, and correspondingly, the computing engine receives the instruction.
S403, in response to the operation, the computing engine sends indication information to the artificial intelligence module, where the indication information is used to instruct the artificial intelligence module to recognize the text copied by the user, and correspondingly, the artificial intelligence module receives the indication information.

Based on the operation, the computing engine may send the indication information to the artificial intelligence module through the processing module.
S404, based on the indication information, the artificial intelligence module can recognize the text copied by the user through the entity recognition model.

The entity recognition model may be used to recognize the type of the text copied by the user.
S405, the artificial intelligence module sends the recognition result to the computing engine, and correspondingly, the computing engine receives the recognition result.

The recognition result indicates the type of the text "Nanshan District Coastal Boulevard" copied by the user; here, the recognition result may be an address.
S406, the computing engine determines the corresponding service according to the recognition result.

The computing engine may determine the corresponding service according to the recognition result through the processing module, that is, determine the service corresponding to the address. The services corresponding to the address may include "navigate to", "share", and "open in map".

Different text types may correspond to different services; the correspondence between text types and services may be one-to-one or one-to-many, and the correspondence may be preset, but the embodiment of the present application is not limited thereto.
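The preset one-to-many correspondence between text types and services could, for illustration, be stored as a simple lookup table; the entries below follow the interfaces described above but are otherwise assumed:

```python
# One-to-many mapping from text type to candidate services. The entries
# are examples consistent with the interfaces described above; an actual
# terminal device may preset a different correspondence.
SERVICES_BY_TYPE = {
    "address": ["navigate to", "open in map", "share"],
    "mobile phone number": ["call", "save contact", "share"],
    "web address": ["open in browser", "share"],
}

def services_for(text_type: str) -> list:
    # Unknown text types yield no recommended services.
    return SERVICES_BY_TYPE.get(text_type, [])

print(services_for("address"))  # ['navigate to', 'open in map', 'share']
```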
S407, the computing engine orders the services to obtain the ordered services.

The computing engine may order the services through the intent ordering module to obtain the ordered services. For example, the computing engine orders the services "navigate to", "share", and "open in map" through the intent ordering module, and the ordered services are "navigate to", "open in map", and "share".
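For illustration, the behavior of the intent ordering module can be approximated by sorting against a preset priority table; the priorities below merely reproduce the example orders given in this description and are an assumption about one possible implementation:

```python
# Hypothetical preset priorities (lower ranks first); the embodiment's
# intent ordering module may instead rank by learned user preference.
PRIORITY = {"navigate to": 0, "open in map": 1, "copy": 2, "share": 3}

def rank(services):
    # Services absent from the table sink to the end of the list.
    return sorted(services, key=lambda s: PRIORITY.get(s, 99))

print(rank(["share", "navigate to", "open in map", "copy"]))
# ['navigate to', 'open in map', 'copy', 'share']
```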
S408, the computing engine sends the ordered services to the YOYO suggestion, and correspondingly, the YOYO suggestion receives the ordered services.
S409, the YOYO suggestion displays the ordered services in the form of a floating capsule.
The terminal device may display interface b in FIG. 1 described above through the YOYO suggestion, displaying the ordered services in the form of a floating capsule. It can be understood that, because the area of the floating capsule is limited, when there are many ordered services, the terminal device may display only the top-ranked service option on the floating capsule and display the expansion control to facilitate the user viewing the other services.
By way of example, fig. 5 shows a schematic flow chart of another method 500 of providing a service. The method 500 may be performed by a terminal device, such as a cell phone. The system architecture diagram of the terminal device may be as shown in fig. 3, but the embodiment of the present application is not limited thereto. The method 500 may be used to implement the functionality described above with respect to fig. 2.
As shown in fig. 5, the method 500 may include the steps of:
S501, the artificial intelligence module detects the user's triggering operation on the character recognition option.

For example, the user clicks the character recognition icon 201 on interface a in FIG. 2, and the triggering operation on the character recognition option may be detected.

S502, the artificial intelligence module can recognize the text in the picture through the entity recognition model.
The entity recognition model may be used to recognize the type of the text in the picture. The text in interface b in FIG. 2 includes Line 8, the Beijing-Hong Kong-Macao Expressway, Beidou Boulevard, Nanshan District, Park A, 4.6 km from Nanshan District, Nanshan District Coastal Boulevard, No. 7 parking lot, No. 6 parking lot, No. 3 parking lot, and the like. The artificial intelligence module may recognize "Nanshan District Coastal Boulevard" as an address through the entity recognition model.
S503, the artificial intelligence module sends the recognition result to the gallery application, and correspondingly, the gallery application receives the recognition result.

The recognition result may be that "Nanshan District Coastal Boulevard" is an address.

S504, the gallery application marks the entity in the picture with an underline based on the recognition result.

As shown in interface b in FIG. 2 above, the gallery application may mark "Nanshan District Coastal Boulevard" with an underline.
S505, the computing engine detects the user's triggering operation on the underline.

S506, the computing engine determines the corresponding service according to the type of the entity corresponding to the underline.

The entity corresponding to the underline is "Nanshan District Coastal Boulevard", whose type is an address, and the computing engine may determine the service corresponding to the address. The services corresponding to the address may include "navigate to", "share", "copy", and "open in map".
Different text types may correspond to different services; the correspondence between text types and services may be one-to-one or one-to-many, and the correspondence may be preset, but the embodiment of the present application is not limited thereto.
S507, the computing engine orders the services to obtain the ordered services.

The computing engine may order the services through the intent ordering module to obtain the ordered services. For example, the computing engine orders the services "navigate to", "share", "open in map", and "copy" through the intent ordering module, and the ordered services are "navigate to", "open in map", "copy", and "share".
S508, the computing engine sends the ordered services to the gallery application.
S509, the gallery application assembles the ordered services to obtain service cards.
S510, the gallery application sends the service card to the YOYO suggestion, and correspondingly, the YOYO suggestion receives the service card.
S511, the YOYO suggestion displays the service card.
The terminal device may display interface c in FIG. 2 through the YOYO suggestion, displaying the ordered services in the form of a service card.
Research on the method 400 and the method 500 in the embodiment of the present application shows that, to accurately perceive the user's intention, the recognition accuracy of the entity recognition model needs to be improved, so that an accurate service can be provided for the user according to the recognition result.
At present, one method of obtaining a model for entity recognition is to obtain, from the Internet, texts annotated with text types, and to train a neural network model using the texts and their annotated types, thereby obtaining the model for entity recognition.
However, the text content obtained from the Internet is insufficiently similar to the text content that the user copies when using a target service on the terminal device. For example, the text copied by the user may be "Nanshan District Coastal Boulevard", while the address-related text obtained from the Internet may be "Coastal Boulevard, Nanshan Community, Nanshan Street, Nanshan District, Shenzhen, Guangdong, China". As a result, when a model trained by the foregoing method performs text type recognition on the text copied by the user on the terminal device, the recognition result may be inaccurate, which reduces the accuracy of the service that the terminal device provides to the user based on the text type recognized by the model. In view of this, the embodiments of the present application provide a model training method and a terminal device, which improve the accuracy of model recognition by improving the accuracy of the training samples. In the embodiments of the present application, the content pasted or input by the user in specific applications may be used as training samples, and model recognition accuracy is improved in combination with training rules.
The triggering condition of the entity recognition model is that the terminal device detects the user's operation of copying text, or that the terminal device detects the user's triggering operation on the character recognition icon.

In one possible implementation manner, the triggering condition of the entity recognition model is that the terminal device detects the user's operation of copying text.

For example, as shown in interface a in FIG. 1, the mobile phone detects the user's operation of copying the address "Nanshan District Coastal Boulevard" and invokes the entity recognition model.
As another example, the terminal device may be a mobile phone. FIG. 6 shows a schematic diagram of an interface for copying text. As shown in FIG. 6, the memo interface displays a plurality of pieces of information, including www.sousuo.com, 320, 005X, YT 1234567890173, New Street, and Starbucks. The user selects and copies "YT 1234567890173"; the mobile phone detects the user's operation of copying "YT 1234567890173" and may invoke the entity recognition model.
In another possible implementation manner, the triggering condition of the entity recognition model is that the terminal device detects the user's triggering operation on the character recognition icon.

Illustratively, as shown in interface a in FIG. 2, the mobile phone detects the user's triggering operation on the character recognition icon and invokes the entity recognition model.
In the embodiment of the present application, when the terminal device detects that the user copies text or triggers text recognition, it is presumed that the user needs to find the service corresponding to the text. If the terminal device does not provide the service for the user, the user needs to paste the copied text into another application to find the service, or directly input the text in another application to find the service. Therefore, the terminal device may use the text pasted or input by the user in applications as training samples to train the entity recognition model. When the terminal device detects that the text copied by the user is text that the user has previously pasted or input in an application, the corresponding service is provided for the user.
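As an illustrative sketch of this idea, a dictionary of texts previously pasted or input by the user can be consulted before falling back to the entity recognition model; the dictionary structure and its entries are assumptions, not the embodiment's storage format:

```python
# A user dictionary of texts previously pasted or typed into preset
# applications (collected only with user authorization). The mapping of
# text to recorded type is an illustrative assumption.
user_dictionary = {
    "Nanshan District Coastal Boulevard": "address",
    "YT1234567890173": "express tracking number",
}

def on_copy(text: str):
    """If the copied text is in the dictionary, its recorded type can be
    used directly to determine the target service; otherwise the caller
    falls back to the entity recognition model."""
    return user_dictionary.get(text)

print(on_copy("YT1234567890173"))  # express tracking number
```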
The terminal device provided by the embodiment of the application can, with the user's authorization, collect the content pasted or input by the user in a preset application program and use that content as training samples to train the entity recognition model, so as to improve the recognition accuracy of the entity recognition model.
The preset application may include a car-taking application, a map application, a shopping application, a phone application, a conference application, an information application, a search application, a news application, a video application, an express application, a ticket-purchasing application, a ticket-checking application, and the like.
The terminal device can collect address-class data as training samples through the car-taking, map, and shopping applications; number-class data through the phone, conference, and information applications; link-class data through the search, news, and video applications; express tracking numbers through the express application; and shift data of tickets through the ticket-purchasing and ticket-checking applications.
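The mapping described above, from preset application categories to the class of entity data collected from them, can be sketched as follows. This is a hedged illustration only: the category names and entity labels are assumptions for the sketch and do not appear in the original document.

```python
from typing import Optional

# Illustrative mapping of preset app categories to the entity label
# attached to text collected from them (all names are assumptions).
APP_CATEGORY_TO_LABEL = {
    "car_taking": "address",
    "map": "address",
    "shopping": "address",
    "phone": "number",
    "conference": "number",
    "information": "number",
    "search": "link",
    "news": "link",
    "video": "link",
    "express": "tracking_number",
    "ticket_purchasing": "shift_number",
    "ticket_checking": "shift_number",
}

def label_for_app(category: str) -> Optional[str]:
    """Return the entity label for text collected from this app category."""
    return APP_CATEGORY_TO_LABEL.get(category)
```

A sample collected from a map application would thus be labeled as an address, while an unrecognized category yields no label and is not collected.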
The terminal device may be a mobile phone, for example. Fig. 7 shows a schematic interface diagram of training sample collection. As shown in interface a in fig. 7, the user inputs or pastes the website www.sousuo.com in the interface of the search application. The mobile phone detects the user's input or paste operation in the input box of the search application interface and can collect the content of the input box; that is, the mobile phone can collect the website www.sousuo.com as a training sample.
As shown in interface b of fig. 7, the user inputs or pastes an address in Hunan Province into the interface of the map application and searches. The mobile phone detects the user's input or paste operation in the input box of the map application interface and can collect the content of the input box; that is, the mobile phone can collect the address as a training sample. It should be noted that the search result displayed in the interface by the mobile phone is not the focus of the embodiment of the present application, and the search result is not limited.
As shown in interface c in fig. 7, the user inputs or pastes the flight number MU3125 in the interface of the ticket-checking application, fills in the date 22 June 2022, and can query the flight through the flight query control. The mobile phone detects the user's input or paste operation in the flight number input box of the ticket-checking application interface and can collect the content of that input box; that is, the mobile phone can collect the flight number MU3125 as a training sample.
As shown in interface d in fig. 7, the user inputs or pastes the recipient's mobile phone number 177 in the interface of the information application, and inputs the message "hello, I am XX" in the message input box. The mobile phone detects the user's input or paste operation in the recipient input box of the information application interface and can collect its content; that is, the mobile phone can collect the recipient's mobile phone number 177 as a training sample.
After the terminal device collects the training samples, it can transmit them to the server, and the server can train the entity recognition model based on them. In this way, the terminal device only needs to collect data and does not need to train the entity recognition model itself, which saves both the computing capacity and the power consumption of the terminal device. In addition, because information input or pasted by the user belongs to the user's privacy, the terminal device can encrypt the training samples after collecting them and transmit the encrypted training samples to the server, thereby protecting the user's privacy.
Illustratively, fig. 8 shows a comparison of training samples before and after encryption. As shown in fig. 8, the terminal device may perform operations such as data acquisition and data reporting on the data. The server may perform operations such as data integration, data access, data storage, data calculation, and data application on the data.
If the terminal device does not encrypt the training samples, it may encapsulate, count, and report them to the server after obtaining them. After receiving the training samples, the server integrates them, collects them through Flume, transmits them through an operational data store (operational data store, ODS) to a data warehouse (data warehouse, DW), and further transmits them to the application data service (application data service, ADS) layer to provide data for artificial intelligence (artificial intelligence, AI) learning in the data application, that is, to provide training samples for the entity recognition model. Flume is a highly available, highly reliable, distributed system for collecting, aggregating, and transmitting massive logs. The ODS may be referred to as a data preparation area and may be used for extracting, cleaning, and transmitting data; it can transmit data to the data warehouse (DW). The ADS may be used to save data and provide it for data analysis and data mining.
If the terminal device encrypts the training samples, after obtaining them it can encapsulate them, encrypt them by adding differential privacy noise, and report the encrypted training samples to the server. After receiving the encrypted training samples, the server integrates them by differential extraction, collects them through Flume, pre-processes them through the ODS, decrypts them by noise reduction and regression to recover the training samples, stores them through the ADS, and provides them to the entity recognition model for training.
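The device-side "differential privacy noise adding" and server-side "noise reduction and regression" can be illustrated with a minimal randomized-response sketch. This is an assumption for illustration, not the scheme actually used in the embodiment: each device flips a reported bit with some probability, and the server debiases the aggregate count.

```python
import random

def randomize(bit: int, flip_prob: float = 0.25) -> int:
    """Device side: report the true bit with probability 1 - flip_prob,
    otherwise report the flipped bit (local differential privacy noise)."""
    return bit ^ 1 if random.random() < flip_prob else bit

def debias(reported_ones: int, total: int, flip_prob: float = 0.25) -> float:
    """Server side: estimate the true number of 1-bits from noisy reports.
    E[reported_ones] = t*(1 - p) + (total - t)*p, solved for t."""
    return (reported_ones - flip_prob * total) / (1 - 2 * flip_prob)
```

With a flip probability of 0.25, if 45 of 100 reports are 1, the debiased estimate of the true count is (45 - 25) / 0.5 = 40. No individual report reveals a user's true value, yet the aggregate statistic is recoverable.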
It can be understood that the server may receive data reported by a plurality of terminal devices, and further may integrate the data of the plurality of terminal devices as training samples of the entity recognition model.
The embodiment of the application provides a model training method based on the above training samples. The method provided by the embodiment of the application is applicable to any terminal device. The terminal device may be a mobile phone, a tablet computer, a personal computer (personal computer, PC), a smart screen, an artificial intelligence (artificial intelligence, AI) sound box, a vehicle-mounted device, or a wearable terminal device such as headphones or a smart watch; it may also be a teaching aid (such as a learning machine or an early education machine), a smart toy, a portable robot, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR) device, a virtual reality (virtual reality, VR) device, etc.; it may also be a device with a mobile office function, a device with a smart home function, a device with an audio-video entertainment function, a device supporting intelligent travel, and the like. It should be understood that the embodiment of the present application does not limit the specific technology or the specific device configuration adopted by the terminal device.
In order to better understand the embodiments of the present application, the following describes a hardware structure of the terminal device according to the embodiments of the present application. Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
The terminal device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a sensor module 180, keys 190, an indicator 192, a camera 193, a display 194, and the like.
Alternatively, the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It will be appreciated that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the terminal device. In other embodiments of the application, the terminal device may include more or fewer components than illustrated, or certain components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, where the different processing units may be separate devices or may be integrated in one or more processors. A memory may also be provided in the processor 110 for storing instructions and data. The processor 110 may perform the data acquisition and data reporting operations described above with reference to fig. 8.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge a terminal device, or may be used to transfer data between the terminal device and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other terminal devices, such as AR devices, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. The power management module 141 is used for connecting the charge management module 140 and the processor 110.
The wireless communication function of the terminal device may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Antennas in the terminal device may be used to cover single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G or the like applied on a terminal device. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation.
The wireless communication module 160 may provide solutions for wireless communication applied on the terminal device, including wireless local area networks (wireless local area networks, WLAN) such as a wireless fidelity (wireless fidelity, Wi-Fi) network, Bluetooth (bluetooth, BT), the global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), etc.
The terminal device implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the terminal device may include 1 or N display screens 194, N being a positive integer greater than 1.
The terminal device may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The camera 193 is used to capture still images or video. In some embodiments, the terminal device may include 1 or N cameras 193, N being a positive integer greater than 1.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to realize expansion of the memory capability of the terminal device. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer-executable program code that includes instructions. The internal memory 121 may include a storage program area and a storage data area.
The terminal device may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals. The terminal device can listen to music or to a hands-free call through the speaker 170A. The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals. When the terminal device answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The microphone 170C, also referred to as a "mike" or "sound transmitter", is used to convert sound signals into electrical signals.
The pressure sensor 180A is used to sense a pressure signal, and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. The gyro sensor 180B may be used to determine a motion gesture of the terminal device. The air pressure sensor 180C is used to measure air pressure. The magnetic sensor 180D includes a hall sensor. The acceleration sensor 180E may detect the magnitude of acceleration of the terminal device in various directions (typically three axes). A distance sensor 180F for measuring a distance. The proximity light sensor 180G may include, for example, a Light Emitting Diode (LED) and a light detector, such as a photodiode. The ambient light sensor 180L is used to sense ambient light level. The fingerprint sensor 180H is used to collect a fingerprint. The temperature sensor 180J is for detecting temperature. The touch sensor 180K, also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The bone conduction sensor 180M may acquire a vibration signal.
The keys 190 include a power-on key, a volume key, etc. The keys 190 may be mechanical keys. Or may be a touch key. The terminal device may receive key inputs, generating key signal inputs related to user settings of the terminal device and function control. The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The software system of the terminal device can adopt a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture or a cloud architecture. The layered architecture may adopt an Android (Android) system, an apple (IOS) system, or other operating systems, which is not limited in the embodiment of the present application. Taking an Android system with a layered architecture as an example, a software structure of the terminal device is illustrated.
In order to clearly describe the technical solution of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish between identical or similar items having substantially the same function and effect. It will be appreciated by those of skill in the art that the words "first", "second", and the like do not limit quantity or order of execution, and that items described as "first" and "second" are not necessarily different.
In the present application, the words "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
Furthermore, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" the following items or similar expressions refers to any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or plural.
Fig. 10 shows a schematic flowchart of a model training method 1000. The method 1000 may be applied to a communication system comprising a terminal device and a server. The hardware structure of the terminal device may be as shown in fig. 9, but the embodiment of the present application is not limited thereto.
As shown in fig. 10, the method 1000 may include the steps of:
S1001, the terminal equipment detects triggering operation of a user on an interface of a preset application.
The preset application may also be referred to as a preset application program, which is not limited in this embodiment of the present application.
The preset applications may include a car-taking application, a map application, a shopping application, a phone application, a conference application, an information application, a search application, a news application, a video application, an express application, a ticket-purchasing application, a ticket-checking application, and the like.
The triggering operation of the user on the interface of the preset application may be, for example, the user clicking the icon of the preset application to open it, or the user triggering, on one interface of the preset application, a jump to another interface.
S1002, the terminal equipment responds to triggering operation of a user on an interface of a preset application, and whether the interface of the preset application is the preset interface is judged.
The terminal device may collect only information of a preset input box in a preset interface in a preset application, where the information is content input or pasted by a user and may be used as a training sample.
In response to the user's triggering operation on the interface of the preset application, the terminal device can judge whether that interface is the preset interface. If it is not the preset interface, the terminal device can determine that no information needs to be collected there; filtering out interfaces that do not require collection improves the efficiency of data collection. In this case, the terminal device may continue to detect user operations and, upon detecting another triggering operation on an interface of the preset application, continue to execute step S1001. If the interface of the preset application is the preset interface, the terminal device may detect whether the user inputs information in the preset input box, that is, S1003 may be executed.
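The interface filtering in S1001–S1002 amounts to a whitelist check before any collection happens. A minimal sketch follows; the application and interface identifiers are hypothetical names chosen for illustration.

```python
# Hypothetical whitelist of (preset application, preset interface) pairs
# from which input-box content may be collected.
PRESET_INTERFACES = {
    ("search_app", "search_page"),
    ("map_app", "search_page"),
    ("ticket_app", "flight_dynamics_page"),
}

def should_collect(app: str, interface: str) -> bool:
    """Collect input-box content only on a preset interface of a preset app."""
    return (app, interface) in PRESET_INTERFACES
```

Any interface outside the whitelist is filtered out immediately, so the device never inspects input boxes on interfaces that do not require collection.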
S1003, detecting input operation of a user on a preset input box in a preset interface by the terminal equipment under the condition that the interface of the preset application is the preset interface.
The preset interface may include one or more input boxes, and not all of them need to have their input collected. The developer may designate the input boxes whose information needs to be collected as preset input boxes, and the terminal device determines whether information is input in a preset input box.
In the interface a in fig. 7, the preset application is a search application, the preset interface is a search interface, and the input box for inputting the web address by the user is a preset input box. When the user inputs information in the preset input box, the terminal device may detect an input operation of the preset input box by the user. In the interface b in fig. 7, the preset application is a map application program, the preset interface is a search interface, and the input box of the user input address is a preset input box. When the user inputs information in the preset input box, the terminal device may detect an input operation of the preset input box by the user. In the interface c in fig. 7, the preset interface is a flight dynamic interface, and the preset input box is an input box corresponding to a flight number. When the user inputs information in the input box corresponding to the flight number, the terminal device can detect the input operation of the user on the preset input box. In the interface d in fig. 7, the preset interface is a new information interface, and the preset input box is an input box corresponding to the recipient. When the user inputs information in the input box corresponding to the recipient, the terminal device can detect the input operation of the user on the preset input box.
S1004, the terminal equipment responds to input operation of a user on a preset input box in a preset interface, and input information is obtained.
Illustratively, in the interface a in fig. 7, the input information acquired by the terminal device is www.sousuo.com. In the interface b in fig. 7, the input information acquired by the terminal device is the mountain co-channel of the mountain in the city of hunan. In the interface c in fig. 7, the input information acquired by the terminal device is MU3125. In the d interface in fig. 7, the input information acquired by the terminal device is 177×1269.
After the terminal device acquires the input information, the input information may be input to the entity recognition model, or the input information may be matched with the dictionary, that is, S1005 and S1014 are performed. Wherein the dictionary includes information input by the user history.
S1005, the terminal equipment inputs the input information into the entity recognition model.
The entity recognition model may be an initial entity recognition model, that is, its parameters are initial parameters and it has not yet been trained. The entity recognition model may be preset in the terminal device or requested by the terminal device from the server, which is not limited in the embodiment of the present application.
If the entity recognition model is preset in the terminal device, it can be used directly once the input information is acquired, without requesting the server, which increases the processing speed. If the terminal device requests the entity recognition model from the server, the terminal device does not need to store the model, which saves memory space.
S1006, the terminal device judges whether the entity recognition model recognizes the input information successfully.
If the entity recognition model is the initial entity recognition model, it does not yet have recognition capability and cannot successfully recognize the type of the input information. If the recognition is unsuccessful, the input information can be used as a training sample to train the initial entity recognition model.
Illustratively, in interface c in fig. 7, the user input information is MU3125. If the entity recognition model can recognize MU3125 as a flight number, the recognition is successful; if it cannot, the recognition is unsuccessful.
If the entity recognition model has recognition capability, it can successfully recognize the entity and obtain the text type of the input information, and can then acquire the service corresponding to the input information, that is, execute S1007.
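The recognize-or-collect decision described above can be sketched as follows. This is a toy illustration under stated assumptions: the stub model, its `recognize` method returning `None` on failure, and the pending-sample list are all hypothetical stand-ins for the real model and reporting pipeline.

```python
from typing import List, Optional

class StubModel:
    """Stand-in for the entity recognition model; knows only flight numbers."""
    def recognize(self, text: str) -> Optional[str]:
        return "flight_number" if text.startswith("MU") else None

def handle_input(text: str, model: StubModel, pending: List[str]) -> Optional[str]:
    """Sketch of S1006-S1008: recognize if possible, else keep as a sample."""
    entity_type = model.recognize(text)
    if entity_type is not None:
        return entity_type   # recognition succeeded: acquire the service
    pending.append(text)     # recognition failed: queue as a training sample
    return None
```

When recognition succeeds, the text type is returned and a service can be offered; when it fails, the text is queued so it can later be encrypted and reported to the server.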
S1007, if the entity recognition model recognizes successfully, the terminal device can acquire the service corresponding to the input information.
If the entity recognition model recognizes successfully, the terminal device can determine the corresponding service according to the input information and display the service in the form of a floating capsule or a service card.
S1008, if the entity recognition model does not recognize successfully, the terminal device encrypts the input information to obtain the encrypted input information.
The terminal equipment can encrypt the input information in a differential privacy noise adding mode to obtain the encrypted input information.
S1009, the terminal device transmits the encrypted input information to the server, and correspondingly, the server receives the encrypted input information.
After obtaining the encrypted input information, the terminal device may immediately transmit the encrypted input information to the server, or may periodically transmit the encrypted input information to the server.
If the terminal device immediately transmits the encrypted input information to the server, the server can update the entity recognition model in real time, which helps to increase the training speed of the model. If the terminal device transmits the encrypted input information periodically, a large amount of information can be transmitted at one time, which saves signaling overhead.
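The immediate-versus-periodic reporting trade-off can be sketched with a small buffering reporter. The class name, batch threshold, and the idea of recording "sent" batches are illustrative assumptions, not part of the original scheme.

```python
from typing import Any, List

class Reporter:
    """Sketch: buffer encrypted samples and send them immediately or in batches."""
    def __init__(self, batch_size: int = 3):
        self.buffer: List[Any] = []
        self.sent: List[List[Any]] = []   # each entry is one transmission
        self.batch_size = batch_size

    def report(self, item: Any, immediate: bool = False) -> None:
        self.buffer.append(item)
        if immediate or len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            self.sent.append(list(self.buffer))  # one signaling exchange
            self.buffer.clear()
```

Immediate reporting produces one transmission per sample (real-time updates, more signaling); batched reporting produces fewer, larger transmissions (less signaling, delayed updates).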
S1010, the server decrypts the encrypted input information to obtain the input information.
After receiving the encrypted input information, the server can decrypt the encrypted input information in a noise reduction and regression mode to obtain the input information.
S1011, the server trains the entity recognition model according to the input information and the corresponding type.
The type corresponding to the input information may be annotated manually or automatically by an existing annotation technology, which is not limited in the embodiment of the present application.
The input information and the corresponding type are used as training samples to train, that is, update, the entity recognition model. Specifically, the server may take the input information as the input of the entity recognition model and the corresponding type as its expected output, so as to update the parameters of the entity recognition model.
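As a toy stand-in for this server-side update (the document does not specify the model architecture, so the "model" below is a deliberately simple prefix-based classifier, not the real network), training on (input, type) pairs might look like:

```python
from typing import Dict, Optional

class ToyEntityModel:
    """Toy prefix-based classifier standing in for the real entity
    recognition network; the update rule is purely illustrative."""
    def __init__(self):
        self.prefix_labels: Dict[str, str] = {}

    def train(self, text: str, label: str) -> None:
        # Stand-in for a parameter update: remember the 2-character prefix.
        self.prefix_labels[text[:2]] = label

    def recognize(self, text: str) -> Optional[str]:
        return self.prefix_labels.get(text[:2])
```

After training on ("MU3125", "flight_number"), the toy model generalizes to other MU-prefixed inputs, mirroring how the real model's accuracy improves as samples accumulate.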
The above-mentioned S1001 to S1011 are processes of training the entity recognition model. The number of training processes is not limited, and it can be understood that the more the number of training processes is, the larger the training sample is, and the higher the recognition accuracy of the entity recognition model is.
The above S1001 to S1011 also cover the process of applying the entity recognition model: the terminal device can continue to use information that the model fails to recognize as training samples to update the model, so that its recognition accuracy keeps improving.
It should be noted that the server may receive information sent by different terminal devices and may train the entity recognition model based on all of this information, thereby improving the training of the entity recognition model.
S1012, the terminal device may send a request message to the server, where the request message is used to request the trained entity identification model, and correspondingly, the server may receive the request message.
The terminal device may periodically send a request message to the server to obtain the latest entity recognition model, i.e. the trained entity recognition model.
S1013, the server sends the trained entity recognition model to the terminal equipment based on the request message.
And the server sends the trained entity recognition model to the terminal equipment based on the request message so as to enable the terminal equipment to acquire the latest entity recognition model.
S1014, the terminal equipment matches the input information with the dictionary.
The dictionary includes information input by the user history, and the terminal device matches the input information with the dictionary, that is, determines whether the information input by the user has been previously input. The service to which the information in the dictionary corresponds is known.
The dictionary may also be referred to as a word list or a hot-fix channel, which is not limited in the embodiment of the present application. The dictionary may be preset in the terminal device, requested by the terminal device from the server, or actively issued by the server to the terminal device, which is not limited in the embodiment of the present application.
If the dictionary is preset in the terminal device, matching can be performed directly once the input information is acquired, without requesting the server, which increases the matching speed. If the terminal device requests the dictionary from the server, the terminal device does not need to store the dictionary, which saves memory space. If the server actively issues the dictionary to the terminal device, the terminal device does not need to request it, which saves signaling overhead.
S1015, the terminal device determines whether the input information is successfully matched with the dictionary.
If the user has not previously input the input information, the match fails and the dictionary may be updated, that is, S1016 is performed. If the user has previously input the input information, the match succeeds, and since the service corresponding to each piece of information in the dictionary is known, the service corresponding to the input information can be acquired, that is, S1007 is performed.
Illustratively, in interface c in fig. 7, the user input information is MU3125; if the dictionary includes MU3125, the matching is successful. If MU3125 is not included in the dictionary, the matching is unsuccessful.
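The dictionary-matching step in S1014 and S1015 can be sketched as a simple lookup. The entries and service names below are illustrative assumptions; the actual dictionary format is not specified in this embodiment:

```python
# Hypothetical dictionary: previously entered text mapped to its known service.
user_dictionary = {
    "MU3125": "flight_status",              # flight number -> flight status card
    "SF1234567890123": "express_tracking",  # express bill number -> tracking card
}

def match_input(input_information, dictionary):
    """Return the known service when the match succeeds (S1007),
    or None when it fails and the dictionary should be updated (S1016)."""
    return dictionary.get(input_information)
```

For the example above, `match_input("MU3125", user_dictionary)` succeeds, while an unseen value returns `None` and would be added to the dictionary in S1016.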
S1016, if the input information is not successfully matched with the dictionary, the terminal device may update the dictionary with the input information to obtain an updated dictionary.
The terminal device may update the dictionary by adding the input information to the dictionary.
S1017, the terminal device may encrypt the updated dictionary to obtain an encrypted dictionary.
The terminal device may encrypt the updated dictionary by adding differential-privacy noise, obtaining the encrypted dictionary. It is understood that the encrypted dictionary refers to the updated dictionary after encryption.
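The embodiment does not specify which differential-privacy mechanism is used. As one common choice, the sketch below applies local differential privacy via randomized response to a bit-vector encoding of the dictionary entries; the encoding itself and the parameter value are assumptions for illustration:

```python
import math
import random

def randomized_response(bit, epsilon=1.0):
    """Report the true bit with probability e^eps / (1 + e^eps),
    otherwise flip it, giving eps-local differential privacy."""
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

def privatize(bits, epsilon=1.0):
    """Noise-add every bit of an encoded dictionary entry before upload."""
    return [randomized_response(b, epsilon) for b in bits]
```

Each uploaded report is individually deniable, yet aggregate statistics over many users remain recoverable by the server.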
S1018, the terminal device transmits the encrypted dictionary to the server, and correspondingly, the server receives the encrypted dictionary.
After obtaining the encrypted dictionary, the terminal device may transmit it to the server immediately or periodically, which is not limited in the embodiments of the present application.
If the terminal device transmits the encrypted dictionary immediately, the server can acquire the updated dictionary in real time, which helps to improve the dictionary update speed. If the terminal device transmits the encrypted dictionary periodically, more dictionary data can be transmitted at one time, which saves signaling overhead. The terminal device may also send the updated dictionary to the server when it is idle (i.e., under a preset condition).
S1019, the server decrypts the encrypted dictionary to obtain the updated dictionary.
After receiving the encrypted dictionary, the server can decrypt it through noise reduction and regression to obtain the updated dictionary. The server can annotate the text type of the newly added information in the dictionary, or identify the newly added information through the entity recognition model to obtain its text type.
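The "noise reduction and regression" step is not detailed in this embodiment. Assuming the clients reported bits through randomized response with parameter eps, the server can recover an unbiased estimate of the true counts by inverting the flip probability — a sketch under that assumption:

```python
import math

def estimate_true_count(noisy_ones, n_reports, epsilon=1.0):
    """Unbiased server-side estimate of how many clients truly held a 1-bit,
    given the count of noisy 1-bits among n_reports randomized responses."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))  # prob. of reporting truthfully
    return (noisy_ones - n_reports * (1.0 - p)) / (2.0 * p - 1.0)
```

For example, with epsilon = 1 and 100 reports of which 73 are 1-bits, the estimate is close to 100 true 1-bits; individual reports remain noisy, but the aggregate is accurate.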
It will be appreciated that the services to which the information in the dictionary corresponds are known.
S1020, the server stores the updated dictionary.
The server may receive dictionaries transmitted from a plurality of terminal devices, that is, the dictionary stored in the server may include information historically input by a plurality of users. When a terminal device sends a request message to the server to acquire the dictionary, the server may send that terminal device the dictionary including the information historically input by the plurality of users.
S1014 to S1020 above describe the process of updating the dictionary. The embodiments of the present application do not limit the number of updates; it can be understood that the more updates are performed, the more information the dictionary contains, and the greater the probability that user input information matches the dictionary successfully.
S1014 to S1020 above may also be viewed as the process of applying the dictionary. The terminal device may add information not yet included in the dictionary to update it, continuously enlarging the dictionary to increase the probability of a successful match, so as to provide the corresponding service for the user.
It should be noted that the terminal device may execute S1005 to S1009 and S1014 to S1018 at the same time, or execute only S1005 to S1009 or only S1014 to S1018, which is not limited in the embodiments of the present application.
It should also be noted that, in the case where S1005 to S1009 and S1014 to S1018 are executed simultaneously, if the entity recognition model can recognize the text type of the input information but the input information fails to match the dictionary, the terminal device may add the input information to the dictionary and save the correspondence between the input information and the text type output by the entity recognition model. The terminal device may also transmit this correspondence to the server.
In the model training method provided by the embodiments of the present application, information entered in a preset input box of a preset interface of a preset application is collected and used as training samples to train the entity recognition model; that is, the real requirements of the user serve as training samples, so the trained entity recognition model fits the user better and has higher recognition accuracy. In the embodiments of the present application, the training samples are collected by the terminal device and the training is performed by the server, which saves the computing capacity and power consumption of the terminal device. The embodiments of the present application may also store the information historically input by the user as a dictionary, which helps memorize the user's usage habits, allows the intention behind the input information to be determined more quickly, and makes it convenient to provide services for the user. In addition, when information is transmitted to the server, encryption is used to protect the user's privacy and prevent it from being leaked.
Optionally, in the above method 1000, the terminal device requesting the trained entity recognition model from the server, so as to recognize the text type of the text locally, is just one possible implementation. In another possible implementation, the terminal device may not request the trained entity recognition model from the server; instead, after performing S1011, the terminal device transmits the encrypted input information to the server. The server decrypts the encrypted input information to obtain the input information, recognizes the text type of the text through the trained entity recognition model, and sends the text type back to the terminal device.
In this implementation, the terminal device does not need to store the trained entity recognition model, which saves memory space.
As an alternative embodiment, the preset input box may only allow information of a specific text type to be input, and the terminal device may use the input information together with the text type corresponding to the preset input box as a training sample.
In this implementation, the input information does not need to be manually labeled, which can improve the training efficiency of the model.
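The auto-labeling scheme above can be sketched as follows; the input-box identifiers and the mapping are assumptions for illustration:

```python
# Hypothetical mapping from single-type input boxes to the text type they accept.
INPUT_BOX_TEXT_TYPE = {
    "flight_search_box": "flight number",
    "tracking_number_box": "express bill number",
}

def make_training_sample(box_id, input_information):
    """Pair the entered text with the text type implied by its input box,
    so no manual annotation is required."""
    text_type = INPUT_BOX_TEXT_TYPE.get(box_id)
    if text_type is None:
        return None  # mixed-type box: the sample would still need labeling
    return (input_information, text_type)
```

A box that accepts several text types yields no auto-labeled sample, which is why this scheme is presented only as an alternative embodiment.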
The entity recognition model can identify the input information (or input text) as a website, an address, a mobile phone number, a flight number, an identity card number, an express bill number, or another type. In one possible implementation, the entity recognition model may include a website recognition model, an address recognition model, a mobile phone number recognition model, a flight number recognition model, an identity card number recognition model, and an express bill number recognition model. The terminal device can collect address data from taxi-hailing, map, and shopping applications as training samples to train the address recognition model. The terminal device can collect number-type data through telephone, conference, and messaging applications as training samples to train the mobile phone number recognition model. The terminal device can collect link-type data through search, news, and video applications as training samples to train the website recognition model. The terminal device can collect express bill numbers through express delivery applications as training samples to train the express bill number recognition model. The terminal device can collect ticket schedule data through ticket purchasing and ticket checking applications as training samples to train the flight number recognition model.
Illustratively, FIG. 11 shows a block diagram of the entity recognition model. As shown in FIG. 11, the entity recognition model may include a regular entity recognition model, a point of interest (point of interest, POI) resolution model, a standard address rule resolution model, a non-standard address resolution model, and an entity error correction model. The regular entity recognition model is used to recognize text of types such as websites, landline numbers, mobile phone numbers, flight numbers, identity card numbers, express bill numbers, and mailboxes. The POI resolution model is used to recognize text such as scenic spot names, restaurants, hospitals, office buildings, shops, and bus stops. The standard address rule resolution model is used to recognize text of the standard address type, for example, "Binhai Avenue, Nanshan District". The non-standard address resolution model is used to recognize text of the non-standard address type. The entity error correction model is used to correct relatively regular recognized text such as addresses, websites, and mailboxes; for example, when the province and district in an address do not match, the text is corrected.
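The regular entity recognition model targets highly regular text types, which can be illustrated with pattern matching. The patterns below are simplified assumptions for illustration, not the rules used by the actual model:

```python
import re

# Simplified illustrative patterns for a few regular entity types.
PATTERNS = {
    "flight number": re.compile(r"^[A-Z]{2}\d{3,4}$"),    # e.g. MU3125
    "mobile phone number": re.compile(r"^1[3-9]\d{9}$"),  # mainland-China style
    "website": re.compile(r"^https?://\S+$"),
    "mailbox": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
}

def recognize_regular_entity(text):
    """Return the first matching text type, or None if no pattern matches."""
    for text_type, pattern in PATTERNS.items():
        if pattern.match(text):
            return text_type
    return None
```

Text that matches none of the patterns would be passed on to the learned models (POI and address resolution) rather than rejected outright.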
The terminal device may use the address data collected from taxi-hailing, map, and shopping applications as training samples for the address resolution models, and the place data collected from those applications as training samples for the POI resolution model. The terminal device may use the number-type data collected through telephone, conference, and messaging applications, the link-type data collected through search, news, and video applications, the express bill numbers collected through express delivery applications, and the ticket schedule data collected through ticket purchasing and ticket checking applications as training samples for the regular entity recognition model.
In the application process, the terminal device can identify the input information through the entity recognition model, that is, using the regular entity recognition model, the POI resolution model, the standard address rule resolution model, and the non-standard address resolution model. If the input information is identified as an address, a website, or a mailbox, the terminal device can judge through the entity error correction model whether the input information is correct; if it is incorrect, the input information is corrected, and the corrected input information is fused with the identification result to obtain an output result. If the input information is not an address, a website, or a mailbox, the terminal device can directly fuse the results of the models without correction to obtain an output result. The terminal device identifies the input information through the standard address rule resolution model; if the input information hits a standard address rule, it is identified as an address. If the input information does not hit a standard address rule, the terminal device can identify it through the non-standard address resolution model, which determines whether it is an address. That is, the terminal device may determine that the input information is an address when it hits a standard address rule, or when it misses the standard address rules but the non-standard address resolution model outputs that it is an address.
The terminal device may determine that the input information is not an address when it misses the standard address rules and the non-standard address resolution model outputs that it is not an address.
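The address cascade described above — standard address rules first, the non-standard model as fallback — can be sketched as follows. The two classifier functions are placeholders standing in for the real models:

```python
def hits_standard_address_rule(text):
    """Placeholder for the standard address rule resolution model."""
    return "Avenue" in text or "Road" in text

def nonstandard_address_model_says_address(text):
    """Placeholder for the non-standard address resolution model."""
    return "District" in text

def is_address(text):
    """Input is an address if it hits a standard rule, or misses the rules
    but the non-standard model still classifies it as an address."""
    if hits_standard_address_rule(text):
        return True
    return nonstandard_address_model_says_address(text)
```

Running the cheap rule check first means the learned fallback model only sees the harder, non-standard cases.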
The sequence numbers of the above-mentioned processes do not mean the sequence of execution sequence, and the execution sequence of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
The method provided by the embodiment of the present application is described in detail above with reference to fig. 1 to 11, and the terminal device provided by the embodiment of the present application will be described in detail below with reference to fig. 12 and 13.
Fig. 12 shows a schematic block diagram of a terminal device 1200 provided by an embodiment of the present application. The terminal device 1200 includes a processing module 1210 and an acquisition module 1220. The terminal device 1200 may be configured to perform the various methods described above. For example, the terminal device 1200 may perform the method 1000 described above.
It should be understood that the terminal device 1200 herein is embodied in the form of functional modules. The term module herein may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (e.g., a shared, dedicated, or group processor, etc.) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that support the described functionality. In an alternative example, it will be understood by those skilled in the art that the terminal device 1200 may be specifically a terminal device in the foregoing method embodiment, or the functions of the terminal device in the foregoing method embodiment may be integrated in the terminal device 1200, and the terminal device 1200 may be used to execute each flow and/or step corresponding to the terminal device in the foregoing method embodiment, which is not repeated herein for avoiding repetition.
The terminal device 1200 has a function of implementing the corresponding steps executed by the terminal device in the method embodiment, where the function may be implemented by hardware, or may be implemented by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above.
In an embodiment of the present application, the terminal device 1200 in fig. 12 may also be a chip or a system on chip (SoC), for example.
Fig. 13 is a schematic block diagram of another terminal device 1300 provided by an embodiment of the present application. The terminal device 1300 includes a processor 1310, a transceiver 1320, and a memory 1330. The processor 1310, the transceiver 1320, and the memory 1330 communicate with each other through an internal connection path; the memory 1330 is configured to store instructions, and the processor 1310 is configured to execute the instructions stored in the memory 1330 to control the transceiver 1320 to transmit and/or receive signals.
It should be understood that the terminal device 1300 may be specifically a terminal device in the above method embodiment, or the functions of the terminal device in the above method embodiment may be integrated in the terminal device 1300, and the terminal device 1300 may be used to perform the steps and/or flows corresponding to the terminal device in the above method embodiment. The memory 1330 may optionally include read-only memory and random access memory, and provide instructions and data to the processor. A portion of the memory may also include non-volatile random access memory. For example, the memory may also store information of the device type. The processor 1310 may be configured to execute instructions stored in the memory, and when the processor executes the instructions, the processor may perform steps and/or flows corresponding to the terminal device in the foregoing method embodiments.
It is to be appreciated that in embodiments of the present application, the processor 1310 may be a central processing unit (central processing unit, CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in a hardware processor for execution, or in a combination of hardware and software modules in the processor for execution. The software modules may be located in a random access memory, flash memory, read only memory, programmable read only memory, or electrically erasable programmable memory, registers, etc. as well known in the art. The storage medium is located in a memory, and the processor executes instructions in the memory to perform the steps of the method described above in conjunction with its hardware. To avoid repetition, a detailed description is not provided herein.
The application also provides a computer readable storage medium for storing a computer program for implementing the method corresponding to the terminal device in the method embodiment.
The application also provides a chip system which is used for supporting the terminal equipment to realize the functions shown in the embodiment of the application in the embodiment of the method.
The present application also provides a computer program product comprising a computer program (which may also be referred to as code, or instructions) which, when run on a computer, is adapted to perform the method corresponding to the terminal device shown in the above-mentioned method embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and module may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a U disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
The foregoing is merely a specific implementation of the present application, but the scope of the embodiments of the present application is not limited thereto, and any person skilled in the art may easily think about changes or substitutions within the technical scope of the embodiments of the present application, and all changes and substitutions are included in the scope of the embodiments of the present application. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of model training, comprising:
Detecting triggering operation of a user on an interface of a preset application;
Judging whether the interface of the preset application is a preset interface or not in response to the triggering operation of the user on the interface of the preset application;
If the interface of the preset application is the preset interface, detecting the input operation of a preset input box in the preset application of the terminal equipment by a user;
responding to the input operation, and acquiring input information;
Encrypting the input information in a differential privacy and noise adding mode to obtain the encrypted input information;
The encrypted input information is sent to a server, the server trains an entity recognition model based on the input information marked with text types to obtain the trained entity recognition model, the entity recognition model is used for recognizing the text types of text information, the text information is obtained by responding to text copying operation or text triggering operation of text recognition icons of a user, services corresponding to the text types are target services provided for the user by the terminal equipment based on the text information, and the target services are displayed in a capsule or service card mode;
the preset application comprises at least one of the following applications:
A taxi taking application, a map application, a shopping application, a telephone application, a conference application, an information application, a search application, a news application, a video application, an express application, a ticket purchasing application or a ticket checking application;
The text type includes at least one of:
website, flight number, mobile phone number, seat number, express bill number, mailbox, scenic spot, restaurant, hospital, office building, shop or bus station;
The method further comprises the steps of:
Sending a first request message to the server, wherein the first request message is used for requesting a trained recognition model;
Receiving the trained recognition model from the server;
detecting a copying operation of a user on a first text;
Responding to the copy operation of the first text by the user, and inputting the first text into the trained recognition model;
determining the text type of the first text according to the output information of the trained recognition model;
If the text type of the first text is an address, judging whether error information exists in the first text;
if error information exists, correcting the first text to obtain corrected first text;
outputting output information of the trained recognition model, wherein the output information comprises the text type of the first text and the first text subjected to error correction;
the trained recognition model comprises a standard address rule analysis model and a non-standard address analysis model, and the step of inputting the first text into the trained recognition model comprises the following steps:
inputting the first text into the standard address rule analysis model;
If the first text does not accord with the rule of the standard address rule analysis model, inputting the first text into the non-standard address analysis model;
the determining the text type of the first text according to the output information of the trained recognition model comprises the following steps:
Determining the text type of the first text according to the output information of the non-standard address resolution model;
If the first text accords with the rule of the standard address rule analysis model, determining the text type of the first text according to the output information of the trained recognition model comprises the following steps:
And determining the text type of the first text according to the output information of the standard address rule analysis model.
2. The method according to claim 1, wherein the method further comprises:
And determining a target service of the first text according to the text type of the first text.
3. The method according to claim 1, wherein the method further comprises:
acquiring a second text in the first picture, wherein the second text is acquired in response to a triggering operation of a user on a character recognition icon;
Inputting the second text into the trained recognition model;
determining the text type of the second text according to the output information of the trained recognition model;
and determining a target service of the second text according to the text type of the second text.
4. A method according to any one of claims 1 to 3, wherein said entering said first text into said trained recognition model comprises:
If the text type of the first text is a website or a mailbox, judging whether error information exists in the first text;
if error information exists, correcting the first text to obtain corrected first text;
And outputting output information of the trained recognition model, wherein the output information comprises the text type of the first text and the first text subjected to error correction.
5. A method according to any one of claims 1 to 3, further comprising:
and updating a dictionary by taking the input information as a training sample to obtain an updated dictionary, wherein the dictionary comprises information input by a user in a history way.
6. The method of claim 5, wherein the method further comprises:
And under a preset condition, sending the updated dictionary to a server, wherein the preset condition is used for indicating that the terminal equipment is idle.
7. The method of claim 6, wherein the updated dictionary is a first dictionary;
the sending the updated dictionary to a server includes:
Encrypting the first dictionary to obtain the encrypted first dictionary;
and sending the encrypted first dictionary to the server.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
Sending a second request message to the server, wherein the second request message is used for requesting the updated dictionary;
The updated dictionary is received from a server, the updated dictionary being determined based on the dictionary of the plurality of terminal devices.
9. The method of claim 8, wherein the method further comprises:
Detecting a copying operation of a user on the third text;
responding to the copying operation of the user on the third text, and judging whether the updated dictionary comprises the third text or not;
And if the updated dictionary comprises the third text, determining a target service of the third text.
10. The method of claim 8, wherein the method further comprises:
Acquiring a fourth text in the second picture, wherein the fourth text is acquired in response to a triggering operation of a user on a character recognition icon;
judging whether the updated dictionary comprises the fourth text;
and if the updated dictionary comprises the fourth text, determining a target service of the fourth text.
11. A terminal device comprising a processor coupled to a memory for storing a computer program which, when invoked by the processor, causes the terminal device to perform the method of any one of claims 1 to 10.
12. A computer readable storage medium storing a computer program comprising instructions for implementing the method of any one of claims 1 to 10.
13. A computer program product comprising computer program code embodied therein, which when run on a computer causes the computer to carry out the method according to any one of claims 1 to 10.
CN202210978950.6A 2022-08-16 2022-08-16 Model training method and terminal equipment Active CN116092098B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210978950.6A CN116092098B (en) 2022-08-16 2022-08-16 Model training method and terminal equipment

Publications (2)

Publication Number Publication Date
CN116092098A CN116092098A (en) 2023-05-09
CN116092098B true CN116092098B (en) 2025-05-06

Family

ID=86205159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210978950.6A Active CN116092098B (en) 2022-08-16 2022-08-16 Model training method and terminal equipment

Country Status (1)

Country Link
CN (1) CN116092098B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909145A (en) * 2019-11-29 2020-03-24 支付宝(杭州)信息技术有限公司 Training method and device for multi-task model
CN112115342A (en) * 2020-09-22 2020-12-22 深圳市欢太科技有限公司 Search method, search device, storage medium and terminal
CN112989035A (en) * 2020-12-22 2021-06-18 平安普惠企业管理有限公司 Method, device and storage medium for recognizing user intention based on text classification
CN114330483A (en) * 2021-11-11 2022-04-12 腾讯科技(深圳)有限公司 Data processing method and model training method, device, equipment, storage medium

Similar Documents

Publication Publication Date Title
JP5871976B2 (en) Mobile imaging device as navigator
US20240086476A1 (en) Information recommendation method and related device
CN109556621B (en) Route planning method and related equipment
US10499207B2 (en) Service providing system including display device and mobile device, and method for providing service using the same
KR101722687B1 (en) Method for providing information between objects or object and user, user device, and storage medium thereof
EP3537312B1 (en) Geocoding personal information
CN103970825B (en) Method and electronic device for providing information in an information providing system
US20080104649A1 (en) Automatic association of reference data with primary process data based on time and shared identifier
CN103916473B (en) Travel information processing method and relevant apparatus
US11274932B2 (en) Navigation method, navigation device, and storage medium
US20250240610A1 (en) Mobile information terminal, information presentation system and information presentation method
CN110929176B (en) Information recommendation method, device and electronic equipment
CN109218982A (en) Scenic spot information acquisition method and device, mobile terminal and storage medium
CN110457571B (en) Method, device and equipment for acquiring interest point information and storage medium
CN108390998A (en) A kind of method and mobile terminal for sharing file
US20110225151A1 (en) Methods, devices, and computer program products for classifying digital media files based on associated geographical identification metadata
CN112269939B (en) Automatic driving scene searching method, device, terminal, server and medium
CN116092098B (en) Model training method and terminal equipment
CN108241678B (en) Method and device for mining point of interest data
JP6066824B2 (en) Post information display system, server, terminal device, post information display method and program
CN112417323B (en) Arrival behavior detection method, device and computer equipment based on point of interest information
KR102328015B1 (en) Electronic device, server and method for providing traffic information
KR102366773B1 (en) Electronic business card exchanging system using mobile terminal and method thereof
KR102188008B1 (en) Electronic device, method and system for providing contents relate to traffic
KR101854665B1 (en) Electronic device, server, method and system for providing user contents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Applicant after: Honor Terminal Co.,Ltd.

Address before: 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong

Applicant before: Honor Device Co.,Ltd.

Country or region before: China

GR01 Patent grant