CN114172856B - Message automatic replying method, device, equipment and storage medium - Google Patents
- Publication number
- CN114172856B CN114172856B CN202111446619.1A CN202111446619A CN114172856B CN 114172856 B CN114172856 B CN 114172856B CN 202111446619 A CN202111446619 A CN 202111446619A CN 114172856 B CN114172856 B CN 114172856B
- Authority
- CN
- China
- Prior art keywords
- client
- reply
- recording
- message
- sound
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/02—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/04—Real-time or near real-time messaging, e.g. instant messaging [IM]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The invention relates to artificial intelligence technology and discloses an automatic message replying method comprising the following steps: monitoring the duration for which a client has not replied after receiving an external message; when the duration exceeds a preset duration threshold, acquiring the movement speed and the environmental volume of the device where the client is located; querying a preset scene parameter table for the device scene corresponding to the movement speed and the environmental volume; counting the client's historical average reply duration in that device scene, and generating a reply corpus from the historical average reply duration and the device scene; and replying to the external message with the reply corpus. In addition, the invention relates to blockchain technology: the scene parameter table can be stored in nodes of a blockchain. The invention also provides an automatic message replying device, an electronic device, and a storage medium. The invention can improve the display of the user's state information when a message is replied to automatically.
Description
Technical Field
The present invention relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for automatically replying to a message, an electronic device, and a computer readable storage medium.
Background
In modern life, IM (Instant Messaging) is applied in many scenarios, for example in WeChat, QQ, and various other software. However, after user A sends a message to user B, user A can only see that the message is unread, or that it has been read but not yet replied to. In current instant messaging, if user A sends a message and user B never replies, user A, knowing nothing of user B's situation, may fall into an anxious waiting state, which hinders communication. Therefore, how to better display the other user's state information during instant messaging has become a problem to be solved urgently.
Disclosure of Invention
The invention provides an automatic message reply method, an automatic message reply device, and a computer readable storage medium, whose main aim is to solve the problem of poor display of user state information during automatic message reply.
In order to achieve the above object, the present invention provides a message automatic reply method, including:
monitoring the duration of non-replying after the client receives the external message;
when the time length is greater than a preset time length threshold value, acquiring the movement speed of equipment where the client is located and acquiring the environmental volume of the equipment where the client is located;
querying a preset scene parameter table for the device scene corresponding to the motion speed and the environmental volume;
counting historical average reply time length of the client under the equipment scene, and generating reply corpus according to the historical average reply time length and the equipment scene;
and replying the external message by using the reply corpus.
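The five steps above can be sketched as a minimal pipeline. Note that the scene table entries, the threshold value, and all function names below are illustrative assumptions for the sketch, not values disclosed by the patent:

```python
# Hypothetical scene parameter table: (max_speed_m_s, max_volume_db, scene).
# Real entries would come from the preset table stored e.g. in a blockchain node.
SCENE_TABLE = [
    (2.0, 40.0, "meeting"),
    (2.0, 90.0, "walking"),
    (50.0, 90.0, "driving"),
]

REPLY_THRESHOLD_S = 60  # preset duration threshold (illustrative)

def lookup_scene(speed, volume):
    """Return the first scene whose speed/volume bounds cover the readings."""
    for max_speed, max_volume, scene in SCENE_TABLE:
        if speed <= max_speed and volume <= max_volume:
            return scene
    return "unknown"

def auto_reply(unreplied_s, speed, volume, history_avg_s):
    """Steps S1-S5: build a reply corpus once the no-reply duration passes the threshold."""
    if unreplied_s <= REPLY_THRESHOLD_S:
        return None  # the user may still reply in person
    scene = lookup_scene(speed, volume)
    return {"scene": scene, "expected_reply_s": history_avg_s}

print(auto_reply(120, 0.5, 35.0, 300))  # {'scene': 'meeting', 'expected_reply_s': 300}
```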
Optionally, the obtaining the motion speed of the device where the client is located and obtaining the environmental volume of the device where the client is located includes:
acquiring the instantaneous speeds of the equipment where the client is located at N time points in a preset time period, and determining the average value of the N instantaneous speeds as the movement speed of the equipment, wherein N is a positive integer;
acquiring the environmental record of the equipment in the preset time period, identifying the record volume of each record frame in the environmental record, and collecting the record volumes of all the record frames into the environmental volume.
Optionally, the querying the device scene corresponding to the motion speed and the environmental volume from a preset scene parameter table includes:
constructing an index of the scene parameter table;
and retrieving from the scene parameter table according to the index, the motion speed and the environment volume to obtain a device scene corresponding to the motion speed and the environment volume.
Optionally, the generating a reply corpus according to the historical average reply duration and the device scene includes:
judging whether the environment sound record contains the sound of the user of the client;
when the environment sound record contains the sound of the user of the client, separating a sound record segment containing the user of the client from the environment sound record;
identifying the user speech speed of the recording section, and calculating the expected reply duration according to the user speech speed and the historical average reply duration;
collecting the expected reply time length and the equipment scene into reply corpus;
and when the environment sound record does not contain the sound of the user of the client, collecting the historical average reply time length and the equipment scene into a reply corpus.
Optionally, the determining whether the sound of the user of the client appears in the environmental record includes:
windowing each recording frame in the environment recording, and selecting one of the recording frames from the environment recording one by one;
mapping the selected recording frame into a voice time domain graph, and counting the peak value, amplitude value and zero crossing rate of the voice time domain graph;
calculating the frame energy of the target frame according to the peak value and the amplitude value, and collecting the frame energy and the zero crossing rate into the time domain characteristics of the selected recording frame;
calculating a distance value of the time domain feature and a preset standard feature corresponding to a user of the client;
if the distance values between all recording frames in the environmental recording and the standard feature are greater than or equal to the preset distance threshold, determining that the environmental recording does not contain the sound of the user of the client;
and if the distance value between any recording frame and the standard feature in the environment recording is smaller than a preset distance threshold, determining that the environment recording contains the sound of the user of the client.
Optionally, the separating the recording segment including the user of the client from the environmental recording includes:
determining the position information of all recording frames with the distance value smaller than a preset distance threshold value in the environment recording;
and intercepting the environment sound recording according to the position information to obtain a sound recording section containing the user of the client.
Optionally, the replying to the external message using the reply corpus includes:
performing bold formatting on the reply corpus to obtain bolded text;
and replying to the external message with the bolded text in a highlighted color.
In order to solve the above problems, the present invention also provides an automatic message reply device, which includes:
the time length detection module is used for monitoring the time length of non-replying after the client receives the external message;
the data acquisition module is used for acquiring the movement speed of the equipment where the client is located and acquiring the environmental volume of the equipment where the client is located when the time length is greater than a preset time length threshold;
the scene query module is used for querying a preset scene parameter table for the device scene corresponding to the motion speed and the environmental volume;
the corpus generation module is used for counting historical average reply time length of the client under the equipment scene and generating reply corpus according to the historical average reply time length and the equipment scene;
and the message reply module is used for replying the external message by using the reply corpus.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the message auto-reply method described above.
In order to solve the above-mentioned problems, the present invention also provides a computer-readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-mentioned message auto-reply method.
When the client receives the external message, the embodiment of the invention can judge the equipment scene of the client according to the movement speed and the environmental volume of the equipment of the client, and generate corresponding reply corpus according to the equipment scene and the historical average reply time length under the equipment scene to reply the external message, thereby realizing automatic reply by combining the current environmental state and the expected reply time of the user of the client, improving the information quantity of the reply content and improving the display effect of the user state during the automatic reply. Therefore, the message automatic reply method, the device, the electronic equipment and the computer readable storage medium can solve the problem that the display effect of the user state information is poor when the message automatic reply is carried out.
Drawings
FIG. 1 is a flow chart illustrating a message auto-reply method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the generation of a reply corpus according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating determining whether the environmental recording contains the sound of the user of the client according to an embodiment of the present invention;
FIG. 4 is a functional block diagram of an automatic message reply device according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an electronic device for implementing the message auto-reply method according to an embodiment of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides an automatic message replying method. The execution body of the method includes, but is not limited to, a server, a terminal, or another device that can be configured to execute the method provided by the embodiment of the application. In other words, the method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server side includes, but is not limited to: a single server, a server cluster, a cloud server, a cloud server cluster, and the like. The server may be an independent server, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data, and artificial intelligence platforms.
Referring to fig. 1, a flow chart of a message auto-reply method according to an embodiment of the invention is shown. In this embodiment, the message automatic reply method includes:
s1, monitoring the duration of non-replying after the client receives the external message.
In the embodiment of the present invention, the client may be any software or hardware device having a message interaction function.
For example, the client includes hardware devices with a direct message interaction function, such as a mobile phone, a personal computer, or a tablet computer; the client also includes software (such as WeChat, QQ, and the like) installed on such hardware devices.
In one practical application scenario of the present invention, the user of the client may not be able to reply to every external message in time, for example when the user is driving a car or attending a meeting. Therefore, to enable the sender of the external message to know the user's state in time, the client may monitor the duration of non-reply after receiving the external message.
In the embodiment of the invention, the duration for which the client has not replied after receiving the external message can be detected using a data tracking point (embedded instrumentation) in the client. The data tracking point is a tool generated in the client in advance; monitoring the no-reply duration through the tracking point improves the real-time accuracy of the detected duration.
In other embodiments of the present invention, the Kafka message middleware may also be used to monitor the duration for which the client has not replied after receiving the external message.
In detail, the Kafka message middleware can capture data in real time, so the timeliness of the monitored no-reply duration can be improved, which in turn improves the accuracy of automatically replying to the external message.
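As a minimal sketch of the duration monitoring (independent of any particular tracking point or Kafka deployment; the class and method names are illustrative assumptions), the no-reply duration can be tracked with per-message timestamps:

```python
import time

class ReplyMonitor:
    """Track how long each received external message has gone without a reply."""
    def __init__(self):
        self.received_at = {}  # message_id -> arrival timestamp (seconds)

    def on_message(self, message_id, now=None):
        """Record the arrival time of an external message."""
        self.received_at[message_id] = now if now is not None else time.time()

    def on_reply(self, message_id):
        """Clear the timer once the user replies."""
        self.received_at.pop(message_id, None)

    def unreplied_duration(self, message_id, now=None):
        """Seconds since arrival, or 0.0 if already replied / unknown."""
        start = self.received_at.get(message_id)
        if start is None:
            return 0.0
        return (now if now is not None else time.time()) - start

m = ReplyMonitor()
m.on_message("msg-1", now=100.0)
print(m.unreplied_duration("msg-1", now=175.0))  # 75.0
```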
S2, when the time length is greater than a preset time length threshold, acquiring the movement speed of the equipment where the client is located, and acquiring the environmental volume of the equipment where the client is located.
In the embodiment of the invention, when the duration is greater than the preset duration threshold, the user of the client has not replied in time to the external message received by the client. Parameters such as the movement speed and the environmental volume of the device where the client is located can therefore be acquired, so that the external message can be automatically replied to according to the acquired parameters.
In the embodiment of the present invention, the obtaining the motion speed of the device where the client is located and obtaining the environmental volume of the device where the client is located includes:
acquiring the instantaneous speeds of the equipment where the client is located at N time points in a preset time period, and determining the average value of the N instantaneous speeds as the movement speed of the equipment, wherein N is a positive integer;
acquiring the environmental record of the equipment in the preset time period, identifying the record volume of each record frame in the environmental record, and collecting the record volumes of all the record frames into the environmental volume.
In detail, the speed sensor of the device where the client is located may be pre-installed to obtain the instantaneous speeds of the device where the client is located at N time points within a preset time period, so as to calculate an average value of the instantaneous speeds at each time point, and the calculated average value is used as the movement speed of the device.
And similarly, the sound sensor of the equipment where the client is located can be pre-installed to acquire the environmental record of the equipment where the client is located in a preset time period, and the record volume of each record frame in the environmental record is identified through instruments with volume identification functions such as a preset decibel instrument, so that the record volumes of all record frames are collected together to obtain the environmental volume.
For example, suppose the device where the client is located produces an environmental recording frame A, an environmental recording frame B, and an environmental recording frame C within the preset time period, where the recording volume of environmental recording frame A is 20 dB, the recording volume of environmental recording frame B is 30 dB, and the recording volume of environmental recording frame C is 40 dB. The recording volumes of the three frames may then be collected into the following environmental volume set: {20 dB, 30 dB, 40 dB}.
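The speed averaging and per-frame volume measurement described above can be sketched as follows. The RMS-to-decibel conversion and the reference level are illustrative assumptions; the patent only specifies that each frame's volume is identified and collected:

```python
import math

def movement_speed(instant_speeds):
    """Average of N instantaneous speed samples (step S2, speed sensor)."""
    return sum(instant_speeds) / len(instant_speeds)

def frame_volume_db(samples, ref=1.0):
    """Level of one recording frame in decibels, from the RMS amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref)

def environmental_volume(frames):
    """Collect the per-frame volumes into the environmental-volume set."""
    return [frame_volume_db(f) for f in frames]

print(movement_speed([1.0, 2.0, 3.0]))  # 2.0
```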
and S3, inquiring from a preset scene parameter table to obtain the motion speed and the equipment scene corresponding to the environment volume.
In the embodiment of the present invention, the scene parameter table is a table that is generated in advance and is used for parameter marking of multiple device scenes of the device where the client is located, where the scene parameter table includes multiple scenes where the device where the client is located may be located, and scene parameters (such as movement speeds and environmental volumes corresponding to different scenes) corresponding to each scene.
In detail, the preset scene parameter table may be pre-generated and stored in a preset storage area, and when the equipment scene needs to be queried, the scene parameter table may be called and queried, where the storage area includes, but is not limited to, a database, a blockchain node, and a network cache.
Specifically, the equipment scenes comprise scenes of driving, meeting, sports and the like.
In the embodiment of the present invention, the device scene corresponding to the motion speed and the environmental volume is obtained by querying from a preset scene parameter table, including:
constructing an index of the scene parameter table;
and retrieving from the scene parameter table according to the index, the motion speed and the environment volume to obtain a device scene corresponding to the motion speed and the environment volume.
In detail, an index of the scene parameter table may be constructed using the CREATE INDEX statement in SQL; the index is used for retrieval of device scenes in the scene parameter table.
Illustratively, the index of the scene parameter table may be constructed with the following CREATE INDEX statement:
CREATE INDEX index-name
ON table-name (column-name);
wherein index-name is the name of the created index, table-name is the table name of the scene parameter table, and column-name is the name of the data column in the scene parameter table on which the index is to be created.
In the embodiment of the invention, the device scene corresponding to the motion speed and the environmental volume can be obtained by searching the index, the motion speed and the environmental volume in the scene parameter table, and the searched device scene can be determined to be the device scene corresponding to the motion speed and the environmental volume.
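As an assumed sketch of the table and its retrieval (the scene names, speed/volume ranges, table name, and column names are invented for illustration), an in-memory SQLite table with a CREATE INDEX can serve as the scene parameter table:

```python
import sqlite3

# Illustrative scene parameter table; ranges are [min, max) in m/s and dB.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE scene_params (
    scene TEXT, min_speed REAL, max_speed REAL, min_volume REAL, max_volume REAL)""")
conn.execute("CREATE INDEX idx_speed ON scene_params (min_speed, max_speed)")
conn.executemany(
    "INSERT INTO scene_params VALUES (?, ?, ?, ?, ?)",
    [("meeting", 0.0, 2.0, 0.0, 40.0),
     ("exercising", 2.0, 8.0, 0.0, 90.0),
     ("driving", 8.0, 60.0, 0.0, 90.0)])

def query_scene(speed, volume):
    """Retrieve the device scene matching the motion speed and environmental volume."""
    row = conn.execute(
        """SELECT scene FROM scene_params
           WHERE ? >= min_speed AND ? < max_speed
             AND ? >= min_volume AND ? < max_volume""",
        (speed, speed, volume, volume)).fetchone()
    return row[0] if row else None

print(query_scene(10.0, 55.0))  # driving
```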
S4, counting historical average reply time length of the client under the equipment scene, and generating reply corpus according to the historical average reply time length and the equipment scene.
In the embodiment of the invention, the historical average reply time of the client under the equipment scene can be counted, so that the information of the expected reply time can be added when the automatic reply is carried out later, and the sender of the external message can know the state of the user of the client more clearly.
In detail, the historical average reply duration is an average value of reply durations of external messages when the client is in the equipment scene in the historical time.
In one practical application scenario, because scene parameters in different equipment scenarios may have smaller differences, if only the equipment scenario inquired from the scene parameter table is automatically replied to the external message, the accuracy of the automatic reply is lower, and therefore, the embodiment of the invention can generate corresponding reply corpus according to the inquired equipment scenario so as to improve the accuracy of the automatic reply.
In the embodiment of the present invention, referring to fig. 2, the generating a reply corpus according to the historical average reply duration and the device scene includes:
s21, judging whether the environment sound record contains the sound of the user of the client;
when the environment sound recording contains the sound of the user of the client, S22 is executed, and a sound recording section containing the user of the client is separated from the environment sound recording;
s23, recognizing the user speech speed of the recording section, and calculating the expected reply duration according to the user speech speed and the historical average reply duration;
s24, collecting the expected reply time length and the equipment scene into reply corpus;
and when the environment sound record does not contain the sound of the user of the client, executing S25, and collecting the historical average reply time length and the equipment scene into a reply corpus.
In detail, spectrum analysis can be performed on the environmental recording, and whether the sound of the user of the client appears in the environmental recording can then be judged according to the analysis result.
Specifically, referring to fig. 3, the determining whether the sound of the user of the client appears in the environmental record includes:
s31, windowing each recording frame in the environment recording, and selecting one of the recording frames from the environment recording one by one;
s32, mapping the selected recording frame into a voice time domain graph, and counting the peak value, amplitude value and zero crossing rate of the voice time domain graph;
s33, calculating the frame energy of the target frame according to the peak value and the amplitude value, and collecting the frame energy and the zero crossing rate into the time domain characteristics of the selected recording frame;
s34, calculating a distance value between the time domain feature and a preset standard feature corresponding to a user of the client;
s35, judging whether the distance value between the recording frame and the standard feature in the environment recording is smaller than a preset distance threshold value;
if the distance values between all recording frames in the environmental recording and the standard feature are greater than or equal to the preset distance threshold, S36 is executed, determining that the environmental recording does not contain the sound of the user of the client;
if the distance value between any recording frame and the standard feature in the environment recording is smaller than the preset distance threshold, S37 is executed, and the sound of the user of the client side is determined to be contained in the environment recording.
In the embodiment of the invention, the environmental recording can be framed and windowed using a Hamming window to obtain a plurality of recording frames, so that the local stationarity of the signal can be exploited, improving the accuracy of the speech analysis.
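The framing and Hamming-windowing step can be sketched as follows; the frame length and hop size are illustrative choices, not values from the patent:

```python
import math

def hamming(n, frame_len):
    """Hamming window coefficient at sample n of a frame of frame_len samples."""
    return 0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1))

def frame_and_window(signal, frame_len, hop):
    """Split the recording into overlapping frames and apply a Hamming window to each."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frames.append([s * hamming(n, frame_len) for n, s in enumerate(frame)])
    return frames

frames = frame_and_window([1.0] * 400, frame_len=160, hop=80)
print(len(frames))             # 4 recording frames
print(round(frames[0][0], 2))  # 0.08: the window tapers the frame edges
```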
In detail, the pcolormesh function (a preset first function) in the matplotlib.pyplot package can be used to map the recording frame into a voice time-domain graph, and the peak value, amplitude value, and zero-crossing rate of the voice time-domain graph are obtained through mathematical statistics, so that the frame energy can be calculated from the amplitude.
Illustratively, the frame energy may be calculated using the following energy algorithm:
wherein, energ y The frame energy of the (y) th recording frame is N, the total duration of the (y) th recording frame is x n And the amplitude of the y-th recording frame at the moment n is obtained.
Specifically, an algorithm with a distance value calculation function, such as a euclidean distance algorithm, a cosine distance algorithm, or the like, may be used to calculate a distance value between the time domain feature and a preset standard feature corresponding to the user of the client.
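A minimal sketch of the time-domain features and the distance comparison described above; the (energy, zero-crossing-rate) feature layout and the threshold are illustrative assumptions:

```python
import math

def frame_energy(frame):
    """Frame energy: sum of squared amplitudes over the frame."""
    return sum(x * x for x in frame)

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def feature_distance(feat, standard):
    """Euclidean distance between a frame's feature and the user's standard feature."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat, standard)))

frame = [0.5, -0.5, 0.5, -0.5, 0.5]
feat = (frame_energy(frame), zero_crossing_rate(frame))
print(feat)                                       # (1.25, 1.0)
print(feature_distance(feat, (1.25, 1.0)) < 0.1)  # True: matches the standard feature
```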
Further, the separating the recording segment including the user of the client from the environmental recording includes:
determining the position information of all recording frames with the distance value smaller than a preset distance threshold value in the environment recording;
and intercepting the environment sound recording according to the position information to obtain a sound recording section containing the user of the client.
In detail, when the distance value between a recording frame and the standard feature is smaller than the preset distance threshold, that frame is determined to contain the sound of the user of the client. The environmental recording can therefore be intercepted according to the position information of all recording frames whose distance value is smaller than the preset distance threshold, obtaining a recording segment containing the user of the client.
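The interception by position information might look like the following sketch (the helper names and the representation of frames as opaque items are hypothetical):

```python
def matching_positions(distances, threshold):
    """Indices of frames whose distance to the standard feature is below the threshold."""
    return [i for i, d in enumerate(distances) if d < threshold]

def extract_segments(frames, positions):
    """Cut the recording into contiguous segments that contain the user's voice."""
    segments, current = [], []
    for i, frame in enumerate(frames):
        if i in positions:
            current.append(frame)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

dists = [0.9, 0.1, 0.2, 0.8, 0.1]
pos = matching_positions(dists, threshold=0.5)
print(pos)  # [1, 2, 4]
print(len(extract_segments(["f0", "f1", "f2", "f3", "f4"], set(pos))))  # 2 segments
```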
In embodiments of the present invention, the user speech rate of the recording segment may be identified by ASR (Automatic Speech Recognition) technology.
In detail, the calculating the expected reply duration according to the user speech speed and the historical average reply duration includes:
calculating the expected reply duration according to the user speech speed and the historical average reply duration by using the following weight algorithm:
Y=α*A
wherein Y is the expected reply time length, A is the historical average reply time length, and alpha is the reciprocal of the user speech speed.
Further, the expected reply time length and the equipment scene are gathered together, and the gathered data is determined to be a reply corpus.
In the embodiment of the invention, the expected reply time length and the equipment scene can be collected into a data set in the form of key-value pairs. For example, with the device scene as the primary key and the expected reply time length as the value, key-value pairs of the form 'device scene-expected reply time length' are generated, and all key-value pairs are collected into a data set to obtain the reply corpus.
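A minimal sketch of this key-value collection, using a Python dict with the device scene as the primary key; the scene names and durations are invented examples:

```python
def build_reply_corpus(entries):
    """Collect (device_scene, expected_reply_duration) pairs into a
    key-value corpus, with the device scene as the primary key."""
    return {scene: duration for scene, duration in entries}

corpus = build_reply_corpus([("driving", 300), ("meeting", 1800)])
print(corpus["meeting"])  # 1800
```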
S5, replying the external message by using the replying corpus.
In the embodiment of the present invention, the replying the external message by using the reply corpus includes:
performing text bolding processing on the reply corpus to obtain bolded text;
and replying to the external message with the bolded text in a highlighted color.
In detail, bolding and highlighting the text makes the reply to the external message clearer, thereby improving the automatic reply effect.
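As a purely illustrative sketch, the emphasized presentation described above (bold text plus a highlight color) could be produced with HTML-style markup; the patent does not specify any particular markup format, so the tags and color below are assumptions:

```python
def format_reply(reply_text):
    """Wrap the reply text in bold and highlight markup (HTML here,
    chosen only for illustration; a client could use its own
    rich-text format instead)."""
    return '<b><span style="background-color: yellow">{}</span></b>'.format(
        reply_text
    )

print(format_reply("In a meeting; expected to reply within 30 minutes."))
```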
In the embodiment of the invention, the generated reply corpus can be used to reply to the external message. Because the reply corpus includes the device scene of the client and the duration within which the user of the client is expected to reply, a better reply to the external message can be achieved.
When the client receives an external message, the embodiment of the invention can judge the device scene of the client according to the movement speed and environmental volume of the device where the client is located, and generate a corresponding reply corpus according to the device scene and the historical average reply time length in that scene to reply to the external message. Automatic reply is thus realized by combining the current environmental state with the expected reply time of the user of the client, which increases the information content of the reply and improves the display of the user's state during automatic reply. Therefore, the automatic message reply method provided by the invention can solve the problem of poor display of user state information when messages are automatically replied to.
Fig. 4 is a functional block diagram of an automatic message reply device according to an embodiment of the present invention.
The message automatic reply device 100 of the present invention may be installed in an electronic apparatus. Depending on the implemented functions, the message automatic reply device 100 may include a duration detection module 101, a data acquisition module 102, a scene query module 103, a corpus generation module 104, and a message reply module 105. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments stored in the memory of the electronic device that can be executed by the processor of the electronic device and perform a fixed function.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the duration detection module 101 is configured to monitor a duration of a client that receives an external message and does not reply;
the data obtaining module 102 is configured to, when the duration is greater than a preset duration threshold, obtain the movement speed of the device where the client is located and obtain the environmental volume of the device where the client is located;
the scene query module 103 is configured to query from a preset scene parameter table to obtain the motion speed and a device scene corresponding to the environmental volume;
the corpus generation module 104 is configured to count a historical average reply time length of the client in the device scene, and generate a reply corpus according to the historical average reply time length and the device scene;
the message reply module 105 is configured to reply to the external message by using the reply corpus.
In detail, each module in the message automatic reply device 100 in the embodiment of the present invention adopts the same technical means as the message automatic reply method described with reference to fig. 1 to 3, and can produce the same technical effects, which are not repeated herein.
Fig. 5 is a schematic structural diagram of an electronic device for implementing an automatic message reply method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11, a communication bus 12 and a communication interface 13, and may further comprise a computer program, such as a message auto-reply program, stored in the memory 11 and executable on the processor 10.
The processor 10 may, in some embodiments, be formed by an integrated circuit, for example a single packaged integrated circuit, or by a plurality of integrated circuits packaged with the same or different functions, including one or more central processing units (Central Processing Unit, CPU), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and so on. The processor 10 is the control unit (Control Unit) of the electronic device; it connects the various components of the entire electronic device using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or executing programs or modules stored in the memory 11 (e.g., executing the message auto-reply program) and calling data stored in the memory 11.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device, such as a hard disk of the electronic device. The memory 11 may in other embodiments be an external storage device of the electronic device, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device. The memory 11 may be used not only to store application software installed in the electronic device and various data, such as the code of the message automatic reply program, but also to temporarily store data that has been or will be output.
The communication bus 12 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable communication between the memory 11, the at least one processor 10, and other components.
The communication interface 13 is used for communication between the electronic device and other devices, and includes a network interface and a user interface. Optionally, the network interface may include a wired interface and/or a wireless interface (e.g., a WI-FI interface, a Bluetooth interface, etc.), typically used to establish a communication connection between the electronic device and other electronic devices. The user interface may be a display (Display) or an input unit such as a keyboard (Keyboard); optionally, it may also be a standard wired or wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be referred to as a display screen or display unit, and is used for displaying information processed in the electronic device and for displaying a visual user interface.
Fig. 5 shows only an electronic device with components, it being understood by a person skilled in the art that the structure shown in fig. 5 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or may combine certain components, or may be arranged in different components.
For example, although not shown, the electronic device may further include a power source (such as a battery) for supplying power to the respective components, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device may further include various sensors, bluetooth modules, wi-Fi modules, etc., which are not described herein.
It should be understood that the described embodiments are for illustrative purposes only, and the scope of the patent application is not limited to this configuration.
The message auto-reply program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
monitoring the duration of non-replying after the client receives the external message;
when the time length is greater than a preset time length threshold value, acquiring the movement speed of equipment where the client is located and acquiring the environmental volume of the equipment where the client is located;
inquiring from a preset scene parameter table to obtain the motion speed and the equipment scene corresponding to the environment volume;
counting historical average reply time length of the client under the equipment scene, and generating reply corpus according to the historical average reply time length and the equipment scene;
and replying the external message by using the reply corpus.
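The five instructions above can be sketched end to end as follows. The scene parameter table, the duration threshold, and the reply template are all invented assumptions added for illustration; the patent does not specify their concrete values:

```python
# Hypothetical end-to-end sketch of the auto-reply pipeline. All
# constants and the table layout are illustrative assumptions.

DURATION_THRESHOLD = 120  # seconds without a reply before auto-reply kicks in

SCENE_PARAMETER_TABLE = [
    # (min_speed, max_speed, min_volume, max_volume, device_scene)
    (5.0, float("inf"), 0.0, float("inf"), "driving"),
    (0.0, 5.0, 0.0, 30.0, "meeting"),
    (0.0, 5.0, 30.0, float("inf"), "busy environment"),
]

def query_device_scene(speed, volume):
    """Look up the device scene matching the motion speed and volume."""
    for lo_s, hi_s, lo_v, hi_v, scene in SCENE_PARAMETER_TABLE:
        if lo_s <= speed < hi_s and lo_v <= volume < hi_v:
            return scene
    return "unknown"

def auto_reply(unreplied_seconds, speed, volume, history_avg_by_scene):
    """Return an auto-reply string, or None if no reply is needed yet."""
    if unreplied_seconds <= DURATION_THRESHOLD:
        return None
    scene = query_device_scene(speed, volume)
    avg = history_avg_by_scene.get(scene, 600)
    return "User is {}; expected to reply in about {} seconds.".format(scene, avg)

print(auto_reply(300, 1.0, 10.0, {"meeting": 1800}))
```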
In particular, the specific implementation method of the above instructions by the processor 10 may refer to the description of the relevant steps in the corresponding embodiment of the drawings, which is not repeated herein.
Further, the modules/units integrated in the electronic device 1 may be stored in a computer readable storage medium if implemented in the form of software functional units and sold or used as separate products. The computer readable storage medium may be volatile or nonvolatile. For example, the computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, can implement:
monitoring the duration of non-replying after the client receives the external message;
when the time length is greater than a preset time length threshold value, acquiring the movement speed of equipment where the client is located and acquiring the environmental volume of the equipment where the client is located;
inquiring from a preset scene parameter table to obtain the motion speed and the equipment scene corresponding to the environment volume;
counting historical average reply time length of the client under the equipment scene, and generating reply corpus according to the historical average reply time length and the equipment scene;
and replying the external message by using the reply corpus.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital-computer-controlled machine to simulate, extend and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by a single unit or means through software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (8)
1. A method for automatically replying to a message, the method comprising:
monitoring the duration of non-replying after the client receives the external message by using the data embedded point preinstalled with the client;
when the time length is greater than a preset time length threshold, acquiring the instantaneous speeds of the client at N time points within a preset time period, determining the average value of the N instantaneous speeds as the movement speed of the equipment, N being a positive integer, acquiring the environmental recording of the client within the preset time period, identifying the recording volume of each recording frame in the environmental recording, and collecting the recording volumes of all recording frames as the environmental volume of the equipment;
inquiring from a preset scene parameter table to obtain the motion speed and the equipment scene corresponding to the environment volume;
counting historical average reply time length of the client under the equipment scene, judging whether the environment sound recording contains the sound of the user of the client, separating a sound recording section containing the user of the client from the environment sound recording when the environment sound recording contains the sound of the user of the client, calculating expected reply time length according to the user speech speed and the historical average reply time length, collecting the expected reply time length and the equipment scene into reply corpus, and collecting the historical average reply time length and the equipment scene into reply corpus when the environment sound recording does not contain the sound of the user of the client;
and replying the external message by using the reply corpus.
2. The method for automatically replying to a message according to claim 1, wherein the device scene corresponding to the motion speed and the environmental volume is obtained by querying from a preset scene parameter table, comprising:
constructing an index of the scene parameter table;
and retrieving from the scene parameter table according to the index, the motion speed and the environment volume to obtain a device scene corresponding to the motion speed and the environment volume.
3. The message auto-reply method of claim 1, wherein the determining whether the sound of the user of the client appears in the environmental sound recording comprises:
windowing each recording frame in the environment recording, and selecting one of the recording frames from the environment recording one by one;
mapping the selected recording frame into a voice time domain graph, and counting the peak value, amplitude value and zero crossing rate of the voice time domain graph;
calculating the frame energy of the selected recording frame according to the peak value and the amplitude value, and collecting the frame energy and the zero crossing rate into the time domain characteristics of the selected recording frame;
calculating a distance value of the time domain feature and a preset standard feature corresponding to a user of the client;
if the distance value between each recording frame in the environment recording and the standard feature is not smaller than a preset distance threshold, determining that the environment recording does not contain the sound of the user of the client;
and if the distance value between any recording frame and the standard feature in the environment recording is smaller than a preset distance threshold, determining that the environment recording contains the sound of the user of the client.
4. The message auto-reply method of claim 1 wherein separating from the environmental sound recordings a sound recording segment containing a user of the client comprises:
determining the position information of all recording frames with the distance value smaller than a preset distance threshold value in the environment recording;
and intercepting the environment sound recording according to the position information to obtain a sound recording section containing the user of the client.
5. The method for automatically replying to a message according to claim 1, wherein replying to the external message using the reply corpus comprises:
performing text bolding processing on the reply corpus to obtain bolded text;
and replying to the external message with the bolded text in a highlighted color.
6. An automatic message reply device, the device comprising:
the time length detection module is used for monitoring the time length of non-replying after the client receives the external message by utilizing the data embedded point pre-installed with the client;
the data acquisition module is used for, when the time length is greater than a preset time length threshold, acquiring the instantaneous speeds of the client at N time points within a preset time period, determining the average value of the N instantaneous speeds as the movement speed of the equipment, N being a positive integer, acquiring the environmental recording of the client within the preset time period, identifying the recording volume of each recording frame in the environmental recording, and collecting the recording volumes of all recording frames as the environmental volume of the equipment;
the scene query module is used for querying and obtaining the motion speed and the equipment scene corresponding to the environmental volume from a preset scene parameter table;
the corpus generation module is used for counting historical average reply time length of the client under the equipment scene, judging whether the environment sound record contains the sound of the user of the client, separating a sound recording section containing the user of the client from the environment sound record when the environment sound record contains the sound of the user of the client, calculating expected reply time length according to the user speech speed and the historical average reply time length, collecting the expected reply time length and the equipment scene into reply corpus, and collecting the historical average reply time length and the equipment scene into reply corpus when the environment sound record does not contain the sound of the user of the client;
and the message reply module is used for replying the external message by using the reply corpus.
7. An electronic device, the electronic device comprising:
at least one processor; the method comprises the steps of,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the message auto-reply method according to any one of claims 1 to 5.
8. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the message auto-reply method according to any one of claims 1 to 5.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111446619.1A CN114172856B (en) | 2021-11-30 | 2021-11-30 | Message automatic replying method, device, equipment and storage medium |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202111446619.1A CN114172856B (en) | 2021-11-30 | 2021-11-30 | Message automatic replying method, device, equipment and storage medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN114172856A CN114172856A (en) | 2022-03-11 |
| CN114172856B true CN114172856B (en) | 2023-05-30 |
Family
ID=80481790
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202111446619.1A Active CN114172856B (en) | 2021-11-30 | 2021-11-30 | Message automatic replying method, device, equipment and storage medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN114172856B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN115102915B (en) * | 2022-06-21 | 2023-07-14 | 元心信息科技集团有限公司 | Information interaction method, device, electronic equipment and computer readable storage medium |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004086554A (en) * | 2002-08-27 | 2004-03-18 | Matsushita Electric Ind Co Ltd | Mobile terminal device and tracking system for the mobile terminal device |
| CN113707173A (en) * | 2021-08-30 | 2021-11-26 | 平安科技(深圳)有限公司 | Voice separation method, device and equipment based on audio segmentation and storage medium |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004094744A (en) * | 2002-09-02 | 2004-03-25 | Mazda Motor Corp | Sales support server, sales support system, sales support method and sales support program |
| JP2004206627A (en) * | 2002-12-26 | 2004-07-22 | Hitachi Kokusai Electric Inc | Mobile terminal |
| JP2010239242A (en) * | 2009-03-30 | 2010-10-21 | Nec Corp | Automatic answering machine, automatic answering method and automatic answering program |
| TW201042984A (en) * | 2009-05-22 | 2010-12-01 | Hon Hai Prec Ind Co Ltd | Method for automatically sending messages by a mobile phone |
| CN202488539U (en) * | 2012-03-30 | 2012-10-10 | 深圳市金立通信设备有限公司 | A smart phone scene mode automatic detection system |
| US10819662B2 (en) * | 2015-03-26 | 2020-10-27 | Airwatch, Llc | Detecting automatic reply conditions |
| US10192551B2 (en) * | 2016-08-30 | 2019-01-29 | Google Llc | Using textual input and user state information to generate reply content to present in response to the textual input |
| KR102683651B1 (en) * | 2017-01-17 | 2024-07-11 | 삼성전자주식회사 | Method for Producing the Message and the Wearable Electronic Device supporting the same |
| CN108882142A (en) * | 2018-05-25 | 2018-11-23 | 维沃移动通信有限公司 | Reminder message sending method and mobile terminal |
| CN108513018A (en) * | 2018-06-08 | 2018-09-07 | 出门问问信息科技有限公司 | Automatically replying incoming call method, apparatus and terminal side equipment |
| CN113039758B (en) * | 2018-12-19 | 2023-01-10 | 深圳市欢太科技有限公司 | Method and related device for automatically replying information |
| CN110784393A (en) * | 2019-10-25 | 2020-02-11 | 上海连尚网络科技有限公司 | Automatic message reply method and device |
| CN113037932B (en) * | 2021-02-26 | 2022-09-23 | 北京百度网讯科技有限公司 | Reply message generating method, apparatus, electronic device and storage medium |
- 2021-11-30 CN CN202111446619.1A patent/CN114172856B/en active Active
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2004086554A (en) * | 2002-08-27 | 2004-03-18 | Matsushita Electric Ind Co Ltd | Mobile terminal device and tracking system for the mobile terminal device |
| CN113707173A (en) * | 2021-08-30 | 2021-11-26 | 平安科技(深圳)有限公司 | Voice separation method, device and equipment based on audio segmentation and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN114172856A (en) | 2022-03-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN111723727B (en) | Cloud monitoring method and device based on edge computing, electronic equipment and storage medium | |
| CN113918361B (en) | Terminal control method, device, equipment and medium based on Internet of Things rule engine | |
| CN109587008B (en) | Method, device and storage medium for detecting abnormal flow data | |
| CN112560453B (en) | Voice information verification method and device, electronic equipment and medium | |
| CN106815125A (en) | A kind of log audit method and platform | |
| CN112463422A (en) | Internet of things fault operation and maintenance method and device, computer equipment and storage medium | |
| US20230410222A1 (en) | Information processing apparatus, control method, and program | |
| CN111460810A (en) | Crowd-sourced task spot check method and device, computer equipment and storage medium | |
| CN114781832A (en) | Course recommendation method and device, electronic equipment and storage medium | |
| CN111930963A (en) | Knowledge graph generation method and device, electronic equipment and storage medium | |
| CN113704616B (en) | Information pushing method and device, electronic equipment and readable storage medium | |
| CN112784102A (en) | Video retrieval method and device and electronic equipment | |
| CN110222790B (en) | User identity identification method and device and server | |
| CN112948223A (en) | Method and device for monitoring operation condition | |
| CN114172856B (en) | Message automatic replying method, device, equipment and storage medium | |
| CN114373209A (en) | Video-based face recognition method, device, electronic device and storage medium | |
| CN105978722A (en) | User attribute mining method and device | |
| CN117573491B (en) | A method, device, equipment and storage medium for locating performance bottlenecks | |
| CN112633170A (en) | Communication optimization method, device, equipment and medium | |
| CN112101191A (en) | Expression recognition method, device, equipment and medium based on frame attention network | |
| CN117375958A (en) | Web application system identification method and device and readable storage medium | |
| CN115119197B (en) | Wireless network risk analysis method, device, equipment and medium based on big data | |
| CN112650595A (en) | Communication content processing method and related device | |
| CN114268559B (en) | Directional network detection method, device, equipment and medium based on TF-IDF algorithm | |
| CN116662904A (en) | Method, device, computer equipment and medium for detecting variation of data type |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |