Detailed Description
For the purpose of promoting an understanding of the principles and advantages of the disclosure, reference will now be made in detail to the drawings. It is apparent that the embodiments described are only some, rather than all, of the embodiments of the disclosure. All other embodiments that a person of ordinary skill in the art would obtain, based on the embodiments in this disclosure and without making any inventive effort, fall within the scope of protection of this disclosure.
The terminology used in the embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to limit the disclosure. As used in this disclosure of embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The term "plurality" generally means at least two.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, and indicates that three relationships may exist. For example, "A and/or B" covers three cases: A exists alone, A and B exist together, and B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present disclosure to describe various items, those items should not be limited by these terms. These terms are only used to distinguish one item from another. For example, a first item may also be referred to as a second item, and similarly, a second item may also be referred to as a first item, without departing from the scope of embodiments of the present disclosure.
The word "if," as used herein, may be interpreted as "at the time of," "when," "in response to a determination," or "in response to a detection," depending on the context. Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined," "in response to a determination," "when (the stated condition or event) is detected," or "in response to detecting (the stated condition or event)," depending on the context.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional like elements in a product or apparatus comprising that element.
Alternative embodiments of the present disclosure are described in detail below with reference to the drawings.
Example 1
This embodiment of the disclosure is an embodiment of a method for retrieving recorded and broadcast video.
Embodiments of the present disclosure are described in detail below in conjunction with fig. 1.
When the video is recorded and broadcast, the embodiment of the disclosure provides two cameras: one camera collects the image video of the teaching teacher in front of the panoramic intelligent blackboard, and the other camera collects the courseware video taught on the panoramic intelligent blackboard.
The image video and the courseware video are synchronously acquired. Synchronous acquisition means that the image video and the courseware video are collected after being time-aligned based on the same clock. For example, at a time point T1, the panoramic intelligent blackboard presents a video image A1 of the image video and a video image B1 of the courseware video, camera N1 collects the image video, and camera N2 collects the courseware video. When camera N1 and camera N2 are aligned to the same clock, camera N1 collects video image A2 at time point T1 and stamps it with time stamp TT1, and camera N2 collects video image B2 at time point T1 and also stamps it with time stamp TT1. At this moment, video image A2 is consistent with video image A1, and video image B2 is consistent with video image B1. In other words, video images acquired synchronously at the same time point carry the same time stamp and maintain a consistent relationship. Synchronous acquisition avoids abnormal retrieval caused by time misalignment of the marks across multiple videos when retrieving information from them, and ensures the accuracy of the retrieved information.
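The synchronized acquisition described above can be sketched as follows. This is only an illustrative model, not part of the disclosure: the `Frame` type, the frame contents, and the alignment by equal time stamps are assumptions used to show how frames stamped from one shared clock line up at retrieval time.

```python
# Illustrative sketch: two cameras stamp frames from the same clock, so frames
# captured at the same time point carry the same time stamp and can be aligned.
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float  # stamped from the shared clock
    content: str

image_video = [Frame(1.0, "A2"), Frame(2.0, "A3")]       # camera N1
courseware_video = [Frame(1.0, "B2"), Frame(2.0, "B3")]  # camera N2

# Because both cameras share one clock, frames align by equal time stamps.
aligned = [
    (a.content, b.content)
    for a in image_video
    for b in courseware_video
    if a.timestamp == b.timestamp
]
print(aligned)  # [('A2', 'B2'), ('A3', 'B3')]
```

If the two cameras did not share a clock, the same comparison would pair frames recorded at different real moments, which is exactly the "time misalignment" the synchronous acquisition avoids.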
The most widely used type of courseware is the electronic presentation (PPT, short for PowerPoint). A PPT can present the prepared teaching content page by page in a slide show.
In the courseware video, each page of the courseware is called a courseware page. When a courseware page is displayed, its content remains unchanged until the page is turned; therefore, the embodiment of the disclosure refers to a video segment that displays the same page of content in the courseware video as a courseware segment. The video images in a courseware segment are referred to as courseware segment images. All courseware segment images in a courseware segment display the same content.
If the time period of a courseware segment in the courseware video is Ta, the video segment within the time period Ta in the synchronously acquired image video is called an image segment, and the audio within the image segment is called an audio segment.
The embodiment of the disclosure provides a retrieval method based on the recorded video, so that a lecture teacher or a lecture student can interact with the recorded video.
Step S101, acquiring key phrases in sentences input by teaching teachers or listening students.
The sentence can be a string of characters input by an input method or a section of voice input by a microphone. It will be appreciated by those skilled in the art that the input may be made in any practicable manner.
For example, the lecture explains Newton's second law of motion, and a listening student inputs a string of characters through an input method.
The keyword group comprises a plurality of keywords. The keywords refer to a plurality of core words in the sentence, and the core semantics of the sentence can be represented through the plurality of core words.
For example, continuing with the above example, the keywords are "Newton", "second", "motion", "law" and "formula", which form the keyword group.
Step S102, grading is carried out based on keywords in the keyword groups, and a first-level keyword group and a second-level keyword group are obtained.
According to the embodiment of the disclosure, the search is divided into two levels according to the characteristics of recorded and broadcast video, so that the video position required by the teaching teacher or the listening students can be accurately found. To support this two-level search, the keyword group is divided into a first-level keyword group and a second-level keyword group before retrieval.
In some embodiments, the grading is performed based on keywords in the keyword groups, and a first-level keyword group and a second-level keyword group are obtained, which includes the following steps:
And step S102-1, searching is performed in the grade data set based on the keywords in the keyword group, and grade marks of the keywords are obtained.
The hierarchical data set includes correspondence of keywords and hierarchical labels.
The level mark is either a first-level mark or a second-level mark.
Searching in the level data set based on the keywords in the keyword group to obtain the level marks of the keywords can be understood as follows: each keyword is taken from the keyword group, and each keyword is looked up in the level data set to obtain the level mark corresponding to the matched keyword. For example, the keywords are "Newton", "second", "motion", "law" and "formula". The records in the level data set are as follows:
| Keyword | Level mark |
| ------- | ---------- |
| Newton  | 1 |
| second  | 1 |
| motion  | 1 |
| law     | 1 |
| formula | 2 |
Here, a level mark of 1 denotes a first-level mark, and a level mark of 2 denotes a second-level mark.
In step S102-2, keywords with first level marks are combined to form a first level keyword group, and keywords with second level marks are combined to form a second level keyword group.
For example, continuing the above example, the first-level keyword group generated by the combination is "Newton", "second", "motion" and "law", and the second-level keyword group generated by the combination is "formula".
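Steps S102-1 and S102-2 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the level data set is modeled as a plain dictionary, and all names are examples.

```python
# Illustrative sketch of steps S102-1/S102-2: split a keyword group into a
# first-level and a second-level keyword group using the level data set.
# 1 = first-level mark, 2 = second-level mark (as in the table above).
LEVEL_DATA_SET = {
    "Newton": 1,
    "second": 1,
    "motion": 1,
    "law": 1,
    "formula": 2,
}

def grade_keywords(keyword_group):
    """Return (first_level_group, second_level_group) for the keyword group."""
    first_level, second_level = [], []
    for keyword in keyword_group:
        mark = LEVEL_DATA_SET.get(keyword)  # step S102-1: look up the level mark
        if mark == 1:
            first_level.append(keyword)     # step S102-2: combine by level mark
        elif mark == 2:
            second_level.append(keyword)
    return first_level, second_level

first_group, second_group = grade_keywords(["Newton", "second", "motion", "law", "formula"])
print(first_group)   # ['Newton', 'second', 'motion', 'law']
print(second_group)  # ['formula']
```

A keyword absent from the level data set is simply skipped here; how unmatched keywords are handled is not specified by the text above.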
Step S103, searching is carried out, based on the first-level keyword group, in a courseware information set comprising a plurality of first-type keyword groups, and the image segment identifier corresponding to the first-type keyword group matched with the first-level keyword group is obtained.
A first-type keyword group is a keyword group obtained from a courseware segment, and a courseware segment is a video segment obtained by dividing the courseware video with courseware pages as the unit.
In the courseware video, each page of the courseware is called a courseware page. When a courseware page is displayed, its content remains unchanged until the page is turned. The video images in a courseware segment are referred to as courseware segment images. It is understood that all courseware segment images in a courseware segment are identical.
The courseware information set comprises a corresponding relation between the first category of key word groups and the image fragment identification.
The aim of this step is to find, from the courseware information set, a first-type keyword group whose matching degree meets a preset matching condition, so as to obtain the image segment identifier corresponding to that first-type keyword group in the courseware information set.
In the embodiment of the disclosure, each image segment has a corresponding image information set, and the image segment identifier is used to indicate an image information set. For example, the image segment identifier is the name of the image information set, such as page_1_table or page_2_table.
Because courseware segments mainly display the important content and knowledge architecture of the teaching, the first-type keyword groups contain summary information of the teaching content. This step thereby achieves a rough positioning of the summary information of the teaching content based on the input question.
The first-level keyword group obtains the image fragment identification in a mode of matching with the first-class keyword group in the courseware information set, so that the adaptability of the search is improved, and the effectiveness of the search is ensured.
In some embodiments, searching in the courseware information set comprising a plurality of first-type keyword groups based on the first-level keyword group, and obtaining the image segment identifier corresponding to the first-type keyword group matched with the first-level keyword group, includes the following steps:
Step S103-1, matching the first-level keyword group with the first-class keyword groups in the courseware information set, and obtaining a first matching result of each first-class keyword group.
In an embodiment of the disclosure, the first matching result includes a first matching degree.
The first matching degree refers to the ratio of the first matching number to the number of first type keywords in the first type keyword group.
The first matching number refers to the number of keywords that the first-level keyword group and the first-type keyword group have in common.
For example, the first-level keyword group is "Newton", "second", "motion" and "law", and a first-type keyword group A consisting of "Newton", "second", "motion" and "law" exists in the courseware information set. Because the first-type keywords in group A are identical to the first-level keywords of the first-level keyword group, the first matching number is 4; the number of first-type keywords in group A is also 4, so the first matching degree is 100%.
Step S103-2, the first-type keyword group whose first matching result meets a preset first matching condition is determined to be the first-type keyword group matched with the first-level keyword group.
For example, the preset first matching condition is that the first matching degree is greater than or equal to 80%. Continuing the above example, the first-type keyword group A is determined to be the matched first-type keyword group.
And step S103-3, searching the courseware information set based on the first class keyword group matched with the first class keyword group, and obtaining a corresponding image fragment identifier.
Because the courseware information set includes correspondences between first-type keyword groups and image segment identifiers, the corresponding image segment identifier can be retrieved from the courseware information set through the first-type keyword group matched with the first-level keyword group. For example, continuing the above example, in the courseware information set, the first-type keyword group A matched with the first-level keyword group corresponds to the image segment identifier page_1_table.
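Steps S103-1 through S103-3 can be sketched as follows. This is an illustrative sketch under stated assumptions: the courseware information set is modeled as a dictionary from first-type keyword groups to image segment identifiers, and the names, contents, and the 80% threshold are examples only.

```python
# Illustrative sketch of steps S103-1..S103-3: match the first-level keyword
# group against each first-type keyword group in the courseware information
# set; return the image segment identifier of a group whose first matching
# degree meets the preset first matching condition.
courseware_info_set = {
    ("Newton", "second", "motion", "law"): "page_1_table",
    ("energy", "conservation", "law"): "page_2_table",
}

def retrieve_image_segment_id(first_level_group, info_set, threshold=0.8):
    for first_type_group, segment_id in info_set.items():
        # First matching number: keywords the two groups have in common.
        match_count = len(set(first_level_group) & set(first_type_group))
        # First matching degree: ratio to the size of the first-type group.
        match_degree = match_count / len(first_type_group)
        if match_degree >= threshold:  # preset first matching condition
            return segment_id
    return None

print(retrieve_image_segment_id(["Newton", "second", "motion", "law"], courseware_info_set))
# page_1_table
```

The sketch returns the first qualifying group; whether ties or multiple matches are possible and how they are resolved is not specified by the text above.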
It will be appreciated by those skilled in the art that the method of step 103 may be implemented in any practicable manner.
Step S104, searching is carried out, based on the second-level keyword group, in the image information set indicated by the image segment identifier and comprising a plurality of second-type keyword groups, and the matching key time point corresponding to the second-type keyword group matched with the second-level keyword group is obtained.
A second-type keyword group is a keyword group obtained based on the audio segment in an image segment, and an image segment is a video segment obtained by dividing the image video based on the time period corresponding to a courseware segment.
The audio in an image segment is referred to as an audio segment.
The time period corresponding to a courseware segment refers to the time period from the starting time point to the ending time point of the courseware segment. If the time period of a courseware segment in the courseware video is Ta, the video segment within the time period Ta in the synchronously acquired image video is called an image segment. It can be understood that a synchronously collected courseware segment and image segment have the same recording time, and the information generated from them has an association relationship.
The image information set comprises the corresponding relation between the second category of key word groups and key time points.
The key time point refers to the starting time point of the sentence audio that includes a second-type keyword of the second-type keyword group.
Optionally, the second category of keyword groups is keyword groups obtained based on the keyword sentence audio of the audio clip in the image clip. The key time point is the start time point of the key sentence audio.
The key sentence audio includes sentence audio that involves teaching emphasis in the audio clip.
Because the image segments mainly explain in detail the teaching content displayed by the images of the courseware segments, the second-type keyword groups contain detailed information of the teaching content. This step thereby achieves accurate positioning of the detailed information of the teaching content based on the input question.
Matching the second-level keyword group with the second-type keyword groups in the image information set to obtain the key time point at which to play the video improves the adaptability of the search and ensures its effectiveness.
In some embodiments, searching in the image information set indicated by the image segment identifier and comprising a plurality of second-type keyword groups based on the second-level keyword group, and obtaining the matching key time point corresponding to the second-type keyword group matched with the second-level keyword group, includes the following steps:
Step S104-1, the second-level keyword groups are matched with the second-class keyword groups in the image information set, and a second matching result of each second-class keyword group is obtained.
In an embodiment of the disclosure, the second matching result includes a second matching degree.
The second matching degree refers to the ratio of the second matching number to the number of the second category keywords in the second category keyword group.
The second matching number refers to the number of keywords that the second-level keyword group and the second-type keyword group have in common.
For example, the second-level keyword group is "formula", and a second-type keyword group B consisting of "formula" exists in the image information set. Because the second-type keyword in group B is the same as the second-level keyword of the second-level keyword group, the second matching number is 1; the number of second-type keywords in group B is also 1, so the second matching degree is 100%.
Step S104-2, the second-type keyword group whose second matching result meets a preset second matching condition is determined to be the second-type keyword group matched with the second-level keyword group.
For example, the preset second matching condition is that the second matching degree is greater than or equal to 80%. Continuing the above example, the second-type keyword group B is determined to be the second-type keyword group matched with the second-level keyword group.
And step S104-3, searching the image information set based on the second category keyword group matched with the second category keyword group, and obtaining a corresponding matched keyword time point.
Because the image information set includes correspondences between second-type keyword groups and key time points, the corresponding key time point (i.e., the matching key time point) can be retrieved from the image information set through the second-type keyword group matched with the second-level keyword group. For example, continuing the above example, in the image information set, the second-type keyword group B matched with the second-level keyword group corresponds to the key time point "6 minutes 30 seconds".
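Steps S104-1 through S104-3 can be sketched in the same style. Again this is illustrative only: the image information sets are modeled as dictionaries keyed by the image segment identifier, and the contents (including the "6 minutes 30 seconds" time point) mirror the example above rather than any real data.

```python
# Illustrative sketch of steps S104-1..S104-3: match the second-level keyword
# group against the image information set indicated by the image segment
# identifier; return the key time point of the matched second-type group.
image_info_sets = {
    "page_1_table": {("formula",): "6 minutes 30 seconds"},
}

def retrieve_key_time_point(second_level_group, segment_id, threshold=0.8):
    info_set = image_info_sets[segment_id]
    for second_type_group, time_point in info_set.items():
        # Second matching number and second matching degree, as defined above.
        match_count = len(set(second_level_group) & set(second_type_group))
        match_degree = match_count / len(second_type_group)
        if match_degree >= threshold:  # preset second matching condition
            return time_point          # the matching key time point
    return None

print(retrieve_key_time_point(["formula"], "page_1_table"))
# 6 minutes 30 seconds
```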
It will be appreciated by those skilled in the art that the method of step 104 may be implemented in any practicable manner.
Step S105, the image video and/or the courseware video starts playing based on the matching key time point.
According to the embodiment of the disclosure, the keyword group in the sentence input by the teaching teacher or a listening student is divided by level into a first-level keyword group and a second-level keyword group, and the corresponding two-level information sets are then searched through the first-level keyword group and the second-level keyword group respectively, progressing from rough positioning to detailed positioning, so that the play time point related to the input sentence in the recorded video is found. This ensures the accuracy and consistency of recorded-video retrieval and realizes interaction between the teaching teacher or listening students and the recorded and broadcast video.
Example 2
Since this embodiment of the present disclosure is further optimized on the basis of the above embodiment, explanations of the same method steps and of names with the same meaning are the same as in the above embodiment and are not repeated here.
The embodiment of the disclosure provides a method for generating a courseware information set, as shown in fig. 2, the method further comprises the following steps:
Step S201, each courseware segment is acquired in the courseware video.
The embodiment of the disclosure refers to a video clip in a courseware video, which displays the same courseware page content, as a courseware clip.
Step S202, generating an image fragment identifier indicating the image information set according to the courseware fragment.
In the embodiment of the disclosure, each image segment has a corresponding image information set, and the image segment identifier is used to indicate an image information set. For example, the image segment identifier is the name of the image information set, such as page_1_table or page_2_table.
Step S203, obtaining corresponding first class keyword groups based on each courseware segment.
Because all courseware segment images in a courseware segment are the same, obtaining the corresponding first-type keyword group based on each courseware segment can be understood as obtaining the corresponding first-type keyword group based on any courseware segment image in the courseware segment.
And carrying out text semantic analysis on courseware fragment images in each courseware fragment based on the text semantic analysis model to obtain a corresponding first type of key phrase.
The text semantic analysis model is an analysis model generated by taking historical courseware segment images as training samples and performing semantic recognition training on the text in those images. The text semantic analysis model can analyze the text in a courseware segment image and extract first-type keywords from that text according to its semantics to form a first-type keyword group.
And the text semantic analysis is carried out on the courseware fragment images through the text semantic analysis model, so that the accuracy of text semantic analysis in the images is improved.
Step S204, generating the courseware information set based on each first type of keyword group and the image fragment identification corresponding to the first type of keyword group.
Because the first-type keyword groups and the image segment identifiers in the courseware information set have a correspondence, all the first-type keyword groups obtained from the courseware video and the image segment identifiers corresponding to them are stored, and the courseware information set is generated.
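Steps S201 through S204 can be sketched as follows. This is a hedged illustration: `extract_first_type_keywords` is a hypothetical placeholder standing in for the text semantic analysis model of step S203, and the `page_N_table` naming scheme simply follows the example identifiers above.

```python
# Illustrative sketch of steps S201..S204: build the courseware information
# set as a mapping from each courseware segment's first-type keyword group
# to its image segment identifier.
def extract_first_type_keywords(courseware_segment_image):
    # Hypothetical placeholder for the text semantic analysis model (S203).
    return tuple(courseware_segment_image["keywords"])

def build_courseware_info_set(courseware_segments):
    info_set = {}
    for index, segment in enumerate(courseware_segments, start=1):
        segment_id = f"page_{index}_table"                       # step S202
        first_type_group = extract_first_type_keywords(segment)  # step S203
        info_set[first_type_group] = segment_id                  # step S204
    return info_set

segments = [{"keywords": ["Newton", "second", "motion", "law"]}]
print(build_courseware_info_set(segments))
# {('Newton', 'second', 'motion', 'law'): 'page_1_table'}
```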
Example 3
Since this embodiment of the present disclosure is further optimized on the basis of the above embodiments, explanations of the same method steps and of names with the same meaning are the same as in the above embodiments and are not repeated here.
The embodiment of the disclosure provides a method for generating an image information set, as shown in fig. 3, the method further comprises the following steps:
In step S301, the image segments corresponding to the courseware segments are obtained in the image video based on each time period.
The time period refers to a time period from a starting time point to an ending time point of the courseware segment.
Step S302, a plurality of key sentence audios are obtained based on the audio clips in each image clip.
The key sentence audio includes sentence audio that involves teaching emphasis in the audio clip.
Step S303, an original keyword group and the key time point at which the key sentence audio occurs are obtained based on each key sentence audio.
And carrying out audio semantic analysis on the key sentence audio based on the audio semantic analysis model to obtain an original key phrase.
The audio semantic analysis model is an analysis model generated by taking historical key sentence audio as training samples and performing semantic recognition training on the speakers' sentence audio therein. The audio semantic analysis model can analyze the key sentence audio in an audio segment and extract the original keywords, i.e., the original keyword group, from the key sentence audio.
According to the embodiment of the disclosure, the audio semantic analysis is carried out on the key sentence audio through the audio semantic analysis model, so that the accuracy of the audio semantic analysis is improved.
Step S304, the first-type keywords of the corresponding courseware segment's first-type keyword group are removed from the original keyword group of each key sentence audio, and the second-type keyword group is obtained.
For the original keyword group belonging to the same key sentence audio, the keywords that also appear in the first-type keyword group of the corresponding courseware segment are deleted, which reduces duplicated information in the second-level search and improves search efficiency.
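Step S304 is essentially a set difference that preserves the order of the original keyword group. A minimal sketch, with illustrative data only:

```python
# Illustrative sketch of step S304: remove the corresponding courseware
# segment's first-type keywords from the original keyword group of a key
# sentence audio, leaving the second-type keyword group.
def derive_second_type_group(original_group, first_type_group):
    first_type = set(first_type_group)
    # Keep order; drop keywords already used for the first-level search.
    return [kw for kw in original_group if kw not in first_type]

original = ["Newton", "second", "law", "formula"]
first_type = ["Newton", "second", "motion", "law"]
print(derive_second_type_group(original, first_type))
# ['formula']
```

Only "formula" survives, which is consistent with the example where the second-type keyword group contains "formula" while "Newton", "second", "motion" and "law" belong to the first-type keyword group.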
In step S305, an image information set corresponding to the image segments is generated based on the respective second category keyword groups and the key time points corresponding to the second category keyword groups of each image segment.
Because the second-type keyword groups in an image information set have a correspondence with the key time points, all the second-type keyword groups obtained from the image video and the key time points corresponding to them are stored into the corresponding image information sets, so that the image information set corresponding to each image segment is generated. That is, each image segment has a corresponding image information set.
Example 4
Since this embodiment of the present disclosure is further optimized on the basis of the above embodiments, explanations of the same method steps and of names with the same meaning are the same as in the above embodiments and are not repeated here.
The disclosed embodiments provide a method of generating a hierarchical data set. As shown in fig. 4, the specific method further includes the following steps:
Step S401a, obtaining a first category keyword in a first category keyword group based on the courseware information set.
Based on the embodiment, the first kind of keywords are acquired from the courseware information set.
Step S402a, storing the first type keyword in the level data set, and using a first level tag as a level tag of the first type keyword.
For example, the first-type keywords "Newton", "second", "motion" and "law" are obtained from the courseware information set, and the following records are stored in the level data set:
| Keyword | Level mark |
| ------- | ---------- |
| Newton  | 1 |
| second  | 1 |
| motion  | 1 |
| law     | 1 |
Here, a level mark of 1 denotes a first-level mark.
The specific method further comprises the following steps:
Step S401b, obtaining a second category keyword in the second category keyword group based on the image information set.
The embodiment of the present disclosure obtains the second category keywords from the image information set on the basis of the above embodiment.
Step S402b, storing the second category keywords in the level data set, and adopting second level marks as level marks of the second category keywords.
For example, the second-type keyword "formula" is obtained from the image information set, and the following record is stored in the level data set:
| Keyword | Level mark |
| ------- | ---------- |
| formula | 2 |
Here, a level mark of 2 denotes a second-level mark.
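Steps S401a/S402a and S401b/S402b can be sketched together as follows. The data structures are assumptions: the keyword groups are passed in as plain tuples, and a keyword already stored with the first-level mark keeps that mark (the text above does not specify how overlaps are resolved).

```python
# Illustrative sketch of generating the level data set: store first-type
# keywords with the first-level mark 1 (S401a/S402a) and second-type
# keywords with the second-level mark 2 (S401b/S402b).
def build_level_data_set(first_type_groups, second_type_groups):
    level_data_set = {}
    for group in first_type_groups:          # from the courseware information set
        for keyword in group:
            level_data_set[keyword] = 1      # first-level mark
    for group in second_type_groups:         # from the image information sets
        for keyword in group:
            level_data_set.setdefault(keyword, 2)  # second-level mark
    return level_data_set

level_set = build_level_data_set(
    [("Newton", "second", "motion", "law")],
    [("formula",)],
)
print(level_set["Newton"])   # 1
print(level_set["formula"])  # 2
```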
Example 5
The disclosure further provides an embodiment of a device adapted to the above embodiments, which is configured to implement the method steps described in the above embodiments. Explanations of names with the same meaning are the same as in the above embodiments, and the device has the same technical effects, so they are not repeated here.
As shown in fig. 5, the present disclosure provides a retrieval device 500 for recording and playing video, including:
An obtaining unit 501, configured to obtain a keyword group in a sentence input by a lecture teacher or a lecture student;
The grading unit 502 is configured to perform grading based on keywords in the keyword groups, and obtain a first-level keyword group and a second-level keyword group;
A first-level search unit 503, configured to perform a search, based on the first-level keyword group, in a courseware information set including a plurality of first-type keyword groups, and obtain the image segment identifier corresponding to the first-type keyword group matched with the first-level keyword group, where a first-type keyword group is a keyword group obtained from a courseware segment, and a courseware segment is a video segment obtained by dividing the courseware video with courseware pages in the courseware video as the unit;
a second-level search unit 504, configured to perform, based on the second-level keyword group, a search in the image information set indicated by the image segment identifier and including a plurality of second-type keyword groups, and obtain the matching key time point corresponding to the second-type keyword group matched with the second-level keyword group, where a second-type keyword group is a keyword group obtained based on the audio segment in an image segment, and an image segment is a video segment obtained by dividing the image video based on the time period corresponding to a courseware segment;
and the playing unit 505 is configured to start playing the video and/or the courseware video based on the matching key time point.
Optionally, the primary search unit 503 includes:
The first matching subunit is used for matching the first-level keyword group with first-class keyword groups in the courseware information set to obtain a first matching result of each first-class keyword group;
The first determining subunit is configured to determine that the first-type keyword group whose first matching result meets a preset first matching condition is the first-type keyword group matched with the first-level keyword group;
and the first retrieval subunit is used for retrieving the courseware information set based on the first class keyword group matched with the first class keyword group and obtaining a corresponding image fragment identifier.
Optionally, the secondary search unit 504 includes:
The second matching subunit is used for matching the second-level keyword group with the second-class keyword groups in the image information set to obtain a second matching result of each second-class keyword group;
the second determining subunit is configured to determine that the second-type keyword group whose second matching result meets a preset second matching condition is the second-type keyword group matched with the second-level keyword group;
and the second retrieval subunit is used for retrieving the image information set based on the second category keyword group matched with the second category keyword group and obtaining a corresponding matched keyword time point.
Optionally, the grading unit 502 includes:
A third retrieval subunit, configured to perform retrieval in a level dataset based on a keyword in the keyword group, and obtain a level tag of the keyword;
And the combining subunit is used for combining the keywords with the first-level marks to form a first-level keyword group and combining the keywords with the second-level marks to form a second-level keyword group.
Optionally, the device further comprises a level data set generation unit;
The level data set generation unit includes:
the first acquisition subunit is used for acquiring the first-category keywords in the first-category keyword groups based on the courseware information set;
The first storing subunit is used for storing the first-category keywords into the level dataset, adopting a first-level mark as the level mark of the first-category keywords;
and/or,
The second acquisition subunit is used for acquiring the second-category keywords in the second-category keyword groups based on the image information set;
And the second storing subunit is used for storing the second-category keywords into the level dataset, adopting a second-level mark as the level mark of the second-category keywords.
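Building the level dataset from the two information sets might look like the following sketch; the data shapes (pair list for the courseware information set, per-fragment dict for the image information set) are modeling assumptions, not structures given in the text.

```python
def build_level_dataset(courseware_info_set=None, image_info_set=None):
    """Store first-category keywords with a first-level mark and/or
    second-category keywords with a second-level mark."""
    level_dataset = {}
    if courseware_info_set:                      # first acquisition/storing subunits
        for group, _fragment_id in courseware_info_set:
            for kw in group:
                level_dataset[kw] = "first"
    if image_info_set:                           # second acquisition/storing subunits
        for entries in image_info_set.values():
            for group, _time_point in entries:
                for kw in group:
                    level_dataset[kw] = "second"
    return level_dataset
```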
Optionally, the apparatus further comprises a first generation unit;
the first generation unit includes:
The courseware segment acquisition subunit is used for acquiring each courseware segment in the courseware video;
The identifier generation subunit is used for generating, for each courseware segment, an image fragment identifier indicating the corresponding image information set;
The first obtaining subunit is used for obtaining the corresponding first-category keyword group based on each courseware segment;
the first generation subunit is used for generating the courseware information set based on each first-category keyword group and the image fragment identifier corresponding to that first-category keyword group.
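A sketch of the first generation unit follows. The keyword-extraction step is entirely unspecified in the text, so `extract_keywords` below is a hypothetical placeholder (whitespace tokenization of a text segment), and the `frag-N` identifier format is likewise an illustrative assumption.

```python
def extract_keywords(segment):
    """Hypothetical stand-in for the unspecified keyword extraction."""
    return set(segment.split())

def generate_courseware_info_set(courseware_segments):
    """Build the courseware information set as (keyword_group, fragment_id) pairs."""
    info_set = []
    for index, segment in enumerate(courseware_segments):
        fragment_id = f"frag-{index}"            # identifier generation subunit
        first_category_group = extract_keywords(segment)
        info_set.append((first_category_group, fragment_id))
    return info_set
```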
Optionally, the apparatus further comprises a second generating unit;
the second generating unit includes:
the image fragment obtaining subunit is used for obtaining the image fragment corresponding to each courseware segment based on the time periods in the recorded video;
The audio acquisition subunit is configured to acquire a plurality of key-sentence audio clips from the audio of each image fragment;
The original information acquisition subunit is used for acquiring, based on each key-sentence audio clip, an original keyword group and the key time point at which the key-sentence audio occurs;
the second obtaining subunit is used for removing the first-category keywords of the corresponding courseware segment from the original keyword group of each key-sentence audio clip to obtain a second-category keyword group;
And the second generation subunit is used for generating the image information set of the corresponding image fragment based on each second-category keyword group of the image fragment and the key time point corresponding to that second-category keyword group.
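The second obtaining and second generation subunits for one image fragment might be sketched as follows; recovering the original keyword groups and key time points from the key-sentence audio (speech recognition) is out of scope, so they are assumed to be given as input pairs.

```python
def generate_image_info_set(key_sentences, first_category_group):
    """Build one fragment's image information set.
    key_sentences: list of (original_keyword_group, key_time_point) pairs."""
    entries = []
    for original_group, time_point in key_sentences:
        # Second obtaining subunit: strip the fragment's first-category
        # keywords from the original keyword group.
        second_category_group = set(original_group) - set(first_category_group)
        if second_category_group:   # dropping empty groups is an assumption
            entries.append((second_category_group, time_point))
    return entries
```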
According to the embodiments of the disclosure, the keyword group in a sentence input by the teaching teacher or a listening student is graded into a first-level keyword group and a second-level keyword group, the corresponding two levels of information sets are then retrieved with the first-level keyword group and the second-level keyword group respectively, and positioning proceeds from coarse to fine, so that the play time point related to the input sentence is found in the recorded video. This ensures the accuracy and consistency of recorded-video retrieval and realizes interaction between the teaching teacher, the listening students, and the recorded video.
Example 6
As shown in fig. 6, the present embodiment provides an electronic device, which includes at least one processor, and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method steps described in the above embodiments.
Example 7
The disclosed embodiments provide a non-transitory computer storage medium storing computer executable instructions that perform the method steps described in the embodiments above.
Example 8
Referring now to fig. 6, a schematic diagram of an electronic device suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 608 including, for example, magnetic tape, hard disk, etc.; and communication devices 609. The communication devices 609 may allow the electronic device to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 shows an electronic device having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber optic cable, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be included in the electronic device or may exist alone without being incorporated into the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.