Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Embodiments of the present invention provide an information display method, an information display apparatus, an electronic device, and a storage medium, so as to solve the problem in the related art that the efficiency of acquiring a target image or video is low.
The information display method provided by the embodiments of the invention can be applied to at least the following two application scenarios, which are explained in turn below.
First, the information display method provided by the embodiments of the invention can be applied to a scenario in which a user searches for required videos and images in an album application.
As shown in fig. 1, the user may select a target tag among the shooting attributes, weather, location where the shooting file was acquired, mood index, and people provided in the album application. When the target tag is the weather tag, the second-level tags "sunny", "cloudy", and "overcast" corresponding to the weather tag are acquired; then, the target multimedia files (such as videos and images) in the album application are classified and displayed according to the display mode of the second-level tags.
Therefore, the user can view the target multimedia files in groups according to the first-level and second-level tags. This greatly enriches the interaction between the user and the electronic device, and allows the user to quickly filter the required target multimedia files according to the second-level tags corresponding to the first-level tag, improving filtering efficiency.
In addition, the embodiment of the invention can also be applied to a scene for searching the required target audio in the audio application program.
The user may select a target tag among the shooting attributes, weather, location where the shooting file was acquired, mood index, and people provided in the audio application. When the target tag is the tag for the location where the shooting file was acquired, the second-level tags for city name A and country name B corresponding to that location are acquired; then, the audio in the audio application is classified and displayed according to the display modes of these two second-level tags.
Therefore, the user can view the audio in groups according to the second-level tags. This helps the user recall audio heard at a certain place and find the required audio even after forgetting its exact name, effectively helping the user acquire the required audio.
Here, the method provided by the embodiment of the present invention may be applied to any scene in which information required by a user needs to be filtered, in addition to the above-mentioned related scenes. The method provided by the embodiment of the invention can effectively improve the efficiency of information screening.
It should be noted that the electronic device in the embodiments of the present invention may include at least one of the following: a mobile phone, a tablet computer, a smart wearable device, or another device with a display function.
Based on the application scenario, the following describes in detail an information display method provided by an embodiment of the present invention.
Fig. 2 is a flowchart of an information display method according to an embodiment of the present invention.
As shown in fig. 2, the information display method may specifically include steps 210 to 230, which are specifically as follows:
step 210, receiving a first input of a user to a first-level tag of the N tags.
Wherein N is a positive integer. Additionally, in embodiments of the present invention, the type of the first-level tag comprises at least one of: shooting attributes, weather, the location where the shooting file was acquired, mood index, and people. Further, the shooting attributes include at least one of: the time at which the file was captured, the file name, the file size, and the file duration.
For example: the capture time in the shooting attributes may be the time at which the user shot the file or the time at which the user downloaded the file; the file name may be a name acquired by the electronic device in a preset manner, or a name input by the user; the weather may be the weather at the user's location when the file was shot, or the weather appearing in the file's content; and the mood index may be set based on the user's mood when shooting the file.
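The per-file metadata described above can be sketched as a simple record type. This is a minimal illustration only; the class name, field names, and units are assumptions, not part of the embodiment.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MediaFile:
    """Metadata for one multimedia file; all field names are illustrative."""
    name: str
    size_mb: float                     # file size in megabytes
    duration_s: float                  # duration in seconds (0 for still images)
    captured_at: datetime              # time the file was shot or downloaded
    weather: Optional[str] = None      # e.g. "sunny", "cloudy"
    location: Optional[str] = None     # where the shooting file was acquired
    mood_index: Optional[int] = None   # 0 (sad) .. 10 (happy)

# The five first-level tag types named in the text.
FIRST_LEVEL_TAGS = ["shooting attributes", "weather", "location", "mood index", "people"]
```

Each first-level tag would then be derived from one or more of these fields.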
Step 220, in response to the first input, displaying the target multimedia file in a display mode matched with the second-level tag corresponding to the first-level tag.
Wherein each of the first level label and the second level label indicates a display mode, respectively.
For example: the file size may correspond to three second-level tags: less than or equal to 10 MB, greater than 10 MB and less than or equal to 100 MB, and greater than 100 MB; the weather may correspond to second-level tags such as sunny, cloudy, and rainy.
Based on this, in one possible embodiment, in response to a first input, a second level label is displayed;
receiving a second input of a user to a target second-level label in the second-level labels;
and responding to the second input, and displaying the target multimedia file according to the display mode matched with the target second-level label.
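The two-input flow above can be sketched as follows. This is a minimal sketch under stated assumptions: the tag mapping, function names, and the tuple shape of a file are all illustrative, not the embodiment's actual interfaces.

```python
def second_level_tags(first_level: str) -> list[str]:
    """Illustrative mapping from a first-level tag to its second-level tags."""
    mapping = {
        "weather": ["sunny", "cloudy", "rainy"],
        "file size": ["<=10 MB", ">10 MB and <=100 MB", ">100 MB"],
    }
    return mapping.get(first_level, [])

def on_first_input(first_level: str) -> list[str]:
    # Step 1: in response to the first input, the second-level tags are shown.
    return second_level_tags(first_level)

def on_second_input(files, target_tag, tag_of):
    # Step 2: in response to the second input on a target second-level tag,
    # only the files matching that tag are kept for display.
    return [f for f in files if tag_of(f) == target_tag]
```

For example, selecting the "weather" first-level tag yields its second-level tags, and selecting "sunny" then filters the files to those tagged sunny.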
Further, in another possible embodiment, when the first-level tag is a person tag, the method further comprises, before displaying the second-level tags:
receiving a third input of a target face image in the at least one face image from a user;
displaying at least one second level label corresponding to the first level label, comprising:
displaying at least one second-level label corresponding to the target face image; wherein each second level tag corresponds to a person attribute.
Therefore, the user can select a target face image in a video file; the files in the application are retrieved and segmented according to the target face image, with the happy and sad expressions among the person attributes used as second-level tags, so that the user can quickly filter and view the related target multimedia files.
It should be noted that, to facilitate viewing by the user, the method provided in the embodiments of the present invention may display the target multimedia files in a classified manner in order of their durations. Thus, in the embodiments of the invention, in response to a first operation in which a user selects a first-level tag (for example, the person tag) in a target application (for example, an album), a plurality of second-level tags corresponding to the first-level tag (for example, whether the expression of the face image corresponding to the person tag is happy or sad) are acquired, and the target multimedia files in the target application are displayed based on the display mode of each second-level tag. Therefore, when a user wants to view a required target file (such as an image or a video), the target files corresponding to the second-level tags can be displayed simply by selecting the first-level tag, without searching through a huge number of photos or videos one by one. The method provided by the embodiments of the invention can thus quickly filter and display the target files required by the user, improving the efficiency of acquiring target files while ensuring filtering accuracy.
To facilitate understanding of the method provided by the embodiments of the present invention, the following illustrates the information display method with an example in which a user acquires a target multimedia file in an album application, and the file is displayed according to the display modes of the second-level tags corresponding to the first-level tag types of shooting attributes, weather, location where the file was acquired, and mood index.
Fig. 3 is a flowchart of an information display method based on an album application according to an embodiment of the present invention.
As shown in fig. 3, the method may include steps 310-350, as follows:
When the user opens the album application, the electronic device displays the corresponding album interface. For convenience of description, the default interface in this embodiment is shown in fig. 4, in which multimedia files (e.g., video information and image information) sorted by time are displayed.
Step 310, receiving a preset operation of the user on a video control in the album interface.
As shown in fig. 4, a preset operation of a user for a video control of a first area 40 in the album interface is received.
Step 320, in response to the preset operation, filtering the video information in the album application and displaying it according to the type of the first-level tag (refer to fig. 5). Here, the first-level tag type displayed by default may be the capture time among the shooting attributes.
Further, in response to the preset operation, the storage path and capture time of each piece of video information are acquired in turn; a video thumbnail corresponding to each piece of video information (which may be a certain frame of the video, generally the first frame, used as the icon for the video) is read according to its storage path; and the video thumbnails are displayed at the corresponding positions, separated along a timeline, as specifically shown in fig. 5.
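The timeline grouping just described can be sketched as follows; the function name and the (storage path, capture time) tuple shape are illustrative assumptions, and reading the actual thumbnail from each path is omitted.

```python
from collections import defaultdict
from datetime import datetime

def group_by_day(videos):
    """Group (storage_path, captured_at) pairs into a day-keyed timeline,
    most recent day first, for separated display along a timeline."""
    timeline = defaultdict(list)
    for path, when in videos:
        timeline[when.date()].append(path)  # thumbnail would be read from path
    return dict(sorted(timeline.items(), key=lambda kv: kv[0], reverse=True))
```

Each key of the returned mapping corresponds to one day-separated section of the timeline shown in fig. 5.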
Step 330, receiving a first input of a user to a first level tag of the N tags.
Here, a plurality of first level tags may be directly displayed in the display interface shown in fig. 5; alternatively, a user operation of the grouping control 50 in the display interface shown in fig. 5 is received, and a plurality of first-level labels are displayed in the display interface.
Wherein the type of the first-level tag comprises at least one of: shooting attributes, weather, the location where the shooting file was acquired, mood index, and people. Further, the shooting attributes include at least one of: the time at which the file was captured (as shown in fig. 5), the file name, the file size, and the file duration.
For example: an operation of the user on the grouping control is received, and in response, a plurality of first-level tags are displayed. The capture time, file name, file size, and file duration can be obtained when the video file is read and used as the basis for classification.
In addition, in one example, a sorting option for each first-level tag may also be displayed in the display interface; as shown in fig. 5, a sorting option is added in the area of the first-level tag. The sorting option may be displayed as an up or down arrow. The down arrow sorts capture time from most recent to oldest, file names in lexicographic order, file sizes from small to large, file durations from short to long, and mood index from 10 to 0; the up arrow reverses each ordering. The album files are re-sorted whenever the arrow is toggled.
In another example, when the user enters the display interface shown in fig. 5 for the first time, the interface by default displays the target multimedia files according to the display mode of the second-level tags corresponding to time 51, and the area of the selected option is displayed differently from the unselected areas to indicate that the time first-level tag is currently selected. After time is tapped again, the arrow turns upward and the target multimedia files are arranged in reverse chronological order, from oldest to newest. The other buttons operate and display on the same principle and are not described again here.
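The arrow-toggle sorting in the two examples above reduces to one parameterized sort. This is a minimal sketch; `sort_files` and the tuple attributes are illustrative names, not the embodiment's API.

```python
def sort_files(files, key, ascending=True):
    """Sort files by the chosen attribute; flipping `ascending` mirrors
    toggling a sorting option's up/down arrow."""
    return sorted(files, key=key, reverse=not ascending)

# Illustrative (name, size_mb) records; both orderings from the one function.
demo = [("b.mp4", 50), ("a.mp4", 5), ("c.mp4", 500)]
by_name = sort_files(demo, key=lambda f: f[0])                        # down arrow: lexicographic
by_size_desc = sort_files(demo, key=lambda f: f[1], ascending=False)  # up arrow: large to small
```

Any attribute (capture time, name, size, duration, mood index) can serve as the `key`, so a single code path handles every sorting option.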
Step 340, in response to the first input, obtaining a second level label corresponding to the first level label.
Step 350, displaying the target multimedia file according to the display mode of each of the plurality of second-level tags. Wherein each of the first-level tag and the second-level tags indicates a display mode.
The following describes the display of the target multimedia file according to the display mode of the second-level tag.
(1) The second-level tags corresponding to time 61 include: a timeline of days, weeks, months, or years. As shown in fig. 6, the plurality of video information is displayed in order from most recent to oldest, with a second-level tag for each day.
(2) The second-level tags corresponding to name 71 include: the lexicographic order of the file names (e.g., by first letter). As shown in fig. 7, when an operation of selecting the file name is received, the plurality of video information is displayed sorted by the first letters of the file names.
(3) The second-level tags corresponding to the file size 81 include: at least one preset threshold. As shown in fig. 8, when an operation of selecting the file size is received, the plurality of video information is displayed in a classified manner according to the at least one preset threshold, that is, according to a second-level tag of less than or equal to 10 MB, a second-level tag of greater than 10 MB and less than or equal to 100 MB, and a second-level tag of greater than 100 MB.
(4) The second-level tags corresponding to the file duration 91 include: at least one preset duration threshold. As shown in fig. 9, when an operation of selecting the file duration is received, the plurality of video information is displayed in a classified manner according to a second-level tag of less than or equal to 1 minute, a second-level tag of greater than 1 minute and less than or equal to 10 minutes, and a second-level tag of greater than 10 minutes and less than or equal to 60 minutes.
(5) The second-level tags corresponding to the mood index 101 include: at least one index level N (in the embodiments of the present invention, N is an integer from 0 to 10, where 0 represents sadness and 10 represents joy; the level may be set by the user and defaults to 5 when the user does not set it). As shown in fig. 10, when an operation of selecting the mood index is received, the plurality of video information is classified and displayed in order of index level from 0 to 10.
It should be noted that the index level of a piece of video information may be obtained by prompting the user to set it when shooting (or downloading) the video.
(6) The second-level tags corresponding to weather 111 include tags such as sunny, cloudy, and rainy. When an operation of selecting the weather is received, the plurality of video information may be classified and displayed according to the plurality of second-level tags, as shown in fig. 11.
It should be noted that these second-level tags may be determined according to at least one of the following: the weather information involved in the content of a video, or the weather information of the geographic location where the video was acquired.
(7) The second-level tags corresponding to the location 121 where the shooting file was acquired include: country, province (or state), city, region, and the like. As shown in fig. 12, when an operation of selecting the location is received, the plurality of video information is classified and displayed according to the plurality of second-level tags.
It should be noted that within a single country, the geographic locations may be displayed directly by province and city; if the video information corresponds to a plurality of countries, the geographic locations may first be divided by country, and then by province (or state) and city within each country.
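The threshold-based grouping in items (3) and (4) above can be sketched as a simple bucketing step. This is an illustrative sketch using the size thresholds from item (3); the function names and tag strings are assumptions.

```python
def size_bucket(size_mb: float) -> str:
    """Map a file size to one of the size second-level tags from item (3)."""
    if size_mb <= 10:
        return "<=10 MB"
    if size_mb <= 100:
        return ">10 MB and <=100 MB"
    return ">100 MB"

def classify_by_size(files):
    """Group (name, size_mb) entries under their size second-level tag."""
    buckets = {"<=10 MB": [], ">10 MB and <=100 MB": [], ">100 MB": []}
    for name, size_mb in files:
        buckets[size_bucket(size_mb)].append(name)
    return buckets
```

The duration thresholds of item (4) (1 minute, 10 minutes, 60 minutes) would use the same pattern with different cut points.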
Therefore, the information display method provided by the embodiments of the invention can display massive target multimedia files according to the display modes matched with the second-level tags corresponding to capture time, file name, file size, file duration, weather, location where the shooting file was acquired, or mood index. This greatly enriches the interaction between the user and the interface, and allows the user to conveniently and quickly find a desired video according to the tags of the corresponding levels.
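The country-province-city nesting described in item (7) can be sketched as follows. The function name and the (country, province, city) tuple form of the location metadata are illustrative assumptions.

```python
def group_by_location(files):
    """Nest files by country -> province (or state) -> city.

    Each entry is (name, (country, province, city)). Within one country the
    country layer can simply be skipped when displaying, as the text notes.
    """
    tree = {}
    for name, (country, province, city) in files:
        tree.setdefault(country, {}) \
            .setdefault(province, {}) \
            .setdefault(city, []) \
            .append(name)
    return tree
```

When every file shares one country, the single top-level key lets the interface display provinces and cities directly; with several countries, the top level provides the per-country division first.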
In addition, another example of the information display method provided by the embodiments of the present invention is described below, taking the case where the user acquires a target multimedia file in the album application and the type of the first-level tag is person.
Fig. 13 is a flowchart of another information display method based on an album application according to an embodiment of the present invention.
As shown in fig. 13, the method may include steps 1310-1370, as follows:
the principle of steps 1310-1320 is the same as that of steps 310 and 320, and thus, the specific contents of steps 1310 and 1320 can refer to the contents of steps 310 and 320, which are not described herein again.
Step 1330, receive a first input from the user to a first level tag of the N tags.
Wherein, a plurality of first-level labels can be directly displayed in the display interface shown in fig. 5; alternatively, a user operation on a grouping control in a display interface as shown in fig. 5 is received, and a plurality of first-level labels are displayed in the display interface.
Wherein the type of first level tag includes a person.
Step 1340, in response to the first input, acquiring a second-level label corresponding to the first-level label, and displaying at least one face image.
Wherein the at least one face image comes from the face images in the video information (or images). Here, as shown in fig. 14, the person attributes of the second-level tags corresponding to person 141 may include: happy and sad.
Step 1350, receiving a third input from the user to the target face image of the at least one face image.
As shown in fig. 14, a third input that the user selects a target face image 142 among the at least one face image is received.
Step 1360, in response to the third input, extracts at least one video clip corresponding to the target face image in the album application.
Here, each extracted video segment may be a complete video, or a segment of a video that contains the target face image.
Step 1370, displaying at least one second-level label corresponding to the target face image, and displaying the target multimedia file according to the display mode matched with the second-level label; wherein each second level tag corresponds to a person attribute.
Here, the expression of the target face image in the at least one video segment is recognized to obtain a first target video segment corresponding to happy and a second target video segment corresponding to sad; referring to fig. 15, the first target video segment and the second target video segment are displayed in a classified manner.
Further, the first target video segment and the second target video segment may be displayed in a classified manner in order of duration.
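The expression-based splitting and duration ordering of steps 1360-1370 can be sketched as follows. This is a minimal sketch: the `expression` label is an assumed output of a face-expression recognizer, and the (name, expression, duration_s) tuple shape is illustrative.

```python
def split_by_expression(segments):
    """Split (name, expression, duration_s) video segments into a happy group
    and a sad group, each sorted by duration from short to long."""
    happy = sorted((s for s in segments if s[1] == "happy"), key=lambda s: s[2])
    sad = sorted((s for s in segments if s[1] == "sad"), key=lambda s: s[2])
    return happy, sad
```

The two returned groups correspond to the first and second target video segments displayed under the happy and sad second-level tags in fig. 15.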
Thus, in the information display method provided by the embodiments of the invention, the user can select a person in the video information; the video information is retrieved and segmented according to the first-level tag and the display modes of its corresponding second-level tags, with the person's happy and sad expressions used as second-level tags; and the video information is classified and displayed accordingly, so that the user can quickly view the related video information or sub-video information.
In summary, in the embodiments of the present invention, in response to a first operation in which a user selects a first-level tag (for example, the person tag) in a target application (for example, an album), a plurality of second-level tags corresponding to the first-level tag (for example, whether the expression of the face image corresponding to the person tag is happy or sad) are acquired, and the target multimedia files in the target application are displayed based on the display mode of each second-level tag. Therefore, when a user wants to view a required target file (such as an image or a video), the target files corresponding to the second-level tags can be displayed simply by selecting the first-level tag, without searching through a huge number of photos or videos one by one. The method provided by the embodiments of the invention can thus quickly filter and display the target files required by the user, improving the efficiency of acquiring target files while ensuring filtering accuracy.
Fig. 16 is a schematic structural diagram of an information display device according to an embodiment of the present invention.
As shown in fig. 16, the information display device 160 may include:
the transceiver module 1601 is configured to receive a first input of a user to a first level tag of the N tags;
a display module 1602, configured to respond to the first input and display the target multimedia file in a display manner matched with the second-level tag corresponding to the first-level tag;
wherein N is a positive integer, and each of the first-level label and the second-level label indicates a display mode respectively.
Thus, in response to a first operation in which a user selects a first-level tag (for example, the person tag) in a target application (for example, an album), a plurality of second-level tags corresponding to the first-level tag (for example, whether the expression of the face image corresponding to the person tag is happy or sad) are acquired, and the target multimedia files in the target application are displayed based on the display mode of each second-level tag. Therefore, when a user wants to view a required target file (such as an image or a video), the target files corresponding to the second-level tags can be displayed simply by selecting the first-level tag, without searching through a huge number of photos or videos one by one. The apparatus provided by the embodiments of the invention can thus quickly filter and display the target files required by the user, improving the efficiency of acquiring target files while ensuring filtering accuracy, and is convenient and time-saving to operate.
The type of the first-level tag involved in the embodiments of the present invention includes at least one of the following: shooting attributes, weather, the location where the shooting file was acquired, mood index, and people;
the shooting attributes include at least one of: and acquiring the time for shooting the file, the name of the file, the size of the file and the duration of the file.
Based on this, in one possible embodiment, the display module 1602 is further configured to, in response to the first input, display a second level label;
the transceiver module 1601 is further configured to receive a second input of a target second-level tag in the second-level tags from the user;
the display module 1602 is specifically configured to, in response to the second input, display the target multimedia file according to a display mode matched with the target second-level tag.
Further, when the first-level tag is a person tag, the transceiver module 1601 is further configured to receive a third input of the user to a target face image in the at least one face image;
the display module 1602 is further configured to display at least one second-level label corresponding to the target face image; wherein each second level tag corresponds to a person attribute.
Therefore, the user can select a target face image in a video file; the files in the application are retrieved and segmented according to the target face image, with the happy and sad expressions among the person attributes used as second-level tags, so that the user can quickly filter and view the related target multimedia files.
It should be noted that, to facilitate viewing by the user, the display module 1602 provided in the embodiments of the present invention is specifically configured to display the target multimedia files in a classified manner in order of their durations.
Fig. 17 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
The electronic device 1700 includes, but is not limited to: radio frequency unit 1701, network module 1702, audio output unit 1703, input unit 1704, sensor 1705, display unit 1706, user input unit 1707, interface unit 1708, memory 1709, processor 1710, and power supply 1711. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 17 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The user input unit 1707 is configured to receive a first input of a user to a first level tag of the N tags;
the processor 1710 is configured to, in response to the first input, control the display unit 1706 to display the target multimedia file in a display manner matched with the second-level tag corresponding to the first-level tag.
Wherein N is a positive integer, and each of the first-level label and the second-level label indicates a display mode respectively.
Thus, in response to a first operation in which a user selects a first-level tag (for example, the person tag) in a target application (for example, an album), a plurality of second-level tags corresponding to the first-level tag (for example, whether the expression of the face image corresponding to the person tag is happy or sad) are acquired, and the target multimedia files in the target application are displayed based on the display mode of each second-level tag. Therefore, when a user wants to view a required target file (such as an image or a video), the target files corresponding to the second-level tags can be displayed simply by selecting the first-level tag, without searching through a huge number of photos or videos one by one. The target files required by the user can thus be rapidly filtered and displayed, improving the efficiency of acquiring target files while ensuring filtering accuracy, and the operation is convenient and time-saving.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 1701 may be configured to receive and transmit signals during message transmission or a call; specifically, it receives downlink data from a base station and forwards it to the processor 1710 for processing, and transmits uplink data to the base station. In general, the radio frequency unit 1701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. The radio frequency unit 1701 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 1702, such as to assist the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 1703 may convert an audio resource received by the radio frequency unit 1701 or the network module 1702 or stored in the memory 1709 into an audio signal and output as sound. Also, the audio output unit 1703 may provide audio output related to a specific function performed by the electronic apparatus 1700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 1703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1704 is used to receive audio or video signals. The input unit 1704 may include a graphics processing unit (GPU) 17041 and a microphone 17042. The graphics processor 17041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 1706. The image frames processed by the graphics processor 17041 may be stored in the memory 1709 (or other storage medium) or transmitted via the radio frequency unit 1701 or the network module 1702. The microphone 17042 may receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 1701.
The electronic device 1700 also includes at least one sensor 1705, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 17061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 17061 and the backlight when the electronic device 1700 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 1705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 1706 is used to display information input by the user or information provided to the user. The Display unit 1706 may include a Display panel 17061, and the Display panel 17061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 1707 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 1707 includes a touch panel 17071 and other input devices 17072. The touch panel 17071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by the user on or near the touch panel 17071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 17071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, sends the coordinates to the processor 1710, and receives and executes commands sent by the processor 1710. The touch panel 17071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 17071, the user input unit 1707 may include other input devices 17072. In particular, the other input devices 17072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 17071 may be overlaid on the display panel 17061. When the touch panel 17071 detects a touch operation on or near it, the touch panel transmits the touch operation to the processor 1710 to determine the type of the touch event, and the processor 1710 then provides a corresponding visual output on the display panel 17061 according to the type of the touch event. Although the touch panel 17071 and the display panel 17061 are shown in fig. 17 as two separate components to implement the input and output functions of the electronic device, in some embodiments the touch panel 17071 may be integrated with the display panel 17061 to implement the input and output functions of the electronic device; this is not limited herein.
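The touch pipeline described above (touch detection device, then touch controller, then processor) can be sketched as follows. This is an illustrative sketch only, not part of the claimed embodiment; the class names, the sensing-grid representation, and the grid-to-coordinate conversion are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class RawTouch:
    # Raw signal from the touch detection device: the row/column of the
    # sensing grid that registered contact (values are illustrative).
    row: int
    col: int
    lifted: bool  # True when the finger leaves the panel

class TouchController:
    """Converts raw detection signals into touch-point coordinates."""
    def __init__(self, cell_size: float = 2.0):
        self.cell_size = cell_size  # millimetres per sensing cell

    def to_coordinates(self, raw: RawTouch) -> tuple[float, float]:
        return (raw.col * self.cell_size, raw.row * self.cell_size)

class Processor:
    """Receives touch-point coordinates, determines the event type, and
    produces the visual output to render on the display panel."""
    def handle(self, raw: RawTouch, controller: TouchController) -> str:
        x, y = controller.to_coordinates(raw)
        event_type = "touch_up" if raw.lifted else "touch_down"
        return f"{event_type} at ({x:.1f}, {y:.1f})"
```

Usage: `Processor().handle(RawTouch(row=3, col=5, lifted=False), TouchController())` yields a "touch_down" event at the converted coordinates, mirroring how the controller hands coordinates to the processor, which then decides what to draw.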
The interface unit 1708 is an interface for connecting an external device to the electronic device 1700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1708 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic device 1700, or may be used to transmit data between the electronic device 1700 and the external device.
The memory 1709 may be used to store software programs as well as various kinds of data. The memory 1709 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data, a phonebook, etc.), and the like. Further, the memory 1709 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1710 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 1709 and calling the data stored in the memory 1709, thereby monitoring the electronic device as a whole. The processor 1710 may include one or more processing units; preferably, the processor 1710 may integrate an application processor, which mainly handles the operating system, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It is to be appreciated that the modem processor may alternatively not be integrated into the processor 1710.
The electronic device 1700 may further include a power source 1711 (e.g., a battery) for supplying power to the various components. Preferably, the power source 1711 may be logically coupled to the processor 1710 via a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the electronic device 1700 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 1710, a memory 1709, and a computer program stored in the memory 1709 and capable of running on the processor 1710, where the computer program, when executed by the processor 1710, implements each process of the above-described information display method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
Embodiments of the present invention also provide a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a computer, causes the computer to perform the steps of the information display method according to the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.