CN120540770A - Information display method and device and electronic equipment
- Publication number
- CN120540770A (application CN202510726520.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- target
- image
- desktop image
- desktop
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The disclosure provides an information display method comprising: obtaining a target instruction, where the target instruction instructs an electronic device to display target information in a first running state; obtaining first information based on the target information, where the first information identifies the content of the target information; and generating a target image based at least on the first information, where the target image serves as a target desktop image of the electronic device so that the electronic device can display the target information in a second running state. The disclosure also provides an information display device and an electronic device.
Description
Technical Field
The disclosure relates to the technical field of image processing, and in particular relates to an information display method, an information display device and electronic equipment.
Background
In the present digital age, electronic devices have become core tools in people's lives and work. However, when a user performs an interactive operation on an electronic device, information irrelevant to that operation, such as promotional information, may suddenly pop up, which degrades the user experience.
Disclosure of Invention
One aspect of the disclosure provides an information display method comprising: obtaining a target instruction, where the target instruction instructs an electronic device to display target information in a first running state; obtaining first information based on the target information, where the first information identifies the content of the target information; and generating a target image based at least on the first information, where the target image serves as a target desktop image of the electronic device so that the electronic device can display the target information in a second running state.
Optionally, generating the target image based at least on the first information includes one of: in response to the matching degree of the first information and a first desktop image satisfying a condition, generating the target image based on the first information and the first desktop image using a first model, the first desktop image being the desktop image currently displayed by the electronic device; in response to the matching degree of the first information and the first desktop image not satisfying the condition, generating the target image based on the first information and a second desktop image using the first model, the second desktop image being an image that has been displayed as a desktop image on the electronic device; or, in response to the matching degree of the first information and the first desktop image not satisfying the condition, generating the target image based on the first information, the first desktop image, and the second desktop image using the first model.
Optionally, generating the target image based at least on the first information comprises: in response to the first information being of a first type and/or the matching degree of the first information and the first desktop image satisfying the condition, performing local image generation based on the first information and the first desktop image using the first model to obtain a locally generated target image; and, in response to the first information being of a second type and/or the matching degree of the first information and the first desktop image satisfying the condition, performing overall image generation based on the first information and the first desktop image using the first model to obtain an overall generated target image.
Optionally, the information display method further comprises determining that the matching degree of the first information and the first desktop image satisfies the condition if the scene described by the first information matches the scene described by the first desktop image.
Optionally, generating the target image using the first model based on the first information, the first desktop image, and the second desktop image comprises: determining a first target area from a plurality of areas of the second desktop image based on the matching degree between the scene depicted in each of those areas and the scene described by the first information; fusing the first information into the first target area to obtain a fused image; determining a second target area from a plurality of areas of the first desktop image based on the matching degree between the scene depicted in each of those areas and the scene described by the second desktop image; and fusing the fused image into the second target area to obtain the target image.
Optionally, the target information comprises an image and/or text, and obtaining the first information based on the target information includes one of: extracting an image or text from the target information as the first information; or processing the target information using a second model to generate the first information, the first information being summary information of the target information or an image generated from the summary information.
Optionally, the information display method further comprises displaying the first desktop image if no operation on the target information in the target desktop image is detected within a predetermined period of time.
Optionally, the information display method further comprises presenting a replacement button when the target information in the target desktop image is detected to be operated, or when an input control of the electronic device hovers over the position of the target information, where the replacement button is used to replace the target desktop image with the first desktop image.
Another aspect of the present disclosure provides an information display apparatus comprising: an instruction obtaining module configured to obtain a target instruction, the target instruction instructing an electronic device to display target information in a first running state; an information obtaining module configured to obtain first information based on the target information, the first information identifying the content of the target information; and an image generating module configured to generate a target image based at least on the first information, the target image serving as a target desktop image of the electronic device so that the electronic device can display the target information in a second running state.
Another aspect of the disclosure provides an electronic device comprising: a memory for storing computer instructions; at least one processor for loading the computer instructions to obtain a target instruction instructing the electronic device to display target information in a first running state, obtain first information based on the target information, the first information identifying the content of the target information, and generate a target image based at least on the first information; and a display screen for displaying a target desktop image.
Another aspect of the present disclosure provides a non-volatile storage medium storing computer-executable instructions that, when executed, implement any of the methods described above.
Another aspect of the present disclosure provides a computer program comprising computer-executable instructions that, when executed, implement any of the methods described above.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Fig. 1 schematically illustrates an application scenario of an information display method, apparatus and electronic device according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an information display method according to an embodiment of the present disclosure;
FIG. 3A schematically illustrates a target image according to an embodiment of the present disclosure;
FIG. 3B schematically illustrates a target image according to another embodiment of the present disclosure;
FIG. 3C schematically illustrates a target image according to another embodiment of the present disclosure;
FIG. 3D schematically illustrates a target image according to another embodiment of the present disclosure;
FIG. 4A schematically illustrates a first target area according to an embodiment of the present disclosure;
FIG. 4B schematically illustrates a second target area according to an embodiment of the present disclosure;
FIG. 5 schematically shows a block diagram of an information display apparatus according to an embodiment of the present disclosure, and
Fig. 6 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Thus, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). Additionally, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon, the computer program product being usable by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of a computer-readable medium include magnetic storage devices such as magnetic tape or hard disk (HDD), optical storage devices such as compact disk (CD-ROM), memory such as Random Access Memory (RAM) or flash memory, and/or wired/wireless communication links.
In the process of implementing the embodiments of the disclosure, it was found that when a user performs an interactive operation on an electronic device, pop-up windows carrying promotional information, such as advertisements or promoted products or services, can degrade the user experience. For example, when a user is browsing web pages, editing documents, or watching videos, advertisement pop-ups promoting products or services appear on the screen, interrupting the user's train of thought and even forcing the user to pause the current task to deal with them, so the current task is blocked. In addition, the content of some pop-up windows is cluttered and may contain irrelevant advertisements or misleading information, so the user has to spend extra time and effort screening and closing them, which increases the operational burden.
In one example, even if the pop-up window is replaced by a full-page pop-up, the user is still interrupted: because the user is in the middle of an interactive operation, a page that suddenly occupies the whole screen still affects the interaction and the user experience.
In another example, although the cloud may be used to regenerate the pop-up promotional information in combination with the user's preferences, thereby reducing the impact on the user experience, cloud-side regeneration consumes computing resources, and incorporating user preferences raises user privacy concerns.
In view of the above, an embodiment of the present disclosure provides an information display method comprising: obtaining a target instruction, where the target instruction instructs an electronic device to display target information in a first running state; obtaining first information based on the target information, where the first information identifies the content of the target information; and generating a target image based at least on the first information, where the target image serves as the target desktop image of the electronic device so that the electronic device can display the target information in a second running state.
Fig. 1 schematically illustrates an application scenario of an information display method, an apparatus and an electronic device according to an embodiment of the present disclosure.
Specifically, as shown in fig. 1, the application scenario 100 according to this embodiment may include a first terminal device 101, a second terminal device 102, a third terminal device 103, a network 104, and a server 105. The network 104 is a medium used to provide a communication link between the first terminal device 101, the second terminal device 102, the third terminal device 103, and the server 105. The network 104 may include various connection types, such as wired and/or wireless communication links, and the like.
The user may interact with the server 105 via the network 104 using the first terminal device 101, the second terminal device 102, the third terminal device 103, to receive or send messages etc. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, and/or social platform software, etc. (by way of example only) may be installed on the first terminal device 101, the second terminal device 102, the third terminal device 103.
The first terminal device 101, the second terminal device 102, the third terminal device 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server providing various services, such as a background management server (by way of example only) providing support for websites browsed by the user using the first terminal device 101, the second terminal device 102, and the third terminal device 103. The background management server can perform processing such as identification and the like on the received user input and feed back a processing result to the terminal equipment.
It should be noted that the information display method provided by the embodiments of the present disclosure may generally be performed by the server 105. Accordingly, the information display apparatus provided by the embodiments of the present disclosure may generally be provided in the server 105. The information display method may also be performed by a server or server cluster that is different from the server 105 and capable of communicating with the first terminal device 101, the second terminal device 102, the third terminal device 103, and/or the server 105; accordingly, the information display apparatus may also be provided in such a server or server cluster. Alternatively, the information display method may be performed by the first terminal device 101, the second terminal device 102, or the third terminal device 103, or by another terminal device different from these; accordingly, the information display apparatus may also be provided in the first terminal device 101, the second terminal device 102, or the third terminal device 103, or in another terminal device different from these.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 2 schematically illustrates a flowchart of an information display method according to an embodiment of the present disclosure.
Specifically, as shown in fig. 2, the method includes operations S210 to S230.
In operation S210, a target instruction is obtained.
In operation S220, first information is obtained based on target information.
In operation S230, a target image is generated based on at least the first information, and the target image is used as a target desktop image of the electronic device, so that the electronic device can display the target information in the second operation state.
In this embodiment, the target instruction may instruct the electronic device to display the target information in the first running state.
The electronic device may include an interactive device having a display screen, for example, including but not limited to a tablet, a notebook computer, a desktop computer, a smart television, and the like. Tool software may be installed on the interactive device, including, for example, but not limited to, a browser, an application program, and the like.
The first running state may indicate the interface state presented to the user through the display screen while the user performs an interactive operation with the electronic device on which the tool software is installed, for example, including but not limited to the interface state presented when a web address is entered in a browser, or the interface state of a function module in an application program.
The target information may be information within a pop-up window that would otherwise need to be presented and that is unrelated to the user's interaction, such as promotional information. Promotional information can include, but is not limited to, advertising information, product information, news information, and the like.
The first information can identify the content of the target information, and may include, but is not limited to, part or all of the advertising information, product information, news information, and the like.
The target image may include, but is not limited to, an image for presenting the first information.
The second running state may indicate the state when the user ends the interactive operation, or returns to the desktop while the interaction continues in the background, for example, including but not limited to the user closing the browser and returning to the desktop, or minimizing the browser and returning to the desktop.
The target desktop image may be a desktop image of the electronic device after the target image is generated.
For example, a party that wants to display target information, such as an information promoter, may issue a target instruction. Ordinarily, the electronic device would execute the target instruction in the first running state and then display the target information. In this embodiment, however, the server obtains the target instruction, generates a target image based on the target information, and uses the target image as the target desktop image of the electronic device, so the electronic device displays the target information in the second running state and does not display it in the first running state.
For example, suppose an information promoter would normally pop up promotional information on a web page, or open another web page to display it, while a user is searching for information in a browser. With this embodiment, the target instruction is not executed at that moment. Instead, the server obtains the target instruction, generates a target image identifying the promotional information indicated by the instruction, and sets the target image as the target desktop image of the computer; the promotional information is then displayed on the desktop when the user finishes searching or minimizes the web page and returns to the desktop.
If the electronic device displayed the target information in the first running state, it would interfere with the user's interactive operation. With the method provided by the disclosure, the electronic device does not display the target information in the first running state, which improves the user experience in that state. In the second running state, the target information is fused into the target desktop image and displayed, so it can be shown to the user without affecting the user's interaction. This at least partially solves the problem that promotional information popping up suddenly during interactive use of an electronic device harms the user experience.
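To make the flow of operations S210 to S230 concrete, a minimal Python sketch follows. All identifiers here (TargetInstruction, Device, extract_first_information, generate_target_image) are hypothetical stand-ins rather than names from the disclosure, and the first and second models are replaced by trivial placeholders:

```python
from dataclasses import dataclass

@dataclass
class TargetInstruction:
    target_information: str  # e.g. promotional text or a promotional image path

@dataclass
class Device:
    desktop_image: str  # the first desktop image (currently displayed)

def extract_first_information(target_information: str) -> str:
    # Stand-in for operation S220: direct extraction or a second-model summary.
    return target_information[:64]

def generate_target_image(first_information: str, first_desktop_image: str) -> str:
    # Stand-in for operation S230: the first model fuses the first information
    # into a desktop image and returns the target image.
    return f"fused[{first_information}]into[{first_desktop_image}]"

def handle_target_instruction(instruction: TargetInstruction, device: Device) -> None:
    # S210: obtain the instruction that would normally pop up the target information.
    first_information = extract_first_information(instruction.target_information)  # S220
    # S230: install the generated target image as the target desktop image, so the
    # target information appears only in the second running state (on the desktop).
    device.desktop_image = generate_target_image(first_information, device.desktop_image)

device = Device(desktop_image="first_desktop.png")
handle_target_instruction(TargetInstruction("New phone, 20% off"), device)
print(device.desktop_image)
```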
The method shown in FIG. 2 is further described below with reference to FIGS. 3A to 4B in conjunction with embodiments.
In this embodiment, the target information may include images and/or text.
By way of example, the target information may include images and/or text that are not related to user interaction, and may include, for example, but not limited to, promotional product images, promotional product introduction text, informational text, and the like.
In an example, for operation S220 as shown in FIG. 2 above, obtaining the first information based on the target information may include an operation of extracting an image or text from the target information as the first information.
The first information may be summary information of the target information, or an image generated from the summary information. The summary information may characterize the target object in the target information. For example, for a promotional product image, the summary information may characterize the promoted product, such as a minimum unit image of the product or its introduction text. For informational text, the summary information may characterize the subject of the text, such as a subject image or subject text.
For example, taking the target information as a promotional phone image that also contains introduction text, the text describing the phone can be extracted from the image by a text recognition tool and used as the first information. Taking the target information as informational text, the subject text can be extracted from it as the first information by recognizing keywords with natural language processing techniques. It should be noted that the embodiments of the present disclosure do not specifically limit the extraction method; the above examples are merely illustrative.
In another example, for operation S220 as shown in FIG. 2 above, obtaining the first information based on the target information may include processing the target information using a second model to generate the first information.
The second model may include a deep learning model for identifying textual information from the target information to generate the first information. For example, when the target information is an image, characters in the image can be recognized with the deep learning model to obtain the first information. When the target information is text, the first information, such as summary information or an image generated from the summary information, can be generated by using a deep learning model to learn the context and semantic relationships of the words in the text.
When the target information is an image, the second model may also be an image segmentation model that extracts a region of interest from the image, such as a minimum unit image of the target object, and uses that region as the first information.
Because the first information is extracted from the image and/or text, or generated by the second model, the key information in the image and/or text is captured. Therefore, when the target image is generated based on the first information and used as the target desktop image of the electronic device, the key information can be presented to the user accurately while improving the efficiency of generating the target image.
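As a sketch of the two ways of obtaining the first information in operation S220, the snippet below dispatches between direct extraction and second-model processing. The recognizer and summarizer are stand-in callables (assumptions), since the disclosure does not name a specific text recognition tool or model:

```python
def recognize_text(image: bytes) -> str:
    # Stand-in for a text recognition tool applied to a promotional image.
    return "flagship phone, 6.7-inch display"

def summarize(text: str, max_words: int = 8) -> str:
    # Stand-in for the second model producing summary information.
    return " ".join(text.split()[:max_words])

def obtain_first_information(target_information, is_image: bool) -> str:
    if is_image:
        # Extract the descriptive text embedded in the image as the first information.
        return recognize_text(target_information)
    # For text, keep only summary information identifying the content.
    return summarize(target_information)

print(obtain_first_information(b"<image bytes>", is_image=True))
print(obtain_first_information("Long informational article about a new phone launch ...", is_image=False))
```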
In one embodiment, in addition to operations S210 to S230 described above with reference to FIG. 2, the method may include determining that the matching degree of the first information and the first desktop image satisfies the condition if the scene described by the first information matches the scene described by the first desktop image. For brevity, the description of operations S210 to S230 is omitted here.
For example, the first desktop image may be the desktop image displayed by the electronic device before the target image is generated. If the scene described by the first information is identical to the scene described by the first desktop image, the two scenes may be determined to match. If the two scenes are not identical but their combination conforms to natural logic, they may also be determined to match. If the two scenes are not identical and their combination does not conform to natural logic, they may be determined not to match; that is, the matching degree of the first information and the first desktop image does not satisfy the condition.
For example, suppose the scene described by the first information is showing a mobile phone, and the scene described by the first desktop image is a team captain lifting a trophy after winning a championship. Although the two scenes are not identical, their combination, such as the captain lifting the trophy while holding a phone, conforms to natural logic, since a phone could plausibly appear in the captain's hand as a prize; hence the two scenes match. In contrast, if the scene described by the first desktop image is a natural landscape of blue sky and white clouds, the combined scene, a phone floating among blue sky and white clouds, does not conform to natural logic, so the two scenes do not match.
The embodiments of the present disclosure do not specifically limit how the scene described by the first information and the scene described by the first desktop image are determined; for example, they may be determined from the first information and from summary information of the first desktop image.
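A minimal sketch of this matching condition: scenes match if they are identical, or if a plausibility scorer judges their combination to conform to natural logic. The scorer and its 0.5 threshold are assumptions for illustration only:

```python
def scenes_match(scene_a: str, scene_b: str, plausibility) -> bool:
    if scene_a == scene_b:  # completely identical description scenes
        return True
    # Not identical: accept only if the combined scene conforms to natural logic.
    return plausibility(scene_a, scene_b) >= 0.5  # threshold is an assumption

def toy_plausibility(a: str, b: str) -> float:
    # Toy rule: a phone in a person's hand is plausible; a phone in the sky is not.
    return 1.0 if "phone" in a and "people" in b else 0.0

print(scenes_match("showing a phone", "people exercising outdoors", toy_plausibility))  # True
print(scenes_match("showing a phone", "blue sky and white clouds", toy_plausibility))   # False
```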
When the electronic device changes from the first running state to the second running state, if the matching degree of the first information and the first desktop image satisfies the condition, the target information will not look abrupt when the target image generated from the first information is used as the target desktop image, so the user does not experience an obvious visual clash from out-of-place target information. Conversely, if the matching degree does not satisfy the condition, the target information will look abrupt in the target desktop image, producing an obvious visual clash and easily provoking the user's dislike.
Based on this, in an example, for operation S230 as shown in FIG. 2 above, generating a target image based on at least the first information may include an operation of generating a target image using a first model based on the first information and the first desktop image in response to the degree of matching of the first information with the first desktop image satisfying a condition.
The first desktop image may be the desktop image currently displayed by the electronic device. The first model and the second model may be different modules of the same model. The first model may be used to fuse the first information with the first desktop image to generate the target image.
For example, when the first information is text, it may be rendered in the first desktop image; when the first information is an image, it may be fused into a target area of the first desktop image.
Fig. 3A schematically illustrates a schematic diagram of a target image according to an embodiment of the present disclosure.
For example, as shown in FIG. 3A, take the first information as an image 301 and the first desktop image as image 302. The scene described by the image 301 is a mobile phone, and the scene described by the first desktop image 302 is people exercising outdoors. Although the scenes are not identical, their combination, such as a person holding up a phone outdoors, conforms to natural logic, so the scene of the image 301 matches the scene of the first desktop image 302; that is, the matching degree of the image 301 and the first desktop image 302 satisfies the condition.
The image 301 and the first desktop image 302 may be input into the first model, outputting a target image 303 that fuses the image 301 and the first desktop image 302.
Because the matching degree of the first information and the first desktop image satisfies the condition, the target image fuses the first information into the first desktop image, and the user will not experience an obvious visual clash when it is used as the target desktop image. The target information can thus be shown to the user while reducing the user's resistance to the modified desktop image.
In another example, for operation S230 as shown in FIG. 2 above, generating the target image based at least on the first information may include an operation of generating the target image using the first model based on the first information and the second desktop image in response to the degree of matching of the first information with the first desktop image not meeting the condition.
The second desktop image may be an image that has been displayed as a desktop image on the electronic device. For example, the second desktop image may be a previously used desktop image whose matching degree with the first information satisfies the condition, which avoids the target information looking abrupt in the generated target image. The first model may be used to fuse the first information with the second desktop image to generate the target image.
Fig. 3B schematically illustrates a schematic view of a target image according to another embodiment of the present disclosure.
For example, as shown in FIG. 3B, take the first information as the image 301, a first desktop image 304 different from the first desktop image 302 of FIG. 3A, and a second desktop image 305. The scene described by the image 301 is a mobile phone, the scene described by the first desktop image 304 is an outline drawing of houses, and the matching degree of the image 301 and the first desktop image 304 does not satisfy the condition.
The image 301 and the second desktop image 305 may be input into the first model, outputting a target image 306 that fuses the image 301 and the second desktop image 305.
Although the matching degree of the first information and the first desktop image does not satisfy the condition, fusing the first information into the second desktop image, an image that has previously served as a desktop image, can reflect the user's preference and avoid the target information looking abrupt in the generated target image while still showing it to the user. However, the generated target image differs considerably from the first desktop image, which may also provoke the user's dislike.
Based on this, in another example, for operation S230 as shown in FIG. 2 above, generating a target image based at least on the first information may include an operation of generating a target image using a first model based on the first information, the first desktop image, and the second desktop image in response to the degree of matching of the first information with the first desktop image not meeting a condition.
The first model can fuse the first information, the first desktop image and the second desktop image to obtain a target image.
For example, the first information may be fused to the second desktop image to obtain an intermediate fused image, and then the intermediate fused image may be fused to the target area of the first desktop image.
Fig. 3C schematically illustrates a schematic view of a target image according to another embodiment of the present disclosure.
For example, as shown in FIG. 3C, take the first information as the image 301, the first desktop image 304, and the second desktop image 305. The scene described by the image 301 is a mobile phone, the scene described by the first desktop image 304 is an outline drawing of houses, and the matching degree of the image 301 and the first desktop image 304 does not satisfy the condition.
The image 301 and the second desktop image 305 may be input into the first model to output an intermediate image that fuses the image 301 into the second desktop image 305; the intermediate image and the first desktop image 304 may then be input into the first model to fuse the intermediate image into a target area of the first desktop image 304, yielding the target image 307.
Because the target image is generated based on the first information, the first desktop image, and the second desktop image rather than by directly replacing the first desktop image, the matching degree is taken into account and the visual clash caused by mismatched scenes is avoided.
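The three generation paths of operation S230 can be summarized as the dispatch below, a sketch in which first_model is an opaque fusion callable and use_both selects between the second and third paths; both names are assumptions, not identifiers from the disclosure:

```python
def generate_target(first_info, first_desktop, second_desktop,
                    first_model, match_ok: bool, use_both: bool = True):
    if match_ok:
        # Path 1 (FIG. 3A): fuse directly into the currently displayed desktop image.
        return first_model(first_info, first_desktop)
    if not use_both:
        # Path 2 (FIG. 3B): fall back to a previously used desktop image that matches.
        return first_model(first_info, second_desktop)
    # Path 3 (FIG. 3C): fuse into the second desktop image first, then fuse that
    # intermediate image into a target area of the first desktop image.
    intermediate = first_model(first_info, second_desktop)
    return first_model(intermediate, first_desktop)

toy_fuse = lambda a, b: f"({a}+{b})"
print(generate_target("image301", "desktop304", "desktop305", toy_fuse, match_ok=False))
# ((image301+desktop305)+desktop304)
```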
Fig. 3D schematically illustrates a schematic view of a target image according to another embodiment of the present disclosure.
In another example, for operation S230 shown in FIG. 2, generating the target image based at least on the first information may include performing local image generation using the first model based on the first information and the first desktop image, in response to the first information being of a first type and/or the matching degree of the first information and the first desktop image satisfying the condition, to obtain a locally generated target image.
The first type may include a text type. Local generation retains most of the information of the first desktop image or of the first information. When the first information is of the text type, the first model may extract text features from the first information and local image features from the first desktop image, and fuse the text features with the local image features of highest similarity, thereby producing a locally generated target image. When the matching degree of the first information and the first desktop image satisfies the condition, whether the first information is of the text type or the image type, the first model can locally modify the existing image to obtain a locally generated target image. The first model may be, for example, a generative adversarial network for local feature refinement, which the present disclosure does not specifically limit.
For example, as shown in FIG. 3D, taking the first information as the image 301 and the first desktop image 302 of FIG. 3A, the first desktop image 302 may be locally modified based on the image 301 to obtain a locally generated target image 308.
In another example, for operation S230 shown in FIG. 2, generating the target image based at least on the first information may include performing overall image generation using the first model based on the first information and the first desktop image, in response to the first information being of a second type and/or the matching degree of the first information and the first desktop image satisfying the condition, to obtain an overall generated target image.
The second type may include an image type. The overall generated image is a regenerated whole image; the first model can fuse all features of the first information and the first desktop image. The first model may be, for example, a generative adversarial network in a deep learning model, which the present disclosure does not specifically limit.
For example, if the first information is of the image type, all features of the first information and the first desktop image may be fused to obtain the overall generated target image.
Since local generation generally processes only part of the image, it requires fewer computing resources and less time than overall generation, so the target image can be obtained quickly even on a device with limited performance. Furthermore, local generation minimizes the impact on the user's view of the desktop.
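One reading of the local/overall dispatch is sketched below; the type rule and the local/overall methods on the model object are assumptions, since the disclosure's "and/or" conditions leave the exact rule open:

```python
TEXT_TYPE, IMAGE_TYPE = "text", "image"

class ToyModel:
    # Stand-ins for the first model's two generation modes.
    def local(self, info, desktop):
        return f"local({info},{desktop})"
    def overall(self, info, desktop):
        return f"overall({info},{desktop})"

def generate(first_info, info_type, first_desktop, match_ok, first_model):
    if info_type == TEXT_TYPE or match_ok:
        # Local generation: modify only part of the first desktop image;
        # cheaper and less visually disruptive.
        return first_model.local(first_info, first_desktop)
    # Overall generation: regenerate the whole image from all features.
    return first_model.overall(first_info, first_desktop)

print(generate("20% off", TEXT_TYPE, "desktop302", match_ok=False, first_model=ToyModel()))
print(generate("image301", IMAGE_TYPE, "desktop302", match_ok=False, first_model=ToyModel()))
```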
Fig. 4A schematically illustrates a schematic view of a first target area according to an embodiment of the present disclosure, and fig. 4B schematically illustrates a schematic view of a second target area according to an embodiment of the present disclosure.
In a specific example, generating the target image using the first model based on the first information, the first desktop image, and the second desktop image may include: determining a first target area from a plurality of areas of the second desktop image based on the matching degree between the scene depicted in each of those areas and the scene described by the first information; fusing the first information into the first target area to obtain a fused image; determining a second target area from a plurality of areas of the first desktop image based on the matching degree between the scene depicted in each of those areas and the scene described by the second desktop image; and fusing the fused image into the second target area to obtain the target image.
The area whose depicted scene has the greatest matching degree with the scene described by the first information may be taken as the first target area. The matching degree may be determined by an evaluation model trained in advance in the cloud. The training samples of the evaluation model may be pairs of scene-description tags: for example, each area of a naturally photographed image is tagged with the scene it depicts, any two such tags form a sample pair, and the matching degree of that pair is labeled 100%; the evaluation model is trained on such pairs.
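A sketch of constructing the positive training pairs described above: any two scene tags from regions of the same naturally photographed image form a pair labeled with 100% matching degree. The tag strings are illustrative:

```python
def positive_pairs(region_tags):
    # Every unordered pair of tags from one natural image is a positive sample.
    return [((a, b), 1.0) for i, a in enumerate(region_tags)
            for b in region_tags[i + 1:]]

print(positive_pairs(["lawn", "runner", "sky"]))
# [(('lawn', 'runner'), 1.0), (('lawn', 'sky'), 1.0), (('runner', 'sky'), 1.0)]
```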
For example, as shown in FIG. 4A, areas A to D belong to the second desktop image 305. The matching degree between the scene depicted in each of areas A to D and the scene described by the first information (the image 301 of FIG. 3A) can be determined with the evaluation model, and area C is then determined as the first target area from areas A to D based on the matching degrees.
The image 301 may be fused into area C, resulting in a fused image (for example, the target image 306 shown in FIG. 3B).
As shown in FIG. 4B, areas E to H belong to the first desktop image 304. The matching degree between the scene depicted in each of areas E to H and the scene described by the second desktop image 305 can be determined with the evaluation model, and area F is then determined as the second target area from areas E to H based on the matching degrees. The fused image is fused into area F, resulting in the target image (the target image 307 shown in FIG. 3C).
Fusing the first information into an area whose scene matches achieves a more natural and realistic visual effect: each part of the image remains consistent in content and style, which enhances the integrity and coherence of the image, improves its visual appeal, and provides the user with a more immersive and pleasant visual experience.
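The two-stage area selection and fusion can be sketched as follows; the evaluation model is stood in by a scoring callable, and the area and fusion representations are illustrative assumptions:

```python
def best_area(areas, described_scene, score):
    # Pick the area whose pictured scene has the greatest matching degree
    # with the described scene (the evaluation model provides `score`).
    return max(areas, key=lambda area: score(area["scene"], described_scene))

def fuse(content, area_name, image_name):
    return f"{image_name}[{area_name}<-{content}]"

def two_stage_fusion(first_info_scene, second_desktop_areas, first_desktop_areas,
                     score, second_desktop="desktop305", first_desktop="desktop304",
                     second_desktop_scene="outdoor scene"):
    # Stage 1: first target area in the second desktop image, fuse the first information.
    a1 = best_area(second_desktop_areas, first_info_scene, score)
    fused = fuse("image301", a1["name"], second_desktop)
    # Stage 2: second target area in the first desktop image, fuse the fused image.
    a2 = best_area(first_desktop_areas, second_desktop_scene, score)
    return fuse(fused, a2["name"], first_desktop)

toy_score = lambda scene, target: 1.0 if scene == target else 0.0
areas_2nd = [{"name": n, "scene": s} for n, s in
             [("A", "sky"), ("B", "lawn"), ("C", "phone"), ("D", "tree")]]
areas_1st = [{"name": n, "scene": s} for n, s in
             [("E", "house"), ("F", "outdoor scene"), ("G", "fence"), ("H", "path")]]
print(two_stage_fusion("phone", areas_2nd, areas_1st, toy_score))
# desktop304[F<-desktop305[C<-image301]]
```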
In one embodiment, in addition to operations S210 to S230 described above with reference to FIG. 2, the method may include displaying the first desktop image if no operation on the target information in the target desktop image is detected within a predetermined period of time. For brevity, the description of operations S210 to S230 is omitted here.
For example, the predetermined period may be determined from historical data on how long popped-up target information took to be operated on. Being operated on may mean, for example, being clicked for viewing.
If the target information in the target desktop image is not operated on within the predetermined period, the user interacting with the electronic device is presumably not interested in it. In this case, replacing the target desktop image with the first desktop image avoids spending resources on an ineffective display of the target information and improves resource utilization.
In one embodiment, in addition to operations S210 to S230 described above with reference to FIG. 2, the method may include presenting a replacement button when the target information in the target desktop image is detected to be operated on, or when an input control of the electronic device hovers over the position of the target information. For brevity, the description of operations S210 to S230 is omitted here.
The replace button may be used to replace the target desktop image with the first desktop image. The input controls may be used by a user to click on view target information.
For example, when the electronic device is a computer, the input control may be a mouse. When the mouse pointer hovers over the position of the target information, or when the user clicks the target information with the mouse, the replacement button is presented, and the user can choose whether to click it to replace the desktop image.
Because the replacement button is presented when the target information is clicked or the input control hovers over its position, the user can conveniently replace the desktop image in time. In addition, for target information with cluttered content, this reduces the extra time and effort the user would otherwise spend and lightens the user's operational burden, thereby improving the user experience.
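The timeout fallback and the replacement button can be sketched together as simple desktop-event handling; all event names and fields below are illustrative assumptions:

```python
import time

class Desktop:
    def __init__(self, first_image: str, target_image: str, timeout_s: float):
        self.first_image = first_image   # first desktop image to fall back to
        self.image = target_image        # target desktop image currently shown
        self.shown_at = time.monotonic()
        self.timeout_s = timeout_s       # the predetermined period
        self.operated = False

    def on_event(self, event: str) -> None:
        if event in ("click_target_info", "hover_target_info"):
            self.operated = True
            # Present a replacement button that restores the first desktop image.
            print("replacement button shown")
        elif event == "replace_clicked":
            self.image = self.first_image
        elif event == "tick" and not self.operated \
                and time.monotonic() - self.shown_at > self.timeout_s:
            # No operation within the predetermined period: revert automatically.
            self.image = self.first_image

desk = Desktop("first_desktop.png", "target_desktop.png", timeout_s=0.0)
desk.on_event("tick")
print(desk.image)  # first_desktop.png
```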
Fig. 5 schematically shows a block diagram of an information display apparatus according to an embodiment of the present disclosure.
As shown in fig. 5, the information display apparatus 500 includes an instruction obtaining module 510, an information obtaining module 520, and an image generating module 530. The information display device 500 may perform the method described above with reference to fig. 2-4B.
Specifically, the instruction obtaining module 510 may perform, for example, operation S210 for obtaining the target instruction. The target instruction is used to instruct the electronic device to display target information in the first running state.
The information obtaining module 520 may perform, for example, operation S220 for obtaining the first information based on the target information. The first information can identify the content of the target information.
The image generation module 530 may perform, for example, operation S230 for generating a target image based on at least the first information, with the target image being a target desktop image of the electronic device, so that the electronic device can display the target information in the second operation state.
Alternatively, the image generation module 530 may include any one of a first sub-generation unit, a second sub-generation unit, and a third sub-generation unit. The first sub-generation unit is used for generating a target image by using a first model based on the first information and the first desktop image in response to the matching degree of the first information and the first desktop image meeting the condition. The first desktop image is a desktop image displayed by the electronic device. The second sub-generation unit is used for generating a target image by using the first model based on the first information and the second desktop image in response to the fact that the matching degree of the first information and the first desktop image does not meet the condition. The second desktop image includes an image displayed in the electronic device as a desktop image. The third sub-generation unit is used for generating a target image by using the first model based on the first information, the first desktop image and the second desktop image in response to the fact that the matching degree of the first information and the first desktop image does not meet the condition.
Alternatively, the image generation module 530 may include a local generation unit and an overall generation unit. The local generation unit is configured to perform local image generation using the first model based on the first information and the first desktop image, in response to the first information being of the first type and/or the matching degree of the first information and the first desktop image satisfying the condition, to obtain a locally generated target image. The overall generation unit is configured to perform overall image generation using the first model based on the first information and the first desktop image, in response to the first information being of the second type and/or the matching degree of the first information and the first desktop image satisfying the condition, to obtain an overall generated target image.
Optionally, the information display device 500 further comprises a determination module. The determination module is configured to determine that the matching degree of the first information and the first desktop image satisfies the condition if the scene described by the first information matches the scene described by the first desktop image.
Optionally, generating the target image using the first model based on the first information, the first desktop image, and the second desktop image includes: determining the first target area from the plurality of areas of the second desktop image based on the matching degree between the scene depicted in each area and the scene described by the first information; fusing the first information into the first target area to obtain a fused image; determining the second target area from the plurality of areas of the first desktop image based on the matching degree between the scene depicted in each area and the scene described by the second desktop image; and fusing the fused image into the second target area to obtain the target image.
Optionally, the target information includes an image and/or text. The information obtaining module 520 may include any one of an extracting unit and a processing unit. The extraction unit is used for extracting an image or text from the target information as first information. The processing unit is used for processing the target information by using the second model and generating first information. The first information is summary information of the target information or an image generated from the summary information.
Optionally, the information display device 500 further includes a first display module. The first display module is configured to display the first desktop image if no operation on the target information in the target desktop image is detected within a predetermined period of time.
Optionally, the information display device 500 further includes a second presentation module. The second presentation module is configured to present the replacement button when the target information in the target desktop image is detected to be operated on, or when the input control of the electronic device hovers over the position of the target information. The replacement button is used to replace the target desktop image with the first desktop image.
It is understood that the instruction obtaining module 510, the information obtaining module 520, and the image generating module 530 may be combined into one module, or any one of them may be split into multiple modules; alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the instruction obtaining module 510, the information obtaining module 520, and the image generating module 530 may be implemented at least in part as hardware circuitry, such as a field-programmable gate array (FPGA), a programmable logic array (PLA), a system on a chip, a system on a substrate, a system on a package, an application-specific integrated circuit (ASIC), or any other reasonable way of integrating or packaging circuitry, or in hardware or firmware, or in any suitable combination of software, hardware, and firmware. Alternatively, at least one of the instruction obtaining module 510, the information obtaining module 520, and the image generating module 530 may be at least partially implemented as a computer program module which, when executed by a computer, performs the functions of the corresponding module.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 schematically illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 6, the electronic device 600 may include a memory 610, at least one processor 620, and a display screen 630.
Memory 610 is used to store computer instructions.
At least one processor 620 is used to load computer instructions to implement the methods described above with reference to fig. 2-4B.
The display screen 630 is used to display the target desktop image.
In particular, processor 620 may include, for example, a general purpose microprocessor, an instruction set processor, and/or an associated chipset and/or special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 620 may also include on-board memory for caching purposes. The processor 620 may be a single processing unit or multiple processing units for loading computer instructions to implement different actions for performing the method flows according to the embodiments of the present disclosure described with reference to fig. 2-4B.
The present disclosure also provides a readable storage medium. A readable storage medium may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of readable storage media include magnetic storage devices such as magnetic tape or hard disk (HDD), optical storage devices such as compact disk (CD-ROM), memory such as Random Access Memory (RAM) or flash memory, and/or wired/wireless communication links.
The readable storage medium may include a computer program, which may include code/computer executable instructions that, when executed by the processor 620, cause the processor 620 to perform the method flow described above in connection with fig. 2-4B, and any variations thereof.
The computer program may include computer program code, for example organized as computer program modules. In an example embodiment, the code in the computer program may include one or more program modules. It should be noted that the division and number of modules are not fixed; those skilled in the art may use suitable program modules or combinations thereof according to the actual situation, and when these program modules are executed by the processor 620, the processor 620 can perform the method flow described above in connection with FIGS. 2 to 4B and any variations thereof.
At least one of the instruction obtaining module 510, the information obtaining module 520, and the image generating module 530 may be implemented as described computer program modules that, when executed by the processor 620, may implement the respective operations described above, according to embodiments of the present disclosure.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure may be combined in various ways, even if such combinations are not explicitly recited in the disclosure. In particular, features recited in various embodiments may be combined without departing from the spirit and teachings of the present disclosure, and all such combinations fall within the scope of the present disclosure.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The scope of the disclosure should, therefore, not be limited to the above-described embodiments, but should be determined not only by the following claims, but also by the equivalents of the following claims.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510726520.9A (published as CN120540770A) | 2025-05-30 | 2025-05-30 | Information display method and device and electronic equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120540770A | 2025-08-26 |
Family
ID=96777877
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510726520.9A (CN120540770A, pending) | Information display method and device and electronic equipment | 2025-05-30 | 2025-05-30 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120540770A (en) |
- 2025-05-30: CN application CN202510726520.9A filed; published as CN120540770A (status: pending)
Similar Documents
| Publication | Title |
|---|---|
| US11290775B2 | Computerized system and method for automatically detecting and rendering highlights from streaming videos |
| US11574470B2 | Suggested actions for images |
| RU2720536C1 | Video reception framework for visual search platform |
| US20190392487A1 | System, Device, and Method of Automatic Construction of Digital Advertisements |
| AU2010315818B2 | Multimode online advertisements and online advertisement exchanges |
| US9613268B2 | Processing of images during assessment of suitability of books for conversion to audio format |
| CN110134931B | Medium title generation method, medium title generation device, electronic equipment and readable medium |
| US9390181B1 | Personalized landing pages |
| US20170212892A1 | Predicting media content items in a dynamic interface |
| US20150317945A1 | Systems and methods for generating tinted glass effect for interface controls and elements |
| JP2019531547A | Object detection with visual search queries |
| WO2018149115A1 | Method and apparatus for providing search results |
| US20190163714A1 | Search result aggregation method and apparatus based on artificial intelligence and search engine |
| CN107256109A | Method for information display, device and terminal |
| KR20160105904A | Modifying advertisement sizing for presentation in a digital magazine |
| CN112818224A | Information recommendation method and device, electronic equipment and readable storage medium |
| CN113079417A | Method, device and equipment for generating bullet screen and storage medium |
| EP3905177A1 | Recommending that an entity in an online system create content describing an item associated with a topic having at least a threshold value of a performance metric and to add a tag describing the item to the content |
| CN113557504A | System and method for improved search and categorization of media content items based on their destinations |
| US10372782B1 | Content generation and experimentation using engagement tests |
| US20180046683A1 | Search word list providing device and method using same |
| CN115203539B | Media content recommendation method, device, equipment and storage medium |
| CN116886948A | Information display method and device, electronic equipment and storage medium |
| CN120540770A | Information display method and device and electronic equipment |
| US10878471B1 | Contextual and personalized browsing assistant |
Legal Events
| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |