CN120540654A - Method, apparatus, device, medium and program product for generating user interface
- Publication number
- CN120540654A (Application No. CN202510772927.5A)
- Authority
- CN
- China
- Prior art keywords
- code
- user
- user interface
- target
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the present disclosure provide a method, apparatus, device, storage medium, and computer program product for generating a user interface. The method includes determining, based on a user input, prompt information for the user input. The method also includes obtaining code for a target user interface, the code being generated by a target model based on the user input and the prompt information. The method further includes generating the target user interface according to the code, the target user interface displaying a style of the prompt information. According to the method of embodiments of the present disclosure, code for a presentation template can be generated dynamically, so that neither the presentation template nor the style of the prompt information needs to be programmed into the user device in advance, and the code can be rendered at the user device to produce a vivid user interface. This improves development efficiency, lowers the technical threshold, and improves the user experience.
Description
Technical Field
The present disclosure relates generally to the field of computers, and more particularly to methods, apparatuses, devices, computer-readable storage media, and computer program products for generating a user interface.
Background
Rapid advances in computer technology have changed the way information is distributed. People can use electronic devices to view information transmitted over the Internet, and new distribution carriers such as social media and short-video platforms keep emerging, allowing multiple forms of information such as text, audio, and video to reach massive numbers of users in a fragmented, interactive manner.
In software application development, view components are an important bridge between users and information. Using diverse design techniques, developers build view components in rich forms, such as lists, charts, and cards, to present complex information to users in an intuitive, vivid, and attractive way. View components can not only flexibly switch the displayed content according to the user's operations, but also guide the user to quickly understand information through interaction effects, visual hierarchy, and other designs. Well-designed view components optimize the user experience and have become an indispensable link in modern software design.
Disclosure of Invention
According to example embodiments of the present disclosure, a method, apparatus, device, computer storage medium, and computer program product for generating a user interface are provided.
In a first aspect of the present disclosure, a method for generating a user interface is provided. The method includes determining, based on a user input, prompt information for the user input. The method also includes obtaining code for a target user interface, the code being generated by a target model based on the user input and the prompt information. The method further includes generating the target user interface according to the code, the target user interface displaying a style of the prompt information.
In a second aspect of the present disclosure, an apparatus for generating a user interface is provided. The apparatus includes a prompt information determination module configured to determine, based on a user input, prompt information for the user input. The apparatus also includes a code acquisition module configured to acquire code for a target user interface, the code being generated by a target model based on the user input and the prompt information. The apparatus further includes a user interface generation module configured to generate the target user interface according to the code, the target user interface displaying a style of the prompt information.
In a third aspect of the present disclosure, there is provided an electronic device comprising at least one processing unit, at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, which instructions, when executed by the at least one processing unit, cause the electronic device to perform the method described according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon machine executable instructions which, when executed by a device, cause the device to perform a method according to the first aspect of the present disclosure.
In a fifth aspect of the present disclosure, there is provided a computer program product comprising computer executable instructions which, when executed by a processor, implement the method described in accordance with the first aspect of the present disclosure.
The summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary is not intended to identify key features or essential features of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
FIG. 1 illustrates a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;
FIG. 2 illustrates a flow chart of a method for generating a user interface according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a system architecture for generating a user interface according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of the modules of a software development kit according to an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of data processing of a software development kit according to an embodiment of the present disclosure;
FIG. 6 illustrates a schematic diagram of generating a prompt word according to embodiments of the present disclosure;
FIG. 7 illustrates an effect diagram for generating a user interface according to an embodiment of the present disclosure;
FIG. 8 illustrates a schematic block diagram of an example apparatus according to some embodiments of the disclosure;
FIG. 9 illustrates a block diagram of an example device that may be used to implement embodiments of the present disclosure.
The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements.
Detailed Description
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information. It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner in accordance with the relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly indicate that the requested operation will require obtaining and using the user's personal information. The user can thus autonomously choose, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that executes the operations of the technical solution of the present disclosure. As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control for the user to choose 'agree' or 'disagree' to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
In describing embodiments of the present disclosure, the term "comprising" and its like should be understood to be open-ended, i.e., including, but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like, may refer to different or the same object unless explicitly stated otherwise. Other explicit and implicit definitions are also possible below.
In the related art, when providing information in an application, a developer needs to code in advance a presentation template for presenting that information, so that when a user queries the information, it can be presented through the presentation template. This requires the developer to pre-code presentation templates; otherwise only plain-text information can be provided, resulting in low development efficiency and a high technical threshold.
In this regard, the present disclosure proposes a method for generating a user interface. For any user input, the method obtains or retrieves corresponding prompt information, which may be any data such as text or images, and invokes a target model to generate code for the style of the prompt information based on the user input and the prompt information. Code for a presentation template can thus be generated dynamically, and neither the presentation template nor the style of the prompt information needs to be programmed into the user device in advance. The method also renders the code at the user device to generate a vivid user interface. This improves development efficiency, lowers the technical threshold, and improves the user experience.
Embodiments of the present disclosure will be described in further detail below with reference to the drawings. FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. The example environment 100 includes an application 110 on a user device and a server 120. The application 110 may be any type of business application; for example, the application 110 may be a social application, a short-video application, a news-media application, and so on. The server 120 may be deployed with a target model (e.g., a language model), which is a trained model capable of generating corresponding content upon user request. In some embodiments, the application 110 communicates with the server 120 over a network 130. The network 130 may include a wired network, a wireless network, or a combination thereof for providing communication between the application 110 and the server 120. In this embodiment, the method of the embodiments of the present disclosure is performed by the application 110.
As shown in FIG. 1, a user may launch the application 110 on the user device to handle a task. In some embodiments, the application provides an interface to receive user input; for example, a search box may be provided and user input received from it. For example, the user may enter the user input 112 'How is the weather?' in the application 110 and click search. The application 110 may perform the methods of embodiments of the present disclosure based on this operation. In some embodiments, in addition to the search box typically used for text input, various interaction modes such as voice input and gesture input are supported. Taking voice input as an example, the application integrates a speech recognition engine to convert the user's speech into text in real time. The user can trigger the corresponding operation by typing content such as 'How is the weather?' into the search box and clicking the search button, or by speaking the query aloud. The application 110 may perform the method of the embodiments of the present disclosure based on the user's operation.
To enable most applications to perform the methods of embodiments of the present disclosure, the code, models, and modules for these methods may be pre-integrated into a software development kit (SDK). The SDK adopts a modular design, and each module has a clear interface definition, so that developers can conveniently select and integrate modules according to actual requirements. The SDK may also provide detailed development documentation and example code, lowering the barrier for developers. By deploying the SDK into the application 110 in advance, the application 110 is given the capabilities of the method, which greatly improves the method's generality and applicability.
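As a minimal illustration of what such an SDK surface could look like, the following TypeScript sketch defines a hypothetical entry point; all names (`UiGenSdk`, `generateInterface`, `UserContext`) are assumptions for illustration and do not come from the disclosure itself.

```typescript
// Hypothetical sketch of the SDK interface described above; all names are
// illustrative placeholders, not a published library.
export interface UserContext {
  deviceModel: string;            // e.g. reported by the host application
  systemVersion: string;          // OS version of the user device
  renderFrameworkVersion: string; // version of the on-device rendering framework
}

export interface GenerateResult {
  code: string;                   // UI code produced by the target model
  viewComponent: "card" | "page"; // which component will present it
}

export interface UiGenSdk {
  /** Pass the raw user input and context through to the SDK's main module. */
  generateInterface(userInput: string, ctx: UserContext): Promise<GenerateResult>;
}
```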
In some embodiments, the application 110 determines the prompt information for the user input 112 based on the user input 112. The prompt information includes answer information for answering the user input 112, as well as information related to the answer. For example, the prompt information displayed in the target user interface 140 covers the geographic location of the user device and the weather at that location, such as sunny or rainy conditions, together with various weather-related data such as specific temperature values and humidity information, for example the temperature 142, the weather image 144, and the building image 146 shown in FIG. 1. Of course, obtaining information related to the user's privacy requires the user's explicit authorization.
In some embodiments, a lightweight semantic analysis model may be deployed in the user device to analyze user input and determine the user's intent. In some embodiments, the semantic analysis model is based on natural language processing techniques and fine-tuned from a pre-trained language model. By performing word segmentation, part-of-speech tagging, named entity recognition, and other processing on the user's input text, the semantic analysis model analyzes the input quickly and efficiently and determines the user's intent. In addition, according to that intent, the semantic analysis model can accurately determine the application programming interface (API) to be called from the software development kit. Strict permission verification can be performed during API calls, so that only legitimate calls can obtain the corresponding data. The application 110 obtains the corresponding prompt information by calling the determined API and caches the obtained data; if the user requests the same content again within a certain time, it is read directly from the cache, improving the response speed.
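A minimal sketch of this lookup-with-cache flow follows, assuming a stand-in `analyzeIntent` for the on-device semantic model and a hypothetical weather endpoint; neither name is from the disclosure.

```typescript
// Sketch of on-device prompt-information retrieval with caching.
interface Intent {
  category: "weather" | "other";
  entities: string[];
}

function analyzeIntent(userInput: string): Intent {
  // Stand-in for word segmentation / NER / intent classification.
  return userInput.toLowerCase().includes("weather")
    ? { category: "weather", entities: ["region B"] }
    : { category: "other", entities: [] };
}

const cache = new Map<string, { data: string; expiresAt: number }>();
const TTL_MS = 60_000; // cached prompt information lives for one minute

async function getPromptInfo(userInput: string): Promise<string> {
  const hit = cache.get(userInput);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.data; // repeated request within the TTL: read from cache
  }
  let data = "";
  if (analyzeIntent(userInput).category === "weather") {
    // Permission verification would gate this call in a real deployment.
    const resp = await fetch("https://example.com/api/weather?loc=region-B");
    data = await resp.text();
  }
  cache.set(userInput, { data, expiresAt: Date.now() + TTL_MS });
  return data;
}
```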
Although not shown, in some embodiments the determination of the prompt information may be accomplished by a remote server, such as the server 120. In some cases, some information can be retrieved over the Internet, in which case a corresponding API (e.g., a temperature API) may be called to retrieve the latest temperature data from the remote server as prompt information. In some embodiments, if the prompt information includes data related to user privacy, the application 110 encrypts the transmitted data using the SSL/TLS protocol during transmission to ensure data security and privacy.
In some embodiments, the application 110 obtains code 122 for the target user interface, the code 122 being generated by the target model on the server 120 based on the user input 112 and the prompt information. The application 110 may send the prompt information to the server 120 over the network 130. The language model on the server 120 analyzes the user input 112 and the content of the prompt information and determines the style of the prompt information, thereby generating the code 122. In some embodiments, the language model performs only the style arrangement process based on the prompt information, processing the prompt information into view controls of various styles without taking on work beyond style arrangement, which facilitates generating code 122 suited to rendering at the user device. In some embodiments, the server 120 may send the code 122 to the application 110 on the user device over the network 130.
In some embodiments, the application 110 generates the target user interface 140 from the code 122, the target user interface 140 displaying a style of the prompt information. In some embodiments, the application 110 builds the code 122 based on a rendering framework on the user device, obtaining a build product corresponding to the rendering framework, wherein resource files 114, including a water-drop image representing humidity, a landmark building image, a weather image, and the like, are associated into the view controls of the build product. The application 110 passes the build product into its own business logic through an interface and renders it, ultimately generating an intuitive and vivid target user interface 140 that provides a good user experience.
As shown in FIG. 1, the target user interface 140 may vividly display the weather image 144 representing a sunny day and the average temperature information 142, and also display the physical address 'region B' in bold, along with time information, temperature-range information, humidity information, and the landmark building image 146 of 'region B'. The user can intuitively see the local weather conditions and related information. In some embodiments, the application 110 verifies the code 122, for example replacing font styles that the user device cannot display with compatible font styles, ensuring that the code 122 can be rendered at the user device.
Compared with the traditional approach, the method of the present disclosure does not require developers to manually pre-write large numbers of presentation-template codes in the user device or application. By determining the prompt information and invoking the target model to generate code for the prompt information, the method significantly improves development efficiency while reducing the demands on developers' technical expertise. In addition, the generated user interface can be dynamically adjusted according to the user input, with rich content and attractive presentation, greatly improving the user experience.
The network 130 has a theoretical bandwidth, which refers to the maximum transmission speed supported by the network 130 and represents the maximum amount of data the network 130 can transmit under ideal conditions, typically measured in bits per second (bps). For example, if the theoretical bandwidth of the network 130 is 100 Mbps, it can ideally transmit one hundred megabits of data per second. In practice, however, the actual transmission speed may fall short of 100 Mbps due to other factors present in the network (e.g., signal interference, bandwidth sharing, transmission delay).
As understood by those of ordinary skill in the art, the server 120 may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, big data, and artificial intelligence platforms. Servers may be connected directly or indirectly via wired or wireless communication, and the present disclosure is not limited in this regard.
The user device may be any type of mobile computing device, including a mobile computer (e.g., personal digital assistant, laptop computer, notebook computer, tablet computer, netbook, etc.), a mobile phone (e.g., cellular phone, smart phone, etc.), a wearable computing device (e.g., smart watch, head-mounted device, including smart glasses, etc.), or other type of mobile device. In some embodiments, the user device may also be a stationary computing device, such as a desktop computer, a gaming machine, a smart television, or the like.
It should be understood that the architecture and functionality in the example environment 100 are described for illustrative purposes only and are not meant to suggest any limitation as to the scope of the disclosure. Embodiments of the present disclosure may also be applied to other environments having different structures and/or functions.
Processes according to embodiments of the present disclosure will be described in detail below in conjunction with other figures. For ease of understanding, specific data set forth in the following description are intended to be exemplary and are not intended to limit the scope of the disclosure. It will be appreciated that the embodiments described below may also include additional actions not shown and/or may omit shown actions, the scope of the present disclosure being not limited in this respect.
FIG. 2 illustrates a flow chart of a method 200 for generating a user interface according to some embodiments of the present disclosure. In this embodiment, the method may be performed by the application 110. At block 202, based on a user input, prompt information for the user input is determined. User input refers to the instructions or request information entered by the user in the application 110, and may include text (e.g., a query sentence entered in a search box), speech (text converted by a speech recognition engine), gestures (specific swipe or click combinations), and the like; it is the initial signal that triggers the subsequent processing flow. The prompt information refers to the set of relevant data obtained or retrieved for the user input in order to answer the user's question and meet the user's needs, and may cover various data types such as text, images, video, and numerical values. For example, when a user queries a tourist attraction, the prompt information may include the attraction's introduction text, live-scene pictures, and the like.
At block 204, code for a target user interface is obtained, the code being generated by a target model based on the user input and the prompt information. The target model may be, for example, a language model based on the Transformer architecture, capable of processing the user input and prompt information and generating target user interface code from them. In some embodiments, the target model is deployed in the user device. In some embodiments, the target model is deployed in a server. Because the prompt information has already been determined, the target model does not need to acquire other information, and its capability can be devoted to arranging the style of the prompt information, so the generated code is easier to render across devices.
At block 206, the target user interface is generated from the code, the target user interface displaying a style of the prompt information. The target user interface (UI) is the visual interface rendered on the user device by the application 110 according to the code generated by the target model, presenting the prompt information in an intuitive, attractive, and interaction-friendly manner.
According to the method of embodiments of the present disclosure, for any user input the corresponding prompt information may be any data such as text or images, and the target model is invoked to generate code for the style of the prompt information based on the user input and the prompt information. Code for a presentation template can thus be generated dynamically, and neither the presentation template nor the style of the prompt information needs to be programmed into the user device in advance. The method also renders the code at the user device to generate a vivid user interface. This improves development efficiency, lowers the technical threshold, and improves the user experience.
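The three blocks of method 200 can be sketched end to end as follows; every helper here is an illustrative stub standing in for the retrieval, model, and rendering steps described above, not the disclosure's actual implementation.

```typescript
// Sketch of method 200 (blocks 202, 204, 206) with stub helpers.
async function determinePromptInfo(userInput: string): Promise<string> {
  return `answer for: ${userInput}`; // block 202: local or API-based retrieval
}

async function fetchGeneratedCode(userInput: string, promptInfo: string): Promise<string> {
  return `<div>${promptInfo}</div>`; // block 204: stand-in for the target model
}

function renderTargetUi(code: string): void {
  console.log("rendering:", code); // block 206: stand-in for the rendering framework
}

async function generateUserInterface(userInput: string): Promise<void> {
  const promptInfo = await determinePromptInfo(userInput);      // block 202
  const code = await fetchGeneratedCode(userInput, promptInfo); // block 204
  renderTargetUi(code);                                         // block 206
}

generateUserInterface("How is the weather?");
```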
FIG. 3 shows a schematic diagram of a system architecture for generating a user interface according to an embodiment of the present disclosure. In some embodiments, a user input 312 is received at a user device 310, and an application on the user device 310 may pass the user input 312 through to the main module in a software development kit 314 by invoking an interface of the software development kit 314. These interfaces contain clear parameter definitions and return-value conventions to ensure the accuracy and stability of data transfer. The main module may select a target application programming interface from an application programming interface library according to the user input. For example, the user input is 'tell me the game score of team M versus team N'; in some embodiments, the main module may use the target application programming interface to access the data side 322 of a server 320 over the hypertext transfer protocol and obtain the game score of team M versus team N from it. Before the call, the main module may validate and pre-process the parameters required by the API, ensuring that the parameter types, formats, and value ranges meet the requirements. Taking the game score of team M versus team N as an example, the main module constructs a request message over the hypertext transfer protocol using the target application programming interface. The request message contains the necessary request header information and request body data (such as parameters for the queried game session, time range, and so on).
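A sketch of this validate-then-request step follows; the endpoint and field names are assumptions for illustration, not the actual API of this disclosure.

```typescript
// Hypothetical parameter validation and HTTP request construction for the
// score query described above.
interface ScoreQuery {
  session: string; // which game session to look up
  from: string;    // ISO date, start of the time range
  to: string;      // ISO date, end of the time range
}

function validate(q: ScoreQuery): void {
  // Pre-call check: types, formats, and value ranges must meet requirements.
  const isoDate = /^\d{4}-\d{2}-\d{2}$/;
  if (!isoDate.test(q.from) || !isoDate.test(q.to)) {
    throw new Error("dates must be YYYY-MM-DD");
  }
}

async function fetchGameScore(q: ScoreQuery): Promise<unknown> {
  validate(q);
  const resp = await fetch("https://example.com/api/scores", {
    method: "POST",
    headers: { "Content-Type": "application/json" }, // request header information
    body: JSON.stringify(q),                         // request body data
  });
  if (!resp.ok) throw new Error(`score query failed: ${resp.status}`); // error handling
  return resp.json(); // integrity checks would follow in the main module
}
```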
In some embodiments, upon receiving the request, the data side 322 of the server 320 retrieves the game score information from a database. During data retrieval, techniques such as index optimization and query caching are adopted to improve query efficiency. In some embodiments, after preprocessing operations such as data cleaning and format conversion, the retrieved data is packaged into a response message according to the hypertext transfer protocol specification, and the score information is sent to the main module. After receiving the response, the main module performs integrity checks and error handling on the data, ensuring the accuracy of the prompt information.
In some embodiments, the main module may also obtain context information, which may be any information associated with the user input, such as the device model of the user device, the version of the rendering framework in the user device, the system version of the user device, a description of the application, and so forth. In some embodiments, the main module may determine a target prompt word based on the user input, the context information, the prompt information, and the view component. In some embodiments, the main module may obtain code generated by the target model based on the target prompt word. For example, the main module may send the target prompt word to a large language model 324 of the server 320. After the large language model 324 generates the code, the server 320 sends the code to the main module in the software development kit 314 of the user device 310, for example over the hypertext transfer protocol. The main module may compile and modularize the code to obtain a build product. The user device 310 may render the build product using the rendering framework to obtain a user interface 316. In this embodiment, for user input processing, a lightweight language model quickly and accurately analyzes the user's intent, and the prompt information is obtained efficiently by combining API call specifications with local/remote data-acquisition strategies, guaranteeing the timeliness and accuracy of the information.
FIG. 4 shows a schematic diagram of the modules of a software development kit according to an embodiment of the present disclosure. In this embodiment, a user device 400 passes user input through to a main module 402. In some embodiments, the lightweight language model built into the main module 402 is designed on the Transformer architecture, pre-trained on large-scale natural language processing datasets, and then fine-tuned for the user-input intent analysis task. After the language model performs word segmentation, part-of-speech tagging, named entity recognition, and other operations on the user input, an intent classifier maps the user input into predefined intent categories using a deep-learning-based classification algorithm (such as a multi-layer perceptron or a convolutional neural network). According to the intent category, the main module 402 determines, from a preset rule base, whether a target application programming interface in an application programming interface library 404 needs to be invoked. The rule base is built using decision trees or a rule engine, ensuring efficient and accurate decision logic. The application programming interface library 404 may be deployed with a number of application programming interfaces for the main module 402 to call.
In some embodiments, if the target application programming interface does not need to be invoked, the main module 402 may generate a target instruction for retrieving the prompt information from the user device. For example, if the user input queries the current time, the main module 402 does not need to retrieve time information remotely but only from the user device. In this case, the main module 402 may directly generate an instruction to acquire time information on the user device, and acquire the time information as the prompt information. In some embodiments, when generating the instruction the main module 402 may take the device's time-zone setting, time-format preference, and so on into account for personalized configuration, ensuring that the acquired time information meets the user's needs. The acquired time information is formatted into a designated data format and stored in a memory buffer as prompt information awaiting subsequent processing. In some embodiments, if a target application programming interface does need to be invoked, the main module 402 may use it to obtain the prompt information, which the target application programming interface retrieves from the data side.
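The branch just described can be sketched as follows; the endpoint is a hypothetical placeholder, and the local path simply reads the device clock as the prompt information.

```typescript
// Sketch: a time query is answered from the device itself; other intents go
// through the selected target application programming interface.
async function resolvePromptInfo(intentCategory: "time" | "other"): Promise<string> {
  if (intentCategory === "time") {
    // No API call needed: read from the device, then unify into a designated
    // format (time zone and format preferences would personalize this).
    return new Date().toISOString();
  }
  // Otherwise invoke the target application programming interface.
  const resp = await fetch("https://example.com/api/prompt-info");
  return resp.text();
}
```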
In some embodiments, an application on the user device 400 provides not only prompt information but also a view component to the server 420. The view component is the component used to present the prompt information. After obtaining the user input, the main module 402 determines the appropriate view component type based on the user input. In some embodiments, the view component may be one of an insertable card or a page. In some embodiments, if the user input belongs to a first category, the insertable card is determined to be the view component that presents the prompt information. For example, if little information is predicted to need presenting, an insertable card may be used as the view component. In some embodiments, if the user input belongs to a second category, the page is determined to be the view component that presents the prompt information. For example, if more information is predicted to need presenting, the page may be chosen as the view component.
In some embodiments, the main module 402 employs a multi-dimensional decision strategy. In some embodiments, a mapping table from user-input categories to view components is established, built through historical data statistics and user behavior analysis. After obtaining the user input, the main module 402 determines the amount of information to be presented and also comprehensively considers factors such as the complexity and relatedness of that information. For highly structured, strongly related information (such as the multiple attributes of a product detail page), the page tends to be selected as the view component even if the amount of information is small; for scattered, independent information (such as a simple weather warning), the insertable card is preferred. In some embodiments, the main module 402 may also consider hardware parameters such as the screen size and resolution of the user device during view component selection. For small-screen devices (e.g., smart watches), an insertable card or a compact page suitable for a small screen may be adopted even when there is more information. In some embodiments, the main module 402 may predefine an initial style of the view component, such as the corner radius of the card or the background color of the page, according to the application's theme style and the user's personalized settings.
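A compact sketch of this multi-dimensional decision follows; the thresholds and field names are illustrative assumptions, not values from the disclosure.

```typescript
// Hypothetical view-component selection combining predicted information
// volume, structure, and screen size, per the strategy described above.
type ViewComponent = "insertableCard" | "page";

interface Decision {
  predictedItemCount: number; // how much information will be shown
  highlyStructured: boolean;  // e.g. many related attributes of one entity
  screenWidthPx: number;      // hardware parameter of the user device
}

function selectViewComponent(d: Decision): ViewComponent {
  if (d.screenWidthPx < 400) return "insertableCard"; // small screens: compact card
  if (d.highlyStructured) return "page";              // structured, related info
  return d.predictedItemCount > 5 ? "page" : "insertableCard";
}

// Example: a simple weather warning on a phone stays a card.
console.log(selectViewComponent({
  predictedItemCount: 2, highlyStructured: false, screenWidthPx: 1080,
})); // "insertableCard"
```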
In some embodiments, the main module 402 may inform a view module 406 in the software development kit of the selected view component. The view module 406 may invoke another lightweight language model to generate the specific view component, for example generating code indicating the size, color, and other styles of the view component. The view module 406 may send the user input, view component, and prompt information to the server 420. If view component generation fails, the view module 406 may send an exception report to the main module 402 via a communication channel with the main module 402. In some embodiments, the view module 406 may call certain specific functions when generating the view component and may inform the main module 402 of the called functions. In some embodiments, the view module 406 may collect statistics about the data involved in generating the view component and report them to the main module 402 via the communication channel. In this way, the main module 402 can track events across the full life cycle of the view component and conveniently adjust the view component's data and strategy in a timely manner.
The server 420 has an interface that invokes a large language model 430 and can schedule requests from the user device 400 using a load balancer 422. When the server 420 sends a request from the user device 400 to the large language model 430, the large language model 430 may generate code suited to the view component based on the prompt information, the user input, and the view component, and send the code to the server 420, where the code indicates the style of the prompt information within the view component. The server 420 may receive the code using a receiver 424 and send it to the user device 400. In some embodiments, some code generated by the large language model may not be compilable at the user device 400; for example, code for NativeWind styles may not compile on a device that is not equipped with the corresponding rendering framework. The server 420 may therefore use a compiler 426 to compile such specific classes of code ahead of time, i.e., partially compile the code generated by the large language model 430, and send the remaining code along with the compiled product to the user device 400.
The user device 400 may verify the code using a verifier 414. In some embodiments, the verifier 414 inspects the code to identify code for styles the user device lacks. In some embodiments, the verifier 414 determines replacement code for the code targeting styles the user device lacks, where the replacement code targets styles the user device possesses. In some embodiments, the verifier 414 modifies the code using the replacement code to update it. For example, the verifier 414 systematically scans the target code based on static code analysis techniques. In some embodiments, the verifier 414 breaks the code into individual lexical units via a lexical analyzer and builds an abstract syntax tree according to the grammar rules of the programming language.
In some embodiments, the verifier 414 has a built-in device style support database that stores the style support information of the user device 400 under the influence of multi-dimensional factors such as operating system, browser version, and hardware characteristics. For example, the operating-system level records font rendering characteristics, color management schemes, and so on. The verifier 414 compares the style attributes involved in the code against the device style support database line by line; once a style the device cannot support is found in the code, such as a font incompatible with a particular browser version, the corresponding code segment can be precisely marked.
In some embodiments, the user device 400 may adjust the received code to correspond to the rendering framework of the user device 400, thereby facilitating cross-device rendering of the code. In some embodiments, the verifier 414 combines rule-based and machine-learning approaches when determining the replacement code. On the one hand, a complete style-substitution rule base is preset, jointly built from a large amount of device compatibility test data and industry standards. For example, when the code is detected to use custom fonts the device does not support, the rule base may preferentially recommend similar fonts from the device's default font family, which facilitates cross-device rendering of the code.
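The rule-based part of this substitution can be sketched as a simple CSS rewrite; the fallback table and font names below are assumptions for illustration only.

```typescript
// Sketch of the verifier's rule-based font substitution: unsupported font
// families found in generated CSS are swapped for supported fallbacks.
const FONT_FALLBACKS: Record<string, string> = {
  FancyCustomFont: "sans-serif", // hypothetical unsupported custom font
  UnsupportedSerif: "serif",
};

function replaceUnsupportedFonts(css: string, supported: Set<string>): string {
  return css.replace(/font-family:\s*([^;]+);/g, (match, family: string) => {
    const name = family.trim().replace(/^["']|["']$/g, "");
    if (supported.has(name)) return match;      // the device supports this font
    const fallback = FONT_FALLBACKS[name] ?? "sans-serif";
    return `font-family: ${fallback};`;         // marked segment rewritten
  });
}

// Example: a device that only ships system font families.
console.log(replaceUnsupportedFonts(
  'h1 { font-family: "FancyCustomFont"; color: #333; }',
  new Set(["sans-serif", "serif"]),
)); // h1 { font-family: sans-serif; color: #333; }
```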
In some embodiments, the user device 400 may use a builder 416 to build the code, e.g., compile it into binary code corresponding to the rendering framework of the user device 400. The user device 400 may pass the build product into a runtime environment 408 and render UI elements 412 (e.g., various view controls) there with the application's rendering framework 410 to obtain the user interface. Building the code into binary code corresponding to the rendering framework of the user device 400 facilitates cross-device rendering of the code.
In some embodiments, the generated user interface requires real-time data updates; for example, for a user input about the game score of team M versus team N, the score may change if the game is still in progress. In some embodiments, the main module 402 may periodically invoke the target application programming interface to obtain the latest game score and send it to the view module 406 via the communication channel. In some embodiments, the view module 406 uses the latest game score to modify the code, enabling the code to be updated. The resulting user interface can thus dynamically reflect the latest game score.
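A minimal polling sketch of this refresh loop follows; the interval, endpoint, and callback shape are illustrative assumptions.

```typescript
// Sketch of the periodic score refresh: the main module polls the score API
// and hands new data to the view module's update hook.
function startScorePolling(
  gameId: string,
  onScore: (score: string) => void, // e.g. the view module modifying the code
  intervalMs = 30_000,
): () => void {
  const timer = setInterval(async () => {
    const resp = await fetch(`https://example.com/api/scores/${gameId}`);
    if (resp.ok) onScore(await resp.text()); // view module patches the UI code
  }, intervalMs);
  return () => clearInterval(timer); // call to stop once the game has ended
}
```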
In the embodiment of FIG. 4, when determining the view component, factors such as information characteristics, device parameters, and user preferences are considered comprehensively, and the view component is matched intelligently, improving the rationality of the information display. The system architecture therefore greatly improves user-interface generation efficiency, lowers the development threshold, and reduces manual coding workload, and the generated user interface adapts to multiple devices and scenarios, significantly improving the user experience.
FIG. 5 shows a schematic diagram of data processing of a software development kit according to an embodiment of the present disclosure. In this embodiment, at 502, user input is obtained by the business logic of the application and passed through to a module of the software development kit. In some embodiments, an event-driven architecture is adopted: when a user-input event fires, the application passes the user input transparently to the corresponding SDK module through a predefined interface, by function call or by message passing. At 504, the user input is received by a module in the SDK. At 506, context information is acquired. Acquiring context information involves multi-dimensional data collection. In some embodiments, information such as device model, screen resolution, operating system version, and hardware performance parameters is obtained at the device level by calling APIs. In some embodiments, data such as the application's current version number, the user's login status, and historical operation records are collected at the application level.
At 510, data is retrieved according to the user input to obtain the prompt information. For example, if the user input is a query-type instruction such as 'nearby coffee shops', the SDK module may call a map API and a merchant database API to obtain related prompt information such as merchant names, addresses, and ratings through latitude-longitude positioning and keyword search. During data retrieval, a caching mechanism is adopted: the local cache is checked first for qualifying data, and if present, it is read directly, reducing network requests.
At 508, the prompt information for the user input is received. At 512, the user input, context information, and prompt information are collated into a prompt word. For example, the task type (such as 'generate user interface code') is stated at the beginning of the prompt word, followed in turn by the user input content, key context information (such as device resolution and application theme), and the prompt information details; specific separators and markup symbols are used to strengthen the semantic structure so the large language model can understand it. At 514, the large language model is accessed. The large language model arranges the style of the prompt information based on the prompt word and generates code. After receiving the prompt word, the large language model arranges the style of the prompt information based on its pre-trained knowledge and the parameters fine-tuned for the interface-generation task. Using the attention mechanism of deep learning, it analyzes the relations among the parts of the prompt word, determines style attributes such as the layout, color, font, and animation effects of the view controls, and generates code according to language specifications such as HTML, CSS, and JavaScript.
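The collation at 512 might look like the following sketch; the section markers, field names, and example values are assumptions for illustration.

```typescript
// Illustrative assembly of the prompt word: task type first, then user
// input, key context, and prompt information, joined with explicit
// separators so each part is easy for the large language model to parse.
interface PromptParts {
  userInput: string;
  context: { resolution: string; theme: string };
  promptInfo: string;
}

function buildPromptWord(p: PromptParts): string {
  return [
    "### TASK: generate user interface code",
    `### USER INPUT:\n${p.userInput}`,
    `### CONTEXT:\nresolution=${p.context.resolution}; theme=${p.context.theme}`,
    `### PROMPT INFO:\n${p.promptInfo}`,
  ].join("\n\n");
}

console.log(buildPromptWord({
  userInput: "How is the weather?",
  context: { resolution: "1080x2340", theme: "light" },
  promptInfo: "region B, 2025-05-20, sunny, 25 °C, humidity 51%",
}));
```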
At 516, the code is received over the hypertext transfer protocol. At 518, the code is preprocessed, for example for style replacement: a built-in device style compatibility rule base replaces styles in the code not supported by the user device (e.g., specific fonts or CSS properties) with compatible substitutes. Resource references in the code (such as images and script files) may also be path-checked and corrected so the resources load correctly. At 520, a build product for the user interface is obtained according to the code. At 522, the build product is rendered in conjunction with the business logic to generate and display the user interface. In some embodiments, if the view component is an insertable card, the build product is rendered as a view node using the rendering framework to serve as the target user interface. Further, in some embodiments, if the view component is an insertable card, the target user interface is inserted into the view tree of the current page, and the updated view tree is displayed. In this way, the insertable card can be displayed directly in the application's current interface, improving the user experience.
In some embodiments, if the view component is a page, the build product is rendered as a view tree using the rendering framework to serve as the target user interface. Further, in some embodiments, if the view component is a page, the current page is overlaid with the target user interface, and the target user interface is displayed. When the view component is a page, the rendering framework can adopt a whole-page replacement strategy and parse the page-level view description information in the build product, including the page layout structure, style sheets, script files, and so on. The rendering framework may create a completely new virtual DOM tree representing the entire target user interface.
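The two rendering paths can be sketched as follows, using the browser DOM as a stand-in for the rendering framework's view tree; a real implementation would target the framework's virtual tree instead.

```typescript
// Sketch: a card is inserted as a new node in the current page's view tree,
// while a page replaces the current view tree entirely.
function mountBuildProduct(html: string, component: "insertableCard" | "page"): void {
  if (component === "insertableCard") {
    const card = document.createElement("div"); // new view node for the card
    card.innerHTML = html;
    document.body.appendChild(card);            // insert into the current view tree
  } else {
    document.body.innerHTML = html;             // whole-page replacement strategy
  }
}
```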
In some embodiments, whether the operation is a card insertion or a page overlay, the rendering framework may perform performance monitoring after it completes, evaluating the rendering process by collecting key performance indicators. If a performance problem is found, the rendering strategy can be adjusted automatically, for example by optimizing the order of DOM operations or reducing unnecessary re-rendering, to ensure efficient display of the user interface.
In this embodiment, when the prompt word is determined, the user input gives an explicit direction of demand, the context information (device model, system version, network environment, etc.) provides the scene background, and the prompt information is the content body. This avoids the limitations of any single source of information, enables the language model to understand the user's intent accurately, ensures the accuracy and completeness of the prompt information, and avoids information errors.
FIG. 6 illustrates a schematic diagram of generating a prompt word according to embodiments of the present disclosure. In this embodiment, at 602, the user's intent is determined by analyzing the user input. For example, when the user inputs 'How is today's weather?', it may be determined that the user wants to know the weather conditions at the current local date. In some embodiments, the user input is segmented, part-of-speech tagged, and syntactically parsed to establish the basic components and structural relations of the sentence. For example, the user input 'How is today's weather?' is split into parts of speech such as 'today' (temporal modifier), 'weather' (subject), and 'how' (interrogative pronoun), and recognized as a question about current weather conditions.
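The parse at 602 might produce structures like the following sketch; the tags and intent fields are simplified stand-ins for a real NLP pipeline, not the disclosure's actual output format.

```typescript
// Illustrative parse of "How is today's weather?" into the components
// described above.
interface Token {
  text: string;
  role: string;
}

const tokens: Token[] = [
  { text: "today", role: "temporal modifier" },
  { text: "weather", role: "subject" },
  { text: "how", role: "interrogative pronoun" },
];

const intent = {
  type: "weather_query",
  date: "current",     // resolved from "today"
  needsLocation: true, // auxiliary information to retrieve at 604
};

console.log(tokens, intent);
```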
In some embodiments, text prompt information is determined based on the user input. At 604, auxiliary information required to resolve the user's intent, such as 'region B' and '2025-05-20' in FIG. 1, is retrieved according to that intent. If the user intends to query the weather of a specific region, the user device can retrieve auxiliary information such as the region name and query time, which helps obtain the answer more precisely. At 606, answer information is retrieved based on the auxiliary information and the user intent: using the acquired auxiliary information and intent, the answer content the user really needs is found across data sources. For example, combining region and time, the temperature data at the corresponding moment is found and a specific result is provided to the user; the temperature of region B on 2025-05-20, 25 °C as obtained from a weather API, is taken as the answer information.
In some embodiments, image prompt information is determined from the text prompt information. At 608, image information, such as the weather image 144 and the building image 146 in FIG. 1, is acquired with reference to the answer information and auxiliary information. For example, based on weather conditions (e.g., sunny, rainy) and regional characteristics (e.g., landmark buildings), corresponding weather icons and representative regional building images are retrieved. Such a building image 146 can intuitively tell the user which region's weather is being reported. At 610, context information is obtained for the user input; the context information may be various kinds of information, such as the configuration information of the user device discussed above. In some embodiments, the text prompt information and the image prompt information are together determined to be the prompt information. At 612, the prompt word is generated based on the user input, the auxiliary information, the answer information, the context information, and the image information. In this embodiment, auxiliary information, answer information, and image information are retrieved according to the intent, and key content is integrated from multiple data sources, ensuring the information is comprehensive and accurate and providing rich, detailed results to the user.
In some embodiments, second prompt information for the user input is determined based on the user input. For example, when the user inputs 'query the weather' again, the previous prompt information may have been basic information such as the current temperature and weather conditions, while the second prompt information may extend further to more detailed content such as the air quality index and the weather trend over the next 24 hours. In some embodiments, the content of the second prompt information is adjusted dynamically in combination with the user's historical interaction records and usage preferences. Supplementary information likely to interest the user is determined by analyzing how often and how long the user has viewed different types of information in the past. For example, if the user frequently focuses on ultraviolet intensity data, that content is preferentially included in the second prompt information.
In some embodiments, second code for a second user interface is obtained, where the second code is generated by the target model based on the user input and the second prompt information. In some embodiments, the second user interface is generated from the second code, where the second user interface differs from the target user interface. For example, different attribute configurations may be set for the view controls of the second user interface: interactive functions such as zooming and dragging are added to chart-type data display components to improve data visualization, and fold/unfold buttons are added to long text content to improve space utilization. Because the prompt information generated each time is not necessarily identical, and the code indicating the style of the prompt information is not necessarily identical either, different user interfaces are generally obtained even when the same user input is entered repeatedly.
FIG. 7 illustrates an effect diagram of generating a user interface according to an embodiment of the present disclosure. In a business application A 710, the user may provide user input 712, 'How is the weather?'. When the user input 712 is provided for the first time, an insertable card 714 may be displayed. In the insertable card 714, the business application A 710 provides the user's geographic location 'region B', displayed in bold in the upper-left corner of the card. The insertable card 714 also displays the average temperature '20°', the date, the temperature range '12°-26°', the humidity '51%' with an image representing humidity, as well as the landmark building image 716 of region B.
If the user again provides the user input 712 'How is the weather?', an insertable card 720 may be displayed. The insertable card 720 also shows weather information, but compared with the insertable card 714 the style of the prompt information is different. For example, the humidity information and its accompanying image are not shown, the landmark building image 716 is not shown, and in their place is a happy expression 722 corresponding to the 'sunny' weather.
If the user provides the input 'How are the game scores of team M and team N?', an insertable card 730 may be displayed. The insertable card 730 shows that the game is basketball, that the teams are team M and team N, and displays each basketball team's logo. The insertable card 730 also shows the score of the most recent game (i.e., on May 20, 2025) as 110 to 104 and indicates that the game has ended. In this embodiment, whether the user provides the same or different user inputs, a user interface for the user input can be generated dynamically, vividly displaying prompt information for the input and improving the user experience.
FIG. 8 illustrates a schematic block diagram of an apparatus 800 for generating a user interface according to some embodiments of the present disclosure. The apparatus 800 for generating a user interface may be implemented in software, hardware, or a combination of both. As shown in FIG. 8, the apparatus 800 includes a prompt information determination module 810, a code acquisition module 820, and a user interface generation module 830.
In some embodiments, the prompt information determination module 810 may be configured to determine, based on a user input, prompt information for the user input. The code acquisition module 820 may be configured to acquire code for a target user interface, the code being generated by a target model based on the user input and the prompt information. The user interface generation module 830 may be configured to generate the target user interface according to the code, the target user interface displaying a style of the prompt information.
In some embodiments, the prompt information determination module 810 includes: a first determination module configured to determine, by analyzing the intent of the user input, whether a target application programming interface needs to be invoked; a first instruction generation module configured to generate a target instruction in response to the target application programming interface not needing to be invoked, where the target instruction is used to obtain the prompt information from the user device; and a prompt information acquisition module configured to obtain the prompt information using the target application programming interface in response to the target application programming interface needing to be invoked, where the prompt information is retrieved by the target application programming interface from the data side.
In some embodiments, the prompt information determination module 810 includes: a second determination module configured to determine text prompt information based on the user input; a third determination module configured to determine image prompt information based on the text prompt information; and a fourth determination module configured to determine the text prompt information and the image prompt information as the prompt information.
In some embodiments, the apparatus 800 further includes: a fifth determination module configured to determine a category of the user input by analyzing its intent; a sixth determination module configured to determine an insertable card as the view component presenting the prompt information in response to the user input belonging to a first category; and a seventh determination module configured to determine a page as the view component presenting the prompt information in response to the user input belonging to a second category.
In some embodiments, the code acquisition module 820 includes: a second acquisition module configured to acquire context information of the user input; an eighth determination module configured to determine a target prompt word according to the user input, the context information, the prompt information, and the view component; and a third acquisition module configured to acquire the code, wherein the code is generated by the target model based on the target prompt word.
In some embodiments, the third acquisition module includes: a sending module configured to send a call request to a server, wherein the call request includes the target prompt word, the target prompt word being sent to the target model via the server; and a receiving module configured to receive the code from the target model via the server.
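A hypothetical sketch of assembling the target prompt word and of the call request relayed through the server follows; the endpoint and field names are assumptions rather than interfaces defined by this disclosure:

```typescript
// Assemble the target prompt word from user input, context information,
// prompt information, and the view component, then call the server that
// forwards it to the target model. All names here are invented.
interface PromptParts {
  userInput: string;
  context: string;                           // e.g. recent conversation turns
  promptInfo: unknown;                       // text/image prompt information
  viewComponent: "insertable-card" | "page";
}

function buildTargetPromptWord(p: PromptParts): string {
  return [
    `User input: ${p.userInput}`,
    `Context: ${p.context}`,
    `Prompt information: ${JSON.stringify(p.promptInfo)}`,
    `Render as: ${p.viewComponent}`,
    "Generate UI code that displays the prompt information in this component.",
  ].join("\n");
}

async function requestUiCode(p: PromptParts): Promise<string> {
  const res = await fetch("https://example.com/api/generate-ui", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ targetPromptWord: buildTargetPromptWord(p) }),
  });
  // The server relays the prompt to the target model and returns its code.
  const { code } = (await res.json()) as { code: string };
  return code;
}
```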
In some embodiments, the code indicates the style of the prompt information in the view component.
In some embodiments, the user interface generation module 830 includes: an adjustment module configured to adjust the code into code corresponding to a rendering framework; a build module configured to build the adjusted code to obtain a build product; and a first rendering module configured to render the build product using the rendering framework to obtain the target user interface.
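As a rough, non-authoritative sketch of the adjust-build-render pipeline (the framework interface and the fence-stripping adjustment are invented for illustration):

```typescript
// Adjust -> build -> render pipeline sketch. The RenderingFramework
// interface and the adjustment step are illustrative assumptions.
interface RenderingFramework {
  render(buildProduct: string): HTMLElement;
}

function adjustCode(code: string): string {
  // Adjust model output to what the framework expects, e.g. strip a
  // markdown fence that models often wrap generated code in.
  const fence = "`".repeat(3);
  let out = code.trim();
  if (out.startsWith(fence)) out = out.slice(out.indexOf("\n") + 1);
  if (out.endsWith(fence)) out = out.slice(0, out.lastIndexOf(fence));
  return out.trim();
}

function buildCode(adjusted: string): string {
  // Stand-in for a real build step (transpile, bundle, validate).
  return adjusted;
}

function generateTargetUi(code: string, fw: RenderingFramework): HTMLElement {
  const buildProduct = buildCode(adjustCode(code));
  return fw.render(buildProduct);
}
```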
In some embodiments, the first rendering module includes: a second rendering module configured to, in response to the view component being an insertable card, render the build product as a view node using the rendering framework to serve as the target user interface; and a third rendering module configured to, in response to the view component being a page, render the build product as a view tree using the rendering framework to serve as the target user interface.
In some embodiments, the second rendering module includes: an insertion module configured to insert the target user interface into a view tree of the current page in response to the view component being an insertable card; and a first display module configured to display the view tree after insertion.
In some embodiments, the third rendering module includes: an overlay module configured to overlay the current page with the target user interface in response to the view component being a page; and a second display module configured to display the target user interface.
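A compact sketch of the two rendering paths, using the DOM as a stand-in for the view tree described above (the container selector is an invented example):

```typescript
// Insert a card node into the current page's view tree, or overlay the
// current page entirely. "#card-container" is an invented selector.
function showTargetUi(
  ui: HTMLElement,
  component: "insertable-card" | "page",
): void {
  if (component === "insertable-card") {
    // Insert the rendered view node into the current page's view tree.
    document.querySelector("#card-container")?.appendChild(ui);
  } else {
    // Overlay the current page with the new view tree.
    document.body.replaceChildren(ui);
  }
}
```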
In some embodiments, the apparatus 800 further comprises: a detection module configured to detect the code to determine code for a style lacking in the user device; a ninth determination module configured to determine replacement code according to the code for the style lacking in the user device, wherein the replacement code is for a style possessed by the user device; and a modification module configured to modify the code using the replacement code to update the code.
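A hedged sketch of this style-replacement step, assuming a static fallback table and naive string substitution (both invented for illustration):

```typescript
// Replace styles the user device lacks with ones it supports. The
// fallback table and string replacement are illustrative assumptions.
const styleFallbacks: Record<string, string> = {
  "backdrop-filter: blur(8px)": "background: rgba(255, 255, 255, 0.8)",
};

function patchUnsupportedStyles(
  code: string,
  deviceSupports: (style: string) => boolean,
): string {
  let patched = code;
  for (const [missing, replacement] of Object.entries(styleFallbacks)) {
    if (patched.includes(missing) && !deviceSupports(missing)) {
      // Modify the code using the replacement code to update it.
      patched = patched.split(missing).join(replacement);
    }
  }
  return patched;
}
```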
In some embodiments, the apparatus 800 further comprises: a tenth determination module configured to determine, based on the user input, second prompt information for the user input; a fourth acquisition module configured to acquire second code for a second user interface, wherein the second code is generated by the target model based on the user input and the second prompt information; and a second generation module configured to generate the second user interface based on the second code, wherein the second user interface is different from the target user interface.
With the apparatus for generating a user interface according to embodiments of the present disclosure, rich prompt information is provided, and the target model is invoked to generate, according to the user input and the prompt information, code for the style of the prompt information, so that code for displaying a template can be generated dynamically. The generated code is transmitted to the user device, where a vivid and intuitive user interface is quickly rendered by the on-device rendering framework through a series of verification, optimization, and drawing operations. The template and the style of the prompt information do not need to be programmed into the user device in advance, which significantly shortens the development cycle and allows even developers with a weaker technical foundation to easily build complex interfaces. In addition, the generated user interface can be dynamically adjusted according to the user input, so that the content is rich and the presentation is attractive, greatly improving the user's experience of the application.
The division of modules or units in the embodiments of the present disclosure is merely a schematic division by logical function; other division manners are possible in actual implementation. In addition, the functional units in the disclosed embodiments may be integrated into one unit, may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in hardware or as software functional units.
FIG. 9 illustrates a block diagram of an example device 900 that can be used to implement embodiments of the present disclosure. It should be understood that the device 900 illustrated in fig. 9 is merely exemplary and should not be construed as limiting the functionality and scope of the implementations described herein. For example, device 900 may correspond to the user device described herein in connection with fig. 1 and may be used to perform the processes described above with reference to figs. 1 to 7. As another example, device 900 may correspond to the electronic device of the third aspect of the summary section.
As shown in fig. 9, device 900 takes the form of a general-purpose computing device. Components of device 900 may include, but are not limited to, one or more processors or processing units 910, a memory 920, a storage device 930, one or more communication units 940, one or more input devices 950, and one or more output devices 960. The processing unit 910 may be an actual or virtual processor and is capable of performing various processes according to programs stored in the memory 920. In a multiprocessor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of device 900.
Device 900 typically includes a number of computer storage media. Such media may be any available media accessible by device 900, including but not limited to volatile and non-volatile media, and removable and non-removable media. The memory 920 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. The storage device 930 may be removable or non-removable media and may include machine-readable media such as flash drives, magnetic disks, or any other media that can store information and/or data (e.g., training data for training) and that can be accessed within device 900.
Device 900 may further include additional removable/non-removable, volatile/nonvolatile storage media. Although not shown in fig. 9, a magnetic disk drive for reading from or writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data medium interfaces. Memory 920 may include a computer program product 925 having one or more program modules configured to perform various methods or acts of various implementations of the disclosure.
Communication unit 940 enables communication with other computing devices via a communication medium. Additionally, the functionality of the components of device 900 may be implemented in a single computing cluster or in multiple computing machines capable of communicating over a communication connection. Thus, device 900 may operate in a networked environment using logical connections to one or more other servers, a network personal computer (PC), or another network node.
The input device 950 may be one or more input devices such as a mouse, keyboard, or trackball. The output device 960 may be one or more output devices such as a display, speakers, or printer. As desired, device 900 may also communicate via communication unit 940 with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with device 900, or with any device (e.g., network card, modem, etc.) that enables device 900 to communicate with one or more other computing devices. Such communication may be performed via an input/output (I/O) interface (not shown).
According to an example implementation of the present disclosure, a computer-readable storage medium is provided having stored thereon computer-executable instructions that are executed by a processor to implement the method described above. According to an example implementation of the present disclosure, there is also provided a computer program product tangibly stored on a non-transitory computer-readable medium and comprising computer-executable instructions that are executed by a processor to implement the method described above. According to an example implementation of the present disclosure, a computer program product is provided on which a computer program is stored, the computer program, when executed by a processor, implementing the method described above.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus, devices, and computer program products implemented according to the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various implementations of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing has described implementations of the present disclosure. The foregoing description is exemplary, not exhaustive, and is not limited to the implementations disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terminology used herein was chosen to best explain the principles of each implementation, the practical application, or improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand each implementation disclosed herein.
Claims (16)
1. A method for generating a user interface, comprising:
determining, according to user input, prompt information for the user input;
acquiring code for a target user interface, the code being generated by a target model based on the user input and the prompt information; and
generating the target user interface according to the code, wherein the target user interface displays a style of the prompt information.
2. The method of claim 1, wherein determining, according to the user input, the prompt information for the user input comprises:
determining, by analyzing an intent of the user input, whether a target application programming interface needs to be invoked;
generating a target instruction in response to the target application programming interface not needing to be invoked, wherein the target instruction is used to retrieve the prompt information from a user device; and
in response to the target application programming interface needing to be invoked, acquiring the prompt information using the target application programming interface, wherein the prompt information is retrieved by the target application programming interface from a data side.
3. The method of claim 1, wherein determining, according to the user input, the prompt information for the user input comprises:
determining text prompt information according to the user input;
determining image prompt information according to the text prompt information; and
determining the text prompt information and the image prompt information as the prompt information.
4. The method of claim 1, further comprising:
determining a category of the user input by analyzing an intent of the user input;
determining an insertable card as a view component for presenting the prompt information in response to the user input belonging to a first category; and
determining a page as the view component for presenting the prompt information in response to the user input belonging to a second category.
5. The method of claim 4, wherein acquiring the code for the target user interface comprises:
acquiring context information of the user input;
determining a target prompt word according to the user input, the context information, the prompt information, and the view component; and
acquiring the code, wherein the code is generated by the target model based on the target prompt word.
6. The method of claim 5, wherein acquiring the code comprises:
sending a call request to a server, wherein the call request includes the target prompt word, the target prompt word being sent to the target model via the server; and
receiving the code from the target model via the server.
7. The method of claim 5, wherein the code indicates the style of the prompt information in the view component.
8. The method of claim 4, wherein generating the target user interface according to the code comprises:
adjusting the code into code corresponding to a rendering framework;
building the adjusted code to obtain a build product; and
rendering the build product using the rendering framework to obtain the target user interface.
9. The method of claim 8, wherein rendering the build product using the rendering framework to obtain the target user interface comprises:
in response to the view component being the insertable card, rendering the build product as a view node using the rendering framework to serve as the target user interface; and
in response to the view component being the page, rendering the build product as a view tree using the rendering framework to serve as the target user interface.
10. The method of claim 9, wherein rendering the build product as a view node using the rendering framework in response to the view component being the insertable card comprises:
inserting the target user interface into a view tree of a current page in response to the view component being the insertable card; and
displaying the view tree after insertion.
11. The method of claim 9, wherein rendering the build product as a view tree using the rendering framework in response to the view component being the page comprises:
overlaying a current page with the target user interface in response to the view component being the page; and
displaying the target user interface.
12. The method of claim 8, further comprising:
detecting the code to determine code for a style lacking in the user device;
determining replacement code according to the code for the style lacking in the user device, wherein the replacement code is for a style possessed by the user device; and
modifying the code using the replacement code to update the code.
13. The method of claim 1, further comprising:
determining, according to the user input, second prompt information for the user input;
acquiring second code for a second user interface, wherein the second code is generated by the target model based on the user input and the second prompt information; and
generating the second user interface according to the second code, wherein the second user interface is different from the target user interface.
14. An apparatus for generating a user interface, comprising:
a prompt information determination module configured to determine, according to user input, prompt information for the user input;
a code acquisition module configured to acquire code for a target user interface, the code being generated by a target model based on the user input and the prompt information; and
a user interface generation module configured to generate the target user interface according to the code, the target user interface displaying a style of the prompt information.
15. An electronic device, comprising:
at least one processing unit;
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method of any one of claims 1-13.
16. A computer program product having a computer program stored thereon, which, when executed by a processor, implements the method according to any of claims 1 to 13.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510772927.5A | 2025-06-10 | 2025-06-10 | Method, apparatus, device, medium and program product for generating user interface |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120540654A (en) | 2025-08-26 |
Family
ID=96788995
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510772927.5A | Method, apparatus, device, medium and program product for generating user interface | 2025-06-10 | 2025-06-10 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120540654A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |