CN117708347B - Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint - Google Patents
Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint
- Publication number
- CN117708347B (application number CN202311717511.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- large model
- image
- api
- score
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Animal Behavior & Ethology (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application relates to the technical field of data fusion, and in particular to a method for outputting multimodal results from a large model based on API (application program interface) endpoints. The user does not need professional charting knowledge; by interacting with the large model, customized pictures can be generated according to the user's own requirements. In particular, during the picture fine-tuning stage, the real-time preview capability of the large model provides a more intuitive and convenient interactive experience, allowing the user to control the drawing process more finely, while the analytical capability of the large model can also be used to analyze the data.
Description
Technical Field
The invention relates to the technical field of data fusion, and in particular to a method for outputting multimodal results from a large model based on an API (application program interface) endpoint.
Background
The knowledge graph generation approach fuses data of different modalities into a knowledge graph by associating the multimodal data with entities, attributes and relations in the graph. For example, an image can be associated with an entity in the graph and its image features stored as entity attributes. Features are extracted from multimodal data to enrich the information in the graph, using techniques such as computer vision, natural language processing and audio processing to obtain meaningful features from images, text or audio. In the knowledge graph, multimodal data is combined with the graph structure by associating and linking the entities, attributes and relations of data from different modalities. In this way, an entity in the graph can contain multimodal data at the same time, and the associations and relationships between different modalities can be modeled and represented.
Transformer-based models can be used to generate multimodal results; although originally designed for natural language processing tasks, they have been extended to multimodal scenarios such as images, audio and video. The Transformer uses a self-attention mechanism to capture context in the input sequence: it encodes the inputs with global context awareness by computing association weights between each input position and the other positions. The encoder is responsible for encoding the input sequence, and the decoder generates an output sequence based on the encoder output and the context information. The Transformer captures different semantic information by introducing multiple attention heads; each attention head can focus on a different part of the input sequence and provide diverse feature representations. To preserve positional information in the sequence, the Transformer introduces positional encoding, embedding each position of the sequence into the feature representation. When generating multimodal results, a Transformer can be applied to the data encoding and decoding process of each modality. For example, for a multimodal generation task over images and text, the images and the text can each be taken as input sequences and encoded and decoded with a Transformer model to generate a multimodal result.
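As background illustration only, a minimal NumPy sketch of the scaled dot-product self-attention described above is given below; it illustrates the general mechanism and is not part of the claimed method.
import numpy as np

def self_attention(x, wq, wk, wv):
    # x: (seq_len, d_model); wq, wk, wv: projection matrices for queries, keys, values
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])                   # association weights between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                                        # context-aware representation

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))                                   # 5 positions, model dimension 8
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, wq, wk, wv).shape)                    # (5, 8)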
The generation of multimodal results from existing knowledge graphs depends on the completeness and quality of the input data. If the coverage of the multimodal data is narrow, or if the data is noisy or erroneous, the accuracy and completeness of the generated multimodal results may be compromised. In a knowledge graph, associating and linking multimodal data with entities, attributes and relations is a critical step. However, for complex multimodal data it may be difficult to establish accurate associations and links, especially when the amount of data is large, semantic similarity is low, or heterogeneous data sources are involved.
Disclosure of Invention
In view of the above problems with generating multimodal results from existing knowledge graphs, the invention provides a method for outputting multimodal results from a large model based on API endpoints. The specific process is as follows: acquire an input text, classify the sensitivity level of the input text, and create a drawing option API endpoint according to the sensitivity level; call the drawing option API through the large model to draw a picture; acquire an adjustment text, have the large model transmit the adjustment text to the drawing option API endpoint, and draw the picture again; the large model then logically reasons over the input text and the adjustment text and outputs a functional result in combination with the generated pictures.
Creating the drawing option API endpoint specifically includes: 1) selecting a back-end framework and building the API; the back-end framework can be any one of Django, Flask, Express.js, Ruby on Rails, Spring Boot and ASP.NET; 2) defining routes, including one or more routes to process image generation requests, each route corresponding to a different endpoint and each endpoint serving a different type of image generation request; 3) parsing the request parameters sent from the front end and custom-generating the image according to those parameters.
The large model transmitting the adjustment text to the drawing option API endpoint specifically includes: 1) parsing the request parameters sent from the front end; the parameters include color, position and title; 2) custom-generating images according to the parameters; 3) selecting different drawing libraries according to the parsed parameters, generating the image with the selected drawing library, and saving the image. The URL of the custom-generated image, or the image data itself, is returned to the front end as the response; if the image is stored on the server, the URL of the image is returned; if the image data needs to be embedded in the response, the image data can be encoded in Base64 format and returned in a JSON response.
On the front end, the back-end API endpoint can be invoked by issuing a POST request; the front end receives the image URL or image data in the response and then displays the image on the user interface.
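As an illustration only, a minimal client-side sketch of such a POST request follows; the endpoint URL and field names are assumptions made for this sketch and are not fixed by the method.
import requests

payload = {
    "x": ["08-20", "08-21", "08-22"],
    "y": [45, 34, 32],
    "color": "blue",
    "position": "center",
    "title": "PM2.5 concentration",
    "x_axis_title": "time",
    "y_axis_title": "concentration",
}
# Call the back-end drawing option API endpoint with a POST request
resp = requests.post("http://example.com/generate_chart", json=payload, timeout=10)
resp.raise_for_status()
body = resp.json()
# The response carries either a URL to fetch and display, or Base64-encoded image data
print(body.get("image_url") or body.get("image_base64"))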
Classifying the sensitivity level of the input text comprises calculating a comprehensive sensitivity score using a weighted average method: comprehensive sensitivity score = (PII quantity score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3).
Here the PII quantity score ranges from 0 to 1, where 1 means that all of the data contains PII; PII denotes information that can be used to uniquely identify, contact or locate a person. The field sensitivity score ranges from 0 to 1, where 1 means that all fields are highly sensitive. The data access score ranges from 0 to 1, where 1 means that the data is frequently and widely accessed. Weight 1, weight 2 and weight 3 are defined according to the needs and policies of the organization and determine the importance of the different indicators. The score range is divided into four classes: non-sensitive (0-0.3), low sensitivity (0.31-0.6), medium sensitivity (0.61-0.8) and high sensitivity (0.81-1).
The application realizes the conversion of the text description and the selected options entered by a user into an actual image. The user does not need professional charting knowledge; through interaction with the large model, customized pictures can be generated according to the user's own requirements. In particular, during the picture fine-tuning stage, the real-time preview capability of the large model provides a more intuitive and convenient interactive experience, allowing the user to control the drawing process more finely, while the analytical capability of the large model can also be used to analyze the data.
The beneficial effects of the application are as follows. By utilizing large models and multimodal data, information from multiple data types (e.g., images, text, audio) can be synthesized, making the results richer and more diverse and able to present the data fully from different perspectives. Enhanced modality fusion capability: generating the multimodal result by combining the large model with the API endpoint allows the feature information of different modal data to be fused better; the large model has strong feature extraction and representation capability and can better capture the associations and shared information between modalities, thereby improving the effect of modality fusion. Accelerated generation process: by using the API endpoint to generate the result, the computing resources and parallel processing capability of a cloud computing platform can be fully utilized, so that the multimodal result can be obtained more quickly, improving generation efficiency and real-time performance.
Drawings
FIG. 1 is a schematic diagram of an example interface;
FIG. 2 is a schematic diagram of data feedback to the user;
FIG. 3 is a schematic diagram of picture drawing;
FIG. 4 is a schematic diagram of picture fine-tuning.
Detailed Description
In order to make the technical scheme of the present application better understood by those skilled in the art, the invention is described in further detail below with reference to the figures and the preferred embodiments.
Step one: construct the user interface. An input interface is provided for the user, comprising a text input interface responsible for accepting text or numeric input from the user, and an option selection interface where the user can select a specific type of chart to draw, such as a histogram, a line graph or a pie chart. An example of the interface is shown in FIG. 1.
Step two: data feedback to the user. If the user has directly entered data in the text input interface, go to step three. If the user has not entered specific data, then after the user enters a question, the large model displays the queried data to the user so that the user can decide which data to use for drawing, and the user enters the selected data through the text input interface. This process may be repeated until the user selects specific data.
As an example, a user enters in the text input interface: "plot the concentration change of Beijing PM2.5". The large model returns the Beijing PM2.5 concentration for the last 7 days and the PM2.5 concentration for each hour of the last 24 hours. The user selects the Beijing PM2.5 concentration for the last 7 days as the drawing data and enters "plot the Beijing PM2.5 concentration for the last 7 days" in the text input interface.
Meanwhile, the sensitivity level of the data is classified according to the specific data entered by the user, and this sensitivity level is taken into account when the data is queried. By assigning different sensitivity levels to different types of data, the system can restrict the user's access rights to particular sensitive data based on these levels, ensuring that sensitive information is not accessed by unauthorized users. Data classification and sensitivity levels are defined as follows. First, the system classifies the data and assigns a corresponding sensitivity level to each category. The classification of data may be based on its content, type, privacy properties, and so on. For example, personal identity information, financial data and medical records are generally considered highly sensitive data, while published news articles or general statistics may have very low sensitivity.
Second, a system administrator or data manager defines the individual sensitivity levels and ensures that all relevant personnel know the meaning of these levels. Typically, sensitivity levels are divided into multiple tiers, for example: non-sensitive, low sensitivity, medium sensitivity and high sensitivity. When defining sensitivity levels, legal regulations, industry standards and the internal policies of the organization need to be considered to ensure compliance. The system implements strict access control policies and limits users' access rights to data of different sensitivity levels according to the users' identities, roles and requirements, ensuring that a user must log into the system through valid authentication (e.g., user name and password, multi-factor authentication).
Users are assigned to different roles or permission sets, each role having specific data access permissions. For example, a non-sensitive data role, a low-sensitivity data role, a medium-sensitivity data role and a high-sensitivity data role may be created. When a user initiates a query request, the system determines whether the user has permission to query data of a particular sensitivity level based on the user's role and the comprehensive sensitivity score. If the user's level meets the level requirement of the data, the query is allowed; otherwise, the query request is denied.
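A minimal sketch of such a role-based check follows; the role names and level ordering are illustrative assumptions, not values mandated by the method.
# Map each role to the highest sensitivity level it may query (illustrative values)
ROLE_MAX_LEVEL = {
    "non_sensitive_role": "non-sensitive",
    "low_sensitive_role": "low",
    "medium_sensitive_role": "medium",
    "high_sensitive_role": "high",
}
LEVEL_ORDER = ["non-sensitive", "low", "medium", "high"]

def may_query(role: str, data_level: str) -> bool:
    # Allow the query only if the role's clearance covers the data's sensitivity level
    clearance = ROLE_MAX_LEVEL.get(role, "non-sensitive")
    return LEVEL_ORDER.index(clearance) >= LEVEL_ORDER.index(data_level)

print(may_query("medium_sensitive_role", "low"))   # True: query is allowed
print(may_query("low_sensitive_role", "high"))     # False: query request is denied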
Sensitivity calculation:
For a set of data, the sensitivity calculation considers the following three factors.
(1) PII quantity score (range: 0 to 1; 1 indicates that all data contains PII).
(2) Field sensitivity score (range: 0 to 1; 1 indicates that all fields are highly sensitive).
(3) Data access score (range: 0 to 1; 1 indicates that the data is frequently and widely accessed).
The comprehensive sensitivity score is calculated using a weighted average method:
comprehensive sensitivity score = (PII quantity score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3).
PII (Personally Identifiable Information) refers to information that can be used to uniquely identify, contact or locate a person.
In this formula, the weights (weight 1, weight 2, weight 3) are defined according to the needs and policies of the organization and are used to determine the importance of the different indicators. For example, if the amount of PII is critical to the organization, weight 1 may be set relatively high.
The score range is divided into four classes: non-sensitive (0-0.3), low sensitivity (0.31-0.6), medium sensitivity (0.61-0.8) and high sensitivity (0.81-1).
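A minimal sketch of this scoring and level mapping follows; the three weights are example values an organization might choose, not values fixed by the method.
def sensitivity_level(pii_score, field_score, access_score, w1=0.5, w2=0.3, w3=0.2):
    # Weighted average of the three indicators, then mapped to one of the four classes
    score = pii_score * w1 + field_score * w2 + access_score * w3
    if score <= 0.3:
        return score, "non-sensitive"
    if score <= 0.6:
        return score, "low"
    if score <= 0.8:
        return score, "medium"
    return score, "high"

print(sensitivity_level(0.9, 0.7, 0.4))  # approximately (0.74, 'medium')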
Step three: create the drawing option API endpoint. The drawing option API endpoint is designed according to the provided selection options. Parameters used for drawing the picture, such as colors, object positions, the picture title and the axis titles, are included in the drawing option API endpoint and are used for subsequent fine-tuning of the picture.
Selecting a back-end framework: first, a back-end framework is selected to build the API. Common choices include Django and Flask (Python), Express.js (Node.js), Ruby on Rails (Ruby), Spring Boot (Java), ASP.NET (C#), and so on.
Defining routes: one or more routes are defined to process image generation requests, such as @app.route('/generate_chart', methods=['POST']). These routes correspond to different endpoints, each for a different type of image generation request. In the handler function, the request parameters sent from the front end are parsed. These parameters include information such as 'color', 'position' and 'title', from which the image is custom-generated. Based on the parsed parameters, the image is generated using the selected chart library (e.g., Matplotlib, Plotly, D3.js). Different chart libraries have different generation methods and APIs, so the image is generated according to the selected library.
Example of a route defined in Flask:
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/generate_chart', methods=['POST'])
def generate_chart():
    # Process an image generation request
    # Parse the parameters sent from the front end
    data = request.json
    x = data['x']
    y = data['y']
    color = data['color']
    position = data['position']  # parsed per the description; not used in this minimal example
    # ... other parameters are parsed in the same way
    # Generate the image with a chart library (Matplotlib is used as the example here)
    import matplotlib.pyplot as plt
    plt.plot(x, y, color=color)
    plt.title(data['title'])
    plt.xlabel(data['x_axis_title'])
    plt.ylabel(data['y_axis_title'])
    # Save the image to the server
    plt.savefig('chart.png')
    plt.close()
    # Return the image URL or the image data as the response
    return jsonify({'image_url': 'http://example.com/chart.png'})
Saving the image: once the image is generated, it is saved to a designated directory on the server so that it can subsequently be accessed via a URL, such as 'http://example.com/chart.png'.
The URL of the generated image, or the image data itself, is returned to the front end as the response. If the image is stored on the server, the URL of the image is returned; if the image data is to be embedded in the response, the image data can be encoded in Base64 format and returned in the JSON response. In the front-end application, the back-end API endpoint is invoked by issuing a POST request; the front end receives the image URL or image data in the response and then displays the image on the user interface.
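A minimal sketch of the two response variants follows; it is intended to be used inside the Flask route above, and the file name follows the earlier example.
import base64

def chart_response(path='chart.png', embed=False):
    # Returning a dict from a Flask view is serialized into a JSON response
    if not embed:
        # Image stored on the server: return its URL
        return {'image_url': 'http://example.com/chart.png'}
    # Image data embedded in the response: encode it as Base64
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    return {'image_base64': encoded}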
Step four: draw the picture. The drawing option API is called through the large model to draw the picture.
The user selects the Beijing PM2.5 concentration for the last 7 days, selects a line graph in the option selection interface, and the large model calls the line graph drawing option API to draw the picture.
Step five: fine-tune the picture. The user enters the parameters of the picture to be adjusted, such as colors and object positions, to the large model through the text input interface; the large model transmits these parameters to the drawing option API endpoint, which redraws the picture and presents it to the user. The user may make further adjustments based on the real-time preview until satisfied.
User input: the color of the line is red, the chart is titled "Beijing PM2.5 concentration change over the last 7 days", the abscissa is titled "time", and the ordinate is titled "concentration". The large model calls the drawing option API again to draw the picture.
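As an illustration, a hypothetical adjustment request that the large model could forward to the drawing option API endpoint for this step is sketched below; the field names mirror the endpoint sketch above, and the date-to-value pairing is an assumption for the example.
import requests

adjustment = {
    "x": ["2023-08-20", "2023-08-21", "2023-08-22", "2023-08-23",
          "2023-08-24", "2023-08-25", "2023-08-26"],
    "y": [45, 34, 32, 56, 67, 47, 48],
    "color": "red",
    "title": "Beijing PM2.5 concentration change over the last 7 days",
    "x_axis_title": "time",
    "y_axis_title": "concentration",
}
# Redraw the picture with the adjusted parameters and show the new preview URL
resp = requests.post("http://example.com/generate_chart", json=adjustment, timeout=10)
print(resp.json().get("image_url"))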
Step six: data analysis. The large model logically reasons over the data entered by the user, combines the data with its knowledge, and outputs a functional result together with the generated picture.
The large model analyzes the data trend over the 7 days:
It can be seen that there was some fluctuation in the PM2.5 concentration over the last 7 days, but no significant continuous upward or downward trend. The data varied between 32 and 67.
Average concentration: the mean PM2.5 concentration over the last 7 days is (45+34+32+56+67+47+48)/7 ≈ 47.0.
Peak and valley: the data shows that the peak PM2.5 concentration over the 7 days is 67, occurring on 2023-08-24, and the lowest value is 32, occurring on 2023-08-22.
Volatility: the amplitude of change in the data is relatively small; the difference between the maximum and minimum values is 67-32=35, indicating that the fluctuation in PM2.5 concentration during this period is relatively small.
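A quick check of these statistics, assuming the seven listed values correspond to 2023-08-20 through 2023-08-26:
values = [45, 34, 32, 56, 67, 47, 48]   # PM2.5 readings for the 7 days
mean = sum(values) / len(values)
peak, valley = max(values), min(values)
print(round(mean, 2))    # 47.0  (average concentration)
print(peak, valley)      # 67 32 (peak on 2023-08-24, lowest on 2023-08-22)
print(peak - valley)     # 35    (difference between maximum and minimum)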
Claims (4)
1. A method for outputting multimodal results from a large model based on API endpoints, comprising:
acquiring an input text, and classifying the sensitivity level of the input text;
Creating a drawing option API endpoint according to the sensitivity level, which specifically comprises:
1) Selecting a back-end framework and building an API; the back-end framework is any one of Django, Flask, Express.js, Ruby on Rails, Spring Boot and ASP.NET;
2) Defining routes, including one or more routes to process image generation requests, each route corresponding to a different endpoint and each endpoint serving a different type of image generation request;
3) Parsing the request parameters sent from the front end, and custom-generating an image according to the parameters;
calling a drawing option API through the large model to draw pictures; the method specifically comprises the following steps:
1) Parsing the request parameters sent from the front end; the parameters include color, position and title;
2) Generating custom images according to the parameters;
3) Selecting different drawing libraries according to the analyzed parameters, generating images by using the selected drawing libraries, and storing the images;
acquiring an adjustment text, transmitting the adjustment text to an API endpoint of a drawing option by the large model, and drawing the picture again;
and the large model logically reasoning over the input text and the adjustment text, and outputting a functional result in combination with the generated pictures.
2. The method for outputting multimodal results from a large model based on API endpoints as claimed in claim 1, further comprising returning the URL of the custom-generated image, or the image data, to the front end as a response; if the image is stored on the server, returning the URL of the image; if the image data is to be embedded in the response, encoding the image data in Base64 format and returning it in a JSON response;
on the front end, the back-end API endpoint is invoked by issuing a POST request, the image URL or image data is received in the response, and the image is then displayed on the user interface.
3. The method for outputting multimodal results from a large model based on API endpoints as claimed in claim 1, wherein classifying the sensitivity level of the input text comprises:
calculating the comprehensive sensitivity score using a weighted average method:
comprehensive sensitivity score = (PII quantity score × weight 1) + (field sensitivity score × weight 2) + (data access score × weight 3);
wherein the PII quantity score ranges from 0 to 1, where 1 means that all of the data contains PII, and PII represents information that can be used to uniquely identify, contact or locate a person; the field sensitivity score ranges from 0 to 1, where 1 means that all fields are highly sensitive; the data access score ranges from 0 to 1, where 1 means that the data is frequently and widely accessed; weight 1, weight 2 and weight 3 are defined according to the needs and policies of the organization and determine the importance of the different indicators; the score range is divided into four classes: non-sensitive (0-0.3), low sensitivity (0.31-0.6), medium sensitivity (0.61-0.8) and high sensitivity (0.81-1).
4. A system for outputting multimodal results from a large model based on API endpoints, comprising: a memory and a processor; the memory has stored thereon a computer program which, when executed by the processor, implements the method for outputting multimodal results from a large model based on API endpoints according to any one of claims 1 to 3.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311717511.0A CN117708347B (en) | 2023-12-14 | 2023-12-14 | Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117708347A (en) | 2024-03-15 |
| CN117708347B (en) | 2024-08-20 |
Family
ID=90149247
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311717511.0A Active CN117708347B (en) | 2023-12-14 | 2023-12-14 | Method and system for outputting multi-mode result by large model based on API (application program interface) endpoint |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117708347B (en) |
Citations (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117114112A (en) * | 2023-10-16 | 2023-11-24 | 北京英视睿达科技股份有限公司 | Vertical field data integration method, device, equipment and medium based on large model |
Family Cites Families (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190279075A1 (en) * | 2018-03-09 | 2019-09-12 | Nvidia Corporation | Multi-modal image translation using neural networks |
| CN111063006A (en) * | 2019-12-16 | 2020-04-24 | 北京亿评网络科技有限公司 | Image-based method, apparatus, device and storage medium for generating literary works |
| CN113742460B (en) * | 2020-05-28 | 2024-03-29 | 华为技术有限公司 | Method and device for generating virtual characters |
| US11461681B2 (en) * | 2020-10-14 | 2022-10-04 | Openstream Inc. | System and method for multi-modality soft-agent for query population and information mining |
| US11769018B2 (en) * | 2020-11-24 | 2023-09-26 | Openstream Inc. | System and method for temporal attention behavioral analysis of multi-modal conversations in a question and answer system |
| CN113762237B (en) * | 2021-04-26 | 2023-08-18 | 腾讯科技(深圳)有限公司 | Text image processing method, device, equipment and storage medium |
| US20230306131A1 (en) * | 2022-02-15 | 2023-09-28 | Qohash Inc. | Systems and methods for tracking propagation of sensitive data |
| CN116186312A (en) * | 2022-12-29 | 2023-05-30 | 北京霍因科技有限公司 | Multi-mode data enhancement method for data sensitive information discovery model |
| CN116932708A (en) * | 2023-04-18 | 2023-10-24 | 清华大学 | Open domain natural language reasoning question answering system and method driven by large language model |
| CN116881462A (en) * | 2023-07-31 | 2023-10-13 | 阿里巴巴(中国)有限公司 | Text data processing, text representation and text clustering method and equipment |
| CN116992010A (en) * | 2023-08-02 | 2023-11-03 | 无知(北京)智慧科技有限公司 | Content distribution and interaction method and system based on multi-mode large model |
| CN117057318A (en) * | 2023-08-17 | 2023-11-14 | 亚信科技(中国)有限公司 | Domain model generation method, device, equipment and storage medium |
| CN117152573A (en) * | 2023-08-30 | 2023-12-01 | 杭州码全信息科技有限公司 | Transformer and data enhancement based network media multi-mode information extraction method |
| CN116994069B (en) * | 2023-09-22 | 2023-12-22 | 武汉纺织大学 | Image analysis method and system based on multi-mode information |
- 2023-12-14: application CN202311717511.0A filed in CN; granted as CN117708347B (status: active)
Also Published As
| Publication number | Publication date |
|---|---|
| CN117708347A (en) | 2024-03-15 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||