
CN113902018B - Image sample generation method, device, computer readable medium and electronic device - Google Patents

Image sample generation method, device, computer readable medium and electronic device

Info

Publication number
CN113902018B
CN113902018B (application CN202111187073.2A)
Authority
CN
China
Prior art keywords
sample
image
image sample
samples
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111187073.2A
Other languages
Chinese (zh)
Other versions
CN113902018A (en)
Inventor
成武超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority: CN202111187073.2A
Publication of CN113902018A
Application granted
Publication of CN113902018B
Legal status: Active

Classifications

    • G PHYSICS
      • G06 COMPUTING OR CALCULATING; COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
              • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
                • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application belongs to the field of artificial intelligence technology, and specifically relates to an image sample generation method, device, computer-readable medium, and electronic device. In the technical solution provided by the embodiments, an image sample library file is obtained, the sample code values of the image samples in the library file are extracted, an image is rendered from each sample code value using an off-screen canvas algorithm, and the image is then converted into a graphic feature by an image data acquisition algorithm. This method improves the efficiency of image sample collection. Moreover, after an image sample is collected, it is cropped and scaled to suit convolutional training, improving both the accuracy of image recognition and the efficiency of sample convolution training, which benefits the development of image recognition technology.

Description

Image sample generation method, device, computer readable medium and electronic device
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an image sample generation method, an image sample generation device, a computer readable medium and an electronic device.
Background
With the continuous development of artificial intelligence, image recognition technology has advanced rapidly. Image recognition methods require a convolutional neural network: a model is formed by collecting a large number of samples and performing convolutional training, and image recognition is then carried out with the trained model.
However, the generation of existing image samples suffers from the following problems. First, the existing acquisition process is cumbersome: screenshots must be taken from web pages or pictures downloaded from an image library, so acquisition efficiency is low. Second, existing image samples are fed into the convolution model for training without any processing, so the training cannot adapt to stretched or deformed image samples; this reduces the accuracy of image recognition, increases the computational load of convolutional training, lowers training efficiency, and hinders the development of image recognition technology.
It should be noted that the information disclosed in the above background section is only for enhancing understanding of the background of the application and thus may include information that does not form the prior art that is already known to those of ordinary skill in the art.
Disclosure of Invention
The application aims to provide an image sample generation method, an image sample generation device, a computer readable medium and an electronic device, which overcome, at least to some extent, the problems of low sample collection efficiency and low sample training accuracy in the related art.
Other features and advantages of the application will be apparent from the following detailed description, or may be learned by the practice of the application.
According to an aspect of an embodiment of the present application, there is provided an image sample generation method including:
Obtaining an image sample library file, wherein the image sample library file comprises a sample coding value set of image samples, and the sample coding value comprises a hexadecimal file used for representing the image samples;
Extracting a sample coding value of the image sample, and rendering the image sample corresponding to the sample coding value by using an off-screen canvas algorithm based on the sample coding value to obtain a first sample characteristic;
Converting the first sample feature into a graphic feature by using an image data acquisition algorithm, and cutting and scaling the graphic feature to obtain a second sample feature, wherein the image data acquisition algorithm is used for acquiring pixel data of the first sample feature;
and encoding the second sample characteristic through an encoding algorithm and then outputting the second sample characteristic to obtain a convolution image sample.
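The four steps above can be sketched end to end. The following is a hedged, self-contained Python sketch under stated assumptions: the off-screen canvas rendering is replaced by a stub that returns a bitmap (no canvas exists outside a browser), cropping and scaling are shown separately below, and all function names are illustrative rather than taken from the patent.

```python
import base64
import json

def render_from_code(code_value, size=4):
    """Stub for the off-screen canvas step: derive a bitmap from a code value."""
    seed = int(code_value, 16)
    return [[(seed + r + c) % 2 for c in range(size)] for r in range(size)]

def generate_convolution_sample(library_file, name):
    # Steps 1-2: look up the sample code value and render the first sample feature.
    code = json.loads(library_file)[name]
    first_feature = render_from_code(code)
    # Step 3: treat the rendered bitmap's pixel data as the graphic feature
    # (the crop-and-scale sub-steps are omitted in this overview).
    second_feature = first_feature
    # Step 4: encode the feature and output a convolution image sample.
    raw = bytes(p for row in second_feature for p in row)
    return base64.b64encode(raw).decode("ascii")
```

For example, `generate_convolution_sample('{"right-arrow": "0x01"}', "right-arrow")` returns a base64 string encoding the 16 rendered pixels.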
In some embodiments of the present application, based on the above technical solution, the method for clipping and scaling the graphic feature to obtain the second sample feature includes:
Traversing pixels of the graphic feature, obtaining a minimum non-blank rectangular area of the graphic feature, and expanding the minimum non-blank rectangular area into a non-blank square area;
And cutting and scaling the graphic features according to the non-blank square areas to obtain second sample features.
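The two steps above can be sketched as follows. This is a minimal illustration in plain Python (images as lists of lists, 0 meaning a blank pixel, nearest-neighbour scaling); the function names are invented for the example.

```python
def min_nonblank_square(img):
    """Return (top, left, size) of the smallest square covering all non-blank
    (non-zero) pixels, expanded from the minimal non-blank rectangle."""
    rows = [r for r, row in enumerate(img) if any(row)]
    cols = [c for c in range(len(img[0])) if any(row[c] for row in img)]
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    h, w = bottom - top + 1, right - left + 1
    size = max(h, w)
    # Centre the shorter side inside the square, clamped to the image bounds.
    top = max(0, min(top - (size - h) // 2, len(img) - size))
    left = max(0, min(left - (size - w) // 2, len(img[0]) - size))
    return top, left, size

def crop_and_scale(img, out_size):
    """Crop to the non-blank square, then nearest-neighbour scale to out_size."""
    top, left, size = min_nonblank_square(img)
    return [[img[top + r * size // out_size][left + c * size // out_size]
             for c in range(out_size)]
            for r in range(out_size)]
```

Expanding the rectangle to a square before scaling is what prevents the stretch deformation the background section describes.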
In some embodiments of the present application, based on the above technical solution, based on the sample coding value, rendering an image sample corresponding to the sample coding value by using an off-screen canvas algorithm to obtain a first sample feature, including:
Calling an image sample mapping table by using an off-screen canvas algorithm, and acquiring the image sample symbol information corresponding to the sample coding value from the image sample mapping table;
and drawing the symbol information of the image sample onto a canvas by using an off-screen canvas algorithm to obtain a first sample characteristic.
In some embodiments of the present application, based on the above technical solutions, before the image sample library file is acquired, the method further includes constructing the image sample library file, where the method for constructing the image sample library file includes:
establishing a mapping relation among image sample pixel information, image sample symbol information and sample coding values of the image samples to form an image sample mapping table;
binding the sample coding value of the image sample with the name of the image sample to form a key value pair;
and storing the key value pairs into an array to form an image sample library file.
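The three construction steps above can be sketched as a small Python function; the tuple layout of the input samples is an assumption made for the example, and the symbol characters and code values used in the test are invented.

```python
import json

def build_sample_library(samples):
    """samples: iterable of (name, code_value, symbol, pixels) tuples.

    Returns the image sample mapping table and the sample library file
    (key-value pairs stored in an array, serialised as JSON)."""
    # Step 1: map pixel info and symbol info to each sample code value.
    mapping_table = {code: {"symbol": symbol, "pixels": pixels}
                     for name, code, symbol, pixels in samples}
    # Step 2: bind each sample code value to the image sample's name.
    key_value_pairs = [{name: code} for name, code, _s, _p in samples]
    # Step 3: store the key-value pairs in an array to form the library file.
    return mapping_table, json.dumps(key_value_pairs)
```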
In some embodiments of the present application, based on the above technical solution, after the second sample feature is encoded by an encoding algorithm and then output, the method further includes:
Traversing the image samples in the image sample library file, and converting the image samples into convolution image samples;
converging the convolution image samples to obtain a convolution image sample set;
And converting the convolution image sample set into a json format file.
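These three steps amount to a map over the library followed by JSON serialisation. A small sketch, assuming an invented `convert` callback in place of the full sample-generation pipeline:

```python
import json

def to_convolution_sample_set(library, convert):
    """Traverse the image samples in `library`, convert each into a
    convolution image sample, gather them into a sample set, and
    serialise the set as a json format string."""
    sample_set = [convert(name, code) for name, code in library.items()]
    return json.dumps(sample_set)
```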
In some embodiments of the present application, based on the above technical solution, after converting the convolved image sample set into a json format file, the method further includes:
the json format file is scattered by using a scattering algorithm to form a random sample set;
selecting the random sample set, taking seventy percent of image samples in the random sample set as training samples, twenty percent of image samples in the random sample set as verification samples, and ten percent of image samples in the random sample set as test samples;
Training a preset convolutional neural network model by taking the training samples as input samples and the corresponding sample labels as output samples, to obtain an image recognition model; inputting the verification samples and the test samples into the image recognition model, and adjusting the coefficients of the image recognition model according to the known sample labels of the verification and test samples and the predicted sample labels output by the model, so that the predicted labels are consistent with the known labels.
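The scattering and 70/20/10 split described above can be sketched as follows; the function name and fixed seed are illustrative choices, not from the patent.

```python
import random

def scatter_and_split(samples, seed=0):
    """Shuffle ("scatter") the sample set, then split it 70/20/10 into
    training, verification and test subsets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```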
According to an aspect of an embodiment of the present application, there is provided an image recognition method including:
Acquiring an image sample to be identified;
Cutting and scaling the image sample to be identified to obtain sample characteristics to be identified;
And inputting the sample characteristics to be identified into the image identification model to be identified, and obtaining the label of the image sample to be identified.
According to an aspect of an embodiment of the present application, there is provided an image sample generation apparatus including:
an acquisition module configured to acquire an image sample library file comprising a set of sample encoded values for an image sample, the sample encoded values comprising hexadecimal files representing the image sample;
the feature extraction module is configured to extract a sample code value of the image sample, and render the image sample corresponding to the sample code value by using an off-screen canvas algorithm based on the sample code value to obtain a first sample feature;
The feature processing module is configured to convert the first sample feature into a graphic feature by using an image data acquisition algorithm, and cut and scale the graphic feature to obtain a second sample feature, wherein the image data acquisition algorithm is used for acquiring pixel data of the first sample feature;
And the output module is configured to encode the second sample characteristic through an encoding algorithm and then output the second sample characteristic to obtain a convolution image sample.
In some embodiments of the present application, based on the above technical solution, the feature processing module includes:
a region acquisition unit configured to traverse pixels of the graphic feature, acquire a minimum non-blank rectangular region of the graphic feature,
An expansion unit configured to expand the minimum non-blank rectangular area into a non-blank square area;
And the clipping and scaling unit is configured to clip and scale the graphic features according to the non-blank square area to obtain second sample features.
In some embodiments of the present application, based on the above technical solution, the feature extraction module includes:
The symbol acquisition unit is configured to call an image sample mapping table by using an off-screen canvas algorithm, and acquire the image sample symbol information corresponding to the sample coding value from the image sample mapping table;
And the drawing unit is configured to draw the image sample symbol information onto a canvas by using an off-screen canvas algorithm to obtain a first sample characteristic.
In some embodiments of the present application, based on the above technical solution, the image sample generating device further includes a sample library construction unit including:
The sample mapping unit is configured to establish a mapping relation among image sample pixel information, image sample symbol information and sample coding values of the image samples to form an image sample mapping table;
a binding unit configured to bind the sample code value of the image sample with the name of the image sample to form a key value pair;
and the storage unit is configured to store the key value pairs into an array to form an image sample library file.
In some embodiments of the present application, based on the above technical solutions, the image sample generating device further includes a sample processing module, the sample processing module including:
a traversing unit configured to traverse the image samples in the image sample library file, converting the image samples into convolution image samples;
The aggregation unit is configured to aggregate the convolution image samples to obtain a convolution image sample set;
and a format conversion unit configured to convert the convolution image sample set into a json format file.
In some embodiments of the present application, based on the above technical solutions, the sample processing module further includes:
the scattering unit is configured to scatter the json format file by using a scattering algorithm to form a random sample set;
a sample selection unit configured to select the random sample set, taking seventy percent of the image samples in the random sample set as training samples, twenty percent of the image samples in the random sample set as verification samples, and ten percent of the image samples in the random sample set as test samples;
The model training unit is configured to train a preset convolutional neural network model by taking the training samples as input samples and the corresponding sample labels as output samples, to obtain an image recognition model; it inputs the verification samples and the test samples into the image recognition model and adjusts the model's coefficients according to the known sample labels of the verification and test samples and the predicted sample labels output by the model, so that the predicted labels are consistent with the known labels.
According to an aspect of an embodiment of the present application, there is provided an image recognition apparatus including:
A sample acquisition module configured to acquire an image sample to be identified;
The sample feature extraction module is configured to cut and scale the image sample to be identified to obtain sample features to be identified;
The identification module is configured to input the sample features to be identified into the image recognition model for identification, so as to obtain the label of the image sample to be identified.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements an image sample generation method as in the above technical solution.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image sample generation method as in the above technical solution via execution of the executable instructions.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the image sample generation method as in the above technical solution.
In the technical solutions provided by the embodiments of the application, the beneficial effects include the following. First, the method obtains the image sample library file, extracts the sample code values of the image samples in the library file, renders an image from each sample code value with an off-screen canvas algorithm, and converts the image into a graphic feature with an image data acquisition algorithm. Second, after an image sample is acquired it is cropped and scaled to suit convolutional training, which avoids the problem that convolutional training cannot recognise stretch-deformed image samples, and so improves the accuracy of image recognition. Third, the image samples are output after being encoded by the encoding algorithm, which effectively reduces the file size of the image samples, reduces the computational load of convolutional training, improves training efficiency, and benefits the development of image recognition technology.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is evident that the drawings in the following description are only some embodiments of the present application and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 schematically shows a block diagram of an exemplary system architecture to which the technical solution of the present application is applied.
Fig. 2 schematically shows a flow chart of the image sample generation method of the present application.
Fig. 3 schematically shows a flow chart of the method of constructing an image sample library file according to the present application.
Fig. 4 schematically shows an arrow image sample mapping table of the present application.
Fig. 5 schematically shows a flow chart of a method of the application for obtaining a first sample characteristic.
Fig. 6 schematically shows a flow chart of a method of obtaining a graphical feature according to the application.
Fig. 7 schematically shows a flow chart of a method of obtaining a second sample feature according to the application.
Fig. 8 schematically shows a flow chart of a method of the application for further processing of convolved image samples.
Fig. 9 schematically shows a flowchart of a training method of the image recognition model of the present application.
Fig. 10 schematically shows a flow chart of the image recognition method of the present application.
Fig. 11 schematically shows a block diagram of an image sample generating apparatus provided by an embodiment of the present application.
Fig. 12 schematically shows a block diagram of an image recognition apparatus provided by an embodiment of the present application.
Fig. 13 schematically shows a block diagram of a computer system suitable for use in implementing embodiments of the application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the application may be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The block diagrams depicted in the figures are merely functional entities and do not necessarily correspond to physically separate entities. That is, the functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow diagrams depicted in the figures are exemplary only, and do not necessarily include all of the elements and operations/steps, nor must they be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the order of actual execution may be changed according to actual situations.
The embodiments of the application can acquire and process the relevant data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Image recognition technology is an important area of artificial intelligence. It refers to techniques that perform object recognition on an image to identify targets and objects of various patterns. Image recognition models may be trained using Convolutional Neural Networks (CNN), a type of feedforward neural network that contains convolutional computations and has a deep structure, most commonly used to analyse visual images.
In image recognition technology, image samples are an important basis for accuracy: without enough image samples, or with flawed samples, recognition accuracy drops. At present, image samples are usually obtained by downloading pictures or taking screenshots from the network, so acquisition efficiency is very low; moreover, the collected samples are fed directly into convolutional training without processing, which leads to low training efficiency and low training accuracy.
In order to solve the above technical problems, the present application discloses an image sample generation method, an image sample generation device, a computer readable medium, and an electronic apparatus, and the contents of the present application will be further described by various aspects.
Fig. 1 schematically shows a block diagram of an exemplary system architecture to which the technical solution of the present application is applied.
As shown in fig. 1, system architecture 100 may include a terminal device 110, a network 120, and a server 130. Terminal device 110 may include various electronic devices such as smart phones, tablet computers, notebook computers, desktop computers, and the like. The server 130 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. Network 120 may be a communication medium of various connection types capable of providing a communication link between terminal device 110 and server 130, and may be, for example, a wired communication link or a wireless communication link.
The system architecture in embodiments of the present application may have any number of terminal devices, networks, and servers, as desired for implementation. For example, the server 130 may be a server group composed of a plurality of server devices. In addition, the technical solution provided in the embodiment of the present application may be applied to the terminal device 110, or may be applied to the server 130, or may be implemented by the terminal device 110 and the server 130 together, which is not limited in particular.
The foregoing describes the contents of an exemplary system architecture to which the present application is applied, and the image sample generation method of the present application is described next. The application provides an image sample generation method, which comprises the steps of S210-S240.
In step S210: an image sample library file is obtained, the image sample library file comprising a set of sample encoded values for the image samples, the sample encoded values comprising hexadecimal files representing the image samples.
The image sample library file, also called a font library file, is a format adopted by web pages; different browsers use different font library files. The image sample library file formats usable by the application include ttf, otf, woff, eot, and svg files, with the suffixes .ttf, .otf, .woff, .eot, and .svg respectively. The application is not limited to these five formats; any other font format usable in web pages may also serve.
The ttf file (TrueType file) is based on a mathematical font description technique: it describes font outlines with mathematical functions, and contains instructions for font construction, color filling, digital description functions, flow control, grid processing control, additional hint control, and so on; it can be used by most browsers. The otf file (OpenType file) is also an outline font, built on the TrueType foundation with more functionality, and is likewise supported by most browsers. The woff file (Web Open Font Format file) is the font format standard adopted by web pages and is generally regarded as the best format for web fonts: it compresses efficiently to reduce file size, contains no encryption, and is not restricted by Digital Rights Management (DRM). The eot file (Embedded OpenType file) is an IE-specific font that can be created from a ttf file. The svg file is an image file format based on SVG font rendering.
The image sample library file contains data such as the pictures, characters, and typesetting of the web page, all stored in encoded form. For image samples, the library contains the set of sample code values of the image samples corresponding to every picture in the web page. A sample code value is the hexadecimal file used to represent an image sample; for example, a sample code value can be written with the prefix 0x. Sample code values correspond one-to-one with, and are bound to, their image samples.
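As a hypothetical illustration of this one-to-one binding (the concrete names and code values below are invented, not taken from the patent):

```python
# Each image sample name is bound one-to-one with a hexadecimal sample
# code value, so lookup works in both directions.
sample_codes = {"right-arrow": 0xE001, "left-arrow": 0xE002}
code_to_sample = {code: name for name, code in sample_codes.items()}
```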
To illustrate the foregoing with an example: suppose the image sample of an arrow icon is needed. After locating the web page of the corresponding material library on the network, the page's web font file can be obtained, for example in the woff format. That woff file contains the sample code values of all arrow icons in the web page, so step S210 yields a sample library file containing those sample code values.
For some image samples, no material library exists or the corresponding sample library file cannot be acquired; for example, a newly defined product has no material on the network. In that case, the sample library file can be constructed from bitmaps of the product collected in advance or uploaded by the user, as follows.
In one embodiment of the present application, as shown in fig. 3, fig. 3 schematically illustrates a flowchart of the method of the present application for constructing an image sample library file. The image sample library file may be constructed before the image sample library file is acquired, wherein the method of constructing the image sample library file includes steps S310-S330.
Step S310: establishing a mapping relation among image sample pixel information, image sample symbol information and sample coding values of the image samples to form an image sample mapping table;
The image sample pixel information corresponds to the actual picture information of the image sample, generally in the form of a bitmap (ImageBitmap). Because a bitmap file occupies a large amount of memory, the off-screen canvas algorithm does not process the bitmap file directly; instead, the bitmap file is mapped to image sample symbol information, and the symbol information is then rendered by the off-screen canvas algorithm. The application can automatically generate the image sample symbol information of a bitmap file using a symbol generation algorithm; the symbol information is a random symbol, guaranteed to correspond one-to-one with the image sample pixel information without repetition. However, because the symbol information consists of special symbols, storing it in the sample library file is difficult (for example, storing special symbols in a woff file is complex and cumbersome), so the application introduces the sample code value of the image sample and establishes a mapping between the sample code value and the symbol information. This yields the mapping relation among image sample pixel information, image sample symbol information and sample code value, so that given any one of the three, the other two can be obtained with a specific tool.
The content of the image sample mapping table is illustrated below. As shown in fig. 4, fig. 4 schematically illustrates an arrow image sample mapping table of the present application. The table shows the mapping relations of several arrows: for example, the image sample pixel information corresponding to the right arrow is the right-arrow image, the sample code value corresponding to the right arrow is 0x00000000000001, and the corresponding image sample symbol information is a reserved special symbol. Thus, using step S310, an image sample mapping table can be obtained, and step S320 is continued.
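The mapping table of step S310 can be sketched as a small JavaScript structure. The symbol code points, bitmap references and the left/up entries are hypothetical placeholders; only the right-arrow code value 0x00000000000001 comes from the example above.

```javascript
// Sketch of the image sample mapping table (step S310): each row binds
// pixel info, a private-use-area symbol, and a sample code value.
const imageSampleMap = [
  { pixelInfo: 'bitmap:arrow-right', symbol: '\uE001', code: '0x00000000000001' },
  { pixelInfo: 'bitmap:arrow-left',  symbol: '\uE002', code: '0x00000000000002' },
  { pixelInfo: 'bitmap:arrow-up',    symbol: '\uE003', code: '0x00000000000003' },
];

// Any one of the three fields can be used to look up the other two.
function lookupByCode(code) {
  return imageSampleMap.find((entry) => entry.code === code);
}

function lookupBySymbol(symbol) {
  return imageSampleMap.find((entry) => entry.symbol === symbol);
}
```

For example, `lookupByCode('0x00000000000001')` returns the row holding both the right-arrow symbol and its pixel information.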
Step S320: binding the sample code value of the image sample with the name of the image sample to form a key value pair.
The sample code values may be randomly allocated, or may be set according to the order of the image samples or their file names. For example, if the file names of the image samples are p1-p100, the corresponding image samples are automatically ordered 1-100, and the sample code values may then be 0x1-0x100. The name of each image sample can thus be bound to its sample code value to form a key-value pair; after binding, the sample code values and image sample names are in one-to-one correspondence, so the corresponding sample can be obtained from a single sample code value. Step S330 is performed after binding is completed.
Step S330: and saving the key value pairs into an array to form an image sample library file.
After the key value pairs are formed, the key value pairs corresponding to all the image samples can be stored in the array to form an image sample library file, and at the moment, different file formats can be generated according to different browsers, for example, the image sample library file can be stored as a woff file format.
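Steps S320-S330 can be sketched as follows; the numeric-suffix sorting and the way digits are appended after "0x" follow the p1-p100 example in the text, and the helper name is an assumption.

```javascript
// Sketch of steps S320-S330: bind each image sample's name to a sample code
// value and collect the key-value pairs into an array that stands in for the
// image sample library file (which could then be saved, e.g. in woff form).
function buildSampleLibrary(fileNames) {
  // Sort by the numeric suffix so p1..p100 order as 1..100, then assign
  // code values 0x1..0x100 as in the example (digits appended after "0x").
  const ordered = [...fileNames].sort(
    (a, b) => parseInt(a.slice(1), 10) - parseInt(b.slice(1), 10)
  );
  return ordered.map((name, index) => ({ name, code: '0x' + (index + 1) }));
}

const fileNames = Array.from({ length: 100 }, (_, i) => 'p' + (i + 1));
const sampleLibrary = buildSampleLibrary(fileNames);
```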
The image sample library file is obtained through the above steps, and step S220 may be performed at this time.
In step S220: and extracting a sample code value of the image sample, and rendering the image sample corresponding to the sample code value by using an off-screen canvas algorithm based on the sample code value to obtain a first sample characteristic.
To extract the sample code value of an image sample, for example, the sample code value 0x1 corresponding to the first image sample may be extracted; the image sample corresponding to that code value can then be rendered with the off-screen canvas algorithm to obtain the first sample feature, specifically as follows.
In one embodiment of the present application, as shown in fig. 5, fig. 5 schematically illustrates a flow chart of a method of the present application for obtaining a first sample feature. The method for rendering the image sample corresponding to the sample coding value by using an off-screen canvas algorithm based on the sample coding value to obtain the first sample characteristic comprises the following steps of S510-S520:
Step S510: calling an image sample mapping table by using an off-screen canvas algorithm, and acquiring the image sample symbol information corresponding to the sample code value from the image sample mapping table.

As can be seen from step S310, the mapping among image sample pixel information, image sample symbol information and sample code value is established in the image sample mapping table, and an interface for calling the image sample symbol information and the sample code value is configured in the off-screen canvas algorithm; the corresponding image sample symbol information can therefore be obtained from the sample code value by calling the image sample mapping table through the off-screen canvas algorithm. For example, referring to the example of step S310, the image sample symbol information corresponding to the right arrow is obtained from the sample code value 0x00000000000001. Step S520 may continue at this point.
Step S520: and drawing the symbol information of the image sample onto a canvas by using an off-screen canvas algorithm to obtain a first sample characteristic.
Before introducing the off-screen Canvas algorithm, the Canvas algorithm is introduced first. Canvas is a new element in HTML5, and JavaScript provides a series of Canvas APIs, so that drawing on the canvas is done directly with JavaScript; a developer can draw a series of graphics on the canvas with high flexibility. A Canvas is written in an HTML file as follows:
<canvas id="canvas" width="width" height="height"></canvas>
The id attribute is available for all HTML elements, while the latter two attributes (controlling width and height respectively) are specific to Canvas; the Canvas algorithm is supported by about 90% of browsers in use. While the Canvas algorithm is practical for drawing graphics, script parsing and execution is one of the biggest obstacles to smooth user feedback on some web sites. Because Canvas computation and rendering occur in the same thread as user operation responses, and Canvas's drawing functions are bound to the <canvas> tag, the Canvas API and the DOM are coupled, and the (sometimes time-consuming) computation in an animation will cause the app to stutter, degrading the user experience. The off-screen Canvas algorithm (OffscreenCanvas) used by the present application decouples the DOM and the Canvas API by moving the canvas off the screen. Due to this decoupling, drawing smoothness can be effectively improved and the user experience improved.
Specifically, the application uses the text filling function (fillText) of the off-screen canvas algorithm to draw the image sample symbol information on the canvas and obtain the first sample feature. Generating the image sample in this way improves the user experience and does not affect the user's other on-screen tasks. The first sample feature obtained in this step does not form a bitmap file; it is generated in another thread moved off the screen, so the user cannot observe it on the screen. The first sample feature can only be acquired through the image data acquisition algorithm (getImageData) and converted into a graphic feature, as in step S230.
In step S230: and converting the first sample characteristic into a graphic characteristic by using an image data acquisition algorithm, and cutting and scaling the graphic characteristic to obtain a second sample characteristic, wherein the image data acquisition algorithm is used for acquiring pixel data of the first sample characteristic.
The image data acquisition algorithm (getImageData) is used to acquire the pixel data of a designated rectangular area on the canvas; the designated rectangular area corresponds to the area in which the image sample symbol information was drawn on the canvas in step S520. The pixel data includes the RGB values and transparency of all pixel points in the designated rectangular area. For example, suppose two pixel points on the canvas are acquired, pure black and pure white, each with 100% opacity; the corresponding pixel data can be expressed as rgba(0, 0, 0, 1) and rgba(255, 255, 255, 1), where the first three parameters represent the RGB color and the last parameter represents the transparency a, which ranges over [0, 1]. Since the color channel values all lie in the range [0, 255], the transparency must be correspondingly converted into [0, 255] form as well, with 0 mapping to 0 and 1 mapping to 255. With the above explained, the process of the present application for converting a first sample feature into a graphic feature is further described below.
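The transparency conversion just described can be sketched as follows; the function names are made up for illustration, and simple rounding is assumed.

```javascript
// Sketch of the transparency conversion: rgba() notation uses an alpha in
// [0, 1], while getImageData returns every channel, alpha included, as a
// byte in [0, 255], so 0 maps to 0 and 1 maps to 255.
function alphaToByte(alpha) {
  return Math.round(alpha * 255);
}

// Convert one rgba() tuple into the byte layout getImageData would return.
function rgbaToPixelBytes([r, g, b, a]) {
  return [r, g, b, alphaToByte(a)];
}
```

For example, the fully opaque black pixel rgba(0, 0, 0, 1) becomes the bytes [0, 0, 0, 255].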
In one embodiment of the application, as shown in FIG. 6, FIG. 6 schematically illustrates a flow chart of a method of the application for obtaining a graphical feature. In one embodiment of the present application, the image data acquisition algorithm is used to convert the first sample feature into a graphical feature, comprising steps S610-S630:
step S610: and calling an image sample mapping table by using an image data acquisition algorithm, and acquiring image sample pixel information corresponding to the image sample symbol information from the image sample mapping table.
As can be seen from step S310, the mapping among image sample pixel information, image sample symbol information and sample code value is established in the image sample mapping table, and an interface for calling the image sample symbol information and image sample pixel information is configured in the image data acquisition algorithm (getImageData); the image sample pixel information can therefore be obtained from the image sample symbol information by calling the image sample mapping table through the image data acquisition algorithm. For example, referring to the example of step S310, the image sample pixel information corresponding to the right arrow, i.e. the RGBA composition of the arrow, is obtained from the right arrow's image sample symbol information. Step S620 may continue at this point.
Step S620: and acquiring a rectangular region corresponding to the symbol information of the image sample in the first sample characteristic as a drawing region.
The area corresponds to the area drawn on the canvas in step S520, and a rectangular area is generally taken, that is, an area of a specific size taken outward with the image sample symbol information drawn on the canvas at the center. The specific size may be set by a preset default, for example 30mm x 20mm for all samples. The drawing area may also be determined from the image sample pixel information: for example, the image sample pixel information includes the pixel information of all pixel points, and the corresponding drawing area is then the smallest rectangular area containing all of those pixel points. After the drawing area is set in the above manner, step S630 may be continued.
Step S630: image sample pixel information is rendered on the drawing area.
The method of rendering the image sample pixel information on the drawing area is to draw and generate an image in a newly created canvas using the rgba pixel data. This canvas is displayed directly on the screen, and the user can see the graphic feature drawn in it, for example a rendered right arrow.
Because the size of the drawing area obtained by the above method is fixed, the image sample corresponding to the graphic feature may contain considerable blank space; blank space may also arise from the shape of the image sample itself. These blank areas need to be removed, which avoids the influence on convolution training of image samples that are stretched and deformed or graphic samples with a large number of invalid areas. The method of the present application for processing graphic features is as follows.
In one embodiment of the application, as shown in FIG. 7, FIG. 7 schematically illustrates a flow chart of a method of the application for obtaining a second sample feature. The method for clipping and scaling the graphic features to obtain the second sample features includes steps S710-S720.
Step S710: and traversing pixels of the graphic features to obtain a minimum non-blank rectangular area of the graphic features.
If the graphic feature was obtained by the above steps S610-S630, the drawing area is itself a rectangular area, and step S710 may be skipped. However, if the graphic feature was acquired by another method according to a contour rather than a rectangle, the image samples must be unified using step S710. The specific method is to traverse all pixels in the graphic feature and obtain its minimum non-blank rectangular area: non-blank pixels carry meaningful rgba values, while a blank area may be a fully white, fully opaque area, i.e. rgba values of (255, 255, 255, 255). The minimum non-blank rectangular area can thus be determined from the pixels of the graphic feature, and it corresponds to the smallest rectangle enclosing the graphic feature. After the minimum rectangular area is obtained, step S720 may be continued.
Step S720: the minimum non-blank rectangular area is expanded to a non-blank square area.
The method of expanding the minimum non-blank rectangular area into a non-blank square area is to compare the length and width of the minimum non-blank rectangular area and expand the smaller dimension to equal the larger one; for example, a rectangular area of 30mm x 20mm expands to a non-blank square area of 30mm x 30mm.
Step S730: and cutting and scaling the graphic features according to the non-blank square areas to obtain second sample features.
After the non-blank square area is determined, the other blank areas of the graphic feature can be cropped away, and all image samples are then scaled, the specific scaling being determined by the target size.
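Steps S710-S730 can be sketched on a toy pixel grid, where 1 marks a non-blank pixel and 0 a blank (for example fully white) one; the helper names and the grid itself are assumptions.

```javascript
// A toy graphic feature: 1 marks a non-blank pixel, 0 marks blank.
const grid = [
  [0, 0, 0, 0, 0],
  [0, 1, 1, 1, 0],
  [0, 0, 1, 0, 0],
  [0, 0, 0, 0, 0],
];

// Step S710: traverse all pixels and find the minimum non-blank rectangle.
function minNonBlankRect(grid) {
  let top = Infinity, left = Infinity, bottom = -1, right = -1;
  for (let y = 0; y < grid.length; y++) {
    for (let x = 0; x < grid[y].length; x++) {
      if (grid[y][x]) {
        top = Math.min(top, y); left = Math.min(left, x);
        bottom = Math.max(bottom, y); right = Math.max(right, x);
      }
    }
  }
  return { top, left, bottom, right };
}

// Step S720: expand the shorter side so the region becomes square.
function expandToSquare({ top, left, bottom, right }) {
  const h = bottom - top + 1, w = right - left + 1;
  if (w < h) right = left + h - 1;
  else if (h < w) bottom = top + w - 1;
  return { top, left, bottom, right };
}

// The clipping part of step S730: crop the grid to the square region.
function crop(grid, { top, left, bottom, right }) {
  return grid.slice(top, bottom + 1).map((row) => row.slice(left, right + 1));
}
```

On this grid the minimum rectangle is 3 wide by 2 tall, so the square expands downward to 3 by 3 before cropping.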
After the second sample feature of a fixed size is obtained using the above steps, step S240 may continue.
In step S240: and encoding the second sample characteristic through an encoding algorithm and then outputting the second sample characteristic to obtain a convolution image sample.
The function of step S240 is to encode the second sample feature by a specific encoding method and then output it, further reducing the file size and thereby reducing the computational load of convolution training. The encoding method used in the present application may be base64 encoding, one of the encoding modes for transmitting 8-bit byte codes: a method of representing binary data with 64 printable characters. Base64 encoding is a binary-to-character process that can be used to convey longer identification information in an HTTP environment. The convolved image sample is obtained through the above steps, so that convolution training can be performed with it.
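A minimal sketch of this base64 output step, using Node's standard Buffer (in a browser one might use btoa instead); the byte values stand in for real pixel data, and the function names are assumptions.

```javascript
// Sketch of step S240: base64-encode the second sample feature's bytes
// before output, shrinking the payload handed to convolution training.
function encodeFeature(bytes) {
  return Buffer.from(bytes).toString('base64');
}

// Inverse direction, useful for checking the encoding round-trips.
function decodeFeature(text) {
  return Array.from(Buffer.from(text, 'base64'));
}
```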
The convolution image sample is obtained by the steps, and the application further comprises the following steps for further processing the obtained convolution image sample.
In one embodiment of the present application, as shown in FIG. 8, FIG. 8 schematically illustrates a flow chart of a method of the present application for further processing a convolved image sample. After the second sample feature is encoded by the encoding algorithm and then output, the corresponding method of the present application further includes step S810-step S830 after obtaining the convolved image sample.
Step S810: traversing the image samples in the image sample library file, and converting the image samples into convolution image samples.
Steps S210-S240 yield the convolved image sample of one image sample in the image sample library; the application also converts the other image samples in the image sample library into convolved image samples by the same method. In this way a sufficient number of convolved image samples is obtained. Inputting them into the model one by one for convolution training would be cumbersome, so all the convolved image samples are processed together, as in step S820.
Step S820: and converging the convolution image samples to obtain a convolution image sample set.
After obtaining the convolution image sample set, the set may be input to the model for convolution training, and in order to further improve the efficiency of convolution training, the present application further includes step S830.
Step S830: the convolved image sample set is converted to a json format file.
The full set of samples obtained in the previous step is converted into a json format file using a binary large object (Blob), an object type of JavaScript; the File object of HTML5 is a branch or subset of Blob. A json format file is a data format for storing and exchanging text information, a lightweight data exchange format; using it, the whole convolved image sample set can be input into the model for convolution training, improving training efficiency. The training method of the application for the model is as follows.
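A sketch of the json conversion of step S830, assuming illustrative sample entries (a label plus base64 data); in a browser the resulting string could be wrapped in a Blob for file output.

```javascript
// Sketch of step S830: serialize the convolved image sample set to JSON
// text. The labels and base64 payloads here are made-up stand-ins.
const convolvedSampleSet = [
  { label: 'arrow-right', data: 'AAEC' },
  { label: 'arrow-left',  data: 'AwQF' },
];

const jsonFile = JSON.stringify(convolvedSampleSet);
// In a browser: new Blob([jsonFile], { type: 'application/json' })
```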
In one embodiment of the present application, as shown in fig. 9, fig. 9 schematically shows a flowchart of a training method of the image recognition model of the present application. After converting the convolved image sample set into the json format file, the method for training the image recognition model includes steps S910-S930:
step S910: and scattering the json format file by using a scattering algorithm to form a random sample set.
The application uses a shuffling algorithm running in Node, a server-side JavaScript runtime, to randomly shuffle the json format file; its function is to randomly shuffle the data set so as to improve randomness. After the random sample set is obtained, continue with step S920.
Step S920: selecting a random sample set, taking seventy percent of image samples in the random sample set as training samples, taking twenty percent of image samples in the random sample set as verification samples, and taking ten percent of image samples in the random sample set as test samples.
After the various samples are obtained in step S920, they may be input into the model for training.
Step S930: training a preset convolutional neural network model by taking a training sample as an input sample and taking a sample label corresponding to the training sample as an output sample to obtain an image recognition model, and inputting a verification sample and a test sample into the image recognition model; and adjusting the coefficients of the image recognition model according to the known sample labels of the verification sample and the test sample and the predicted sample labels output by the image recognition model, so that the predicted sample labels output by the image recognition model are consistent with the known sample labels of the verification sample and the test sample.
The image recognition model is trained and generated using the method corresponding to step S930, with the samples obtained by steps S210-S240, and image samples can then be predicted using the image recognition model. The application can be used for image recognition of most images, for example icon recognition. An existing front-end page may use a wide variety of icons, more of them over time, and the naming of the icons varies widely, making them difficult to constrain. When restoring a design draft, a developer often needs to search for the corresponding icon among hundreds of icons by eye; the present application can identify the category of an icon with the image recognition model, achieving automatic identification. Of course, the image recognition model of the application can also be used to identify other images.
In the technical scheme provided by the embodiment of the application, the beneficial effects include the following. Firstly, the method acquires the image sample library file, extracts the sample code value of the image sample in the sample library file, renders the image based on the sample code value with the off-screen canvas algorithm, and then converts it into a graphic feature through the image data acquisition algorithm; acquiring image samples in this way, images can be obtained automatically and directly from the image sample library file of a web page, without screenshots or downloading of image samples, so image sample acquisition is highly efficient. Secondly, after the image sample is acquired, it is cropped and scaled to suit convolution training, avoiding the problem that stretched and deformed image samples cannot be recognized in convolution training, and improving the accuracy of image recognition. Thirdly, the image samples are output after being encoded by the encoding algorithm, which effectively reduces the file size of the image samples, reduces the computational load of convolution training, improves training efficiency, and benefits the development of image recognition technology.
The steps of the sample generation method of the present application are described in the above section, and the disclosure of other aspects of the present application is continued.
According to an aspect of an embodiment of the present application, as shown in fig. 10, fig. 10 schematically shows a flowchart of an image recognition method of the present application. The application provides an image recognition method, which comprises the steps of S1010 and S1030:
step S1010: and acquiring an image sample to be identified.
The image sample to be identified may be a specific icon or picture information.
Step S1020: and cutting and scaling the image sample to be identified to obtain the characteristics of the sample to be identified.
The method of clipping and scaling the image sample to be identified is the same as step S730, and will not be described here again.
Step S1030: and inputting the characteristics of the sample to be identified into the image identification model to identify, so as to obtain the label of the image sample to be identified.
The label of the image sample to be identified can be obtained by using the image identification model, so that the automatic identification of the image sample to be identified is realized.
Since the image recognition method of the application performs recognition using the samples obtained by steps S210-S240, its recognition accuracy is higher.
It should be noted that although the steps of the methods of the present application are depicted in the accompanying drawings in a particular order, this does not require or imply that the steps must be performed in that particular order, or that all illustrated steps be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform, etc.
The above section describes the contents of the method of the application and the following continues to describe the contents of the apparatus of the application.
The following describes an embodiment of the apparatus of the present application, which may be used to perform the image sample generation method in the above-described embodiment of the present application. As shown in fig. 11, fig. 11 schematically shows a block diagram of an image sample generating apparatus provided by an embodiment of the present application.
According to an aspect of an embodiment of the present application, there is provided an image sample generation apparatus 1100 including:
An acquisition module 1110 configured to acquire an image sample library file comprising a set of sample encoded values for image samples, the sample encoded values being used to represent hexadecimal files of the image samples;
The feature extraction module 1120 is configured to extract a sample code value of the image sample, and render the image sample corresponding to the sample code value by using an off-screen canvas algorithm based on the sample code value to obtain a first sample feature;
the feature processing module 1130 is configured to convert the first sample feature into a graphic feature by using an image data acquisition algorithm, and crop and scale the graphic feature to obtain a second sample feature, where the image data acquisition algorithm is used to acquire pixel data of the first sample feature;
An output module 1140 is configured to encode the second sample feature by an encoding algorithm and output the encoded second sample feature to obtain a convolved image sample.
In one embodiment of the application, feature processing module 1130 includes:
A region acquisition unit configured to traverse pixels of the graphic feature and acquire a minimum non-blank rectangular region of the graphic feature;
An expansion unit configured to expand a minimum non-blank rectangular area into a non-blank square area;
and the clipping and scaling unit is configured to clip and scale the graphic features according to the non-blank square area to obtain second sample features.
In one embodiment of the application, the feature extraction module 1120 includes:
The symbol acquisition unit is configured to call an image sample mapping table by using an off-screen canvas algorithm, and acquire the image sample symbol information corresponding to the sample code value from the image sample mapping table;
and the drawing unit is configured to draw the symbol information of the image sample onto the canvas by using an off-screen canvas algorithm to obtain the first sample characteristics.
In one embodiment of the present application, the image sample generation apparatus further includes a sample library construction unit including:
The sample mapping unit is configured to establish a mapping relation among image sample pixel information, image sample symbol information and sample coding values of the image samples to form an image sample mapping table;
A binding unit configured to bind the sample code value of the image sample with the name of the image sample to form a key value pair;
And the storage unit is configured to store the key value pairs into an array to form an image sample library file.
In one embodiment of the present application, the image sample generation apparatus further includes a sample processing module including:
the traversing unit is configured to traverse the image samples in the image sample library file and convert the image samples into convolution image samples;
the aggregation unit is configured to aggregate the convolution image samples to obtain a convolution image sample set;
and a format conversion unit configured to convert the convolved image sample set into a json format file.
In one embodiment of the application, the sample processing module further comprises:
the scattering unit is configured to scatter json format files by using a scattering algorithm to form a random sample set;
a sample selection unit configured to select a random sample set, wherein seventy percent of image samples in the random sample set are used as training samples, twenty percent of image samples in the random sample set are used as verification samples, and ten percent of image samples in the random sample set are used as test samples;
The model training unit is configured to train a preset convolutional neural network model by taking a training sample as an input sample and taking a sample label corresponding to the training sample as an output sample, obtain an image recognition model, input a verification sample and a test sample into the image recognition model, and adjust coefficients of the image recognition model according to known sample labels of the verification sample and the test sample and a predicted sample label output by the image recognition model, so that the predicted sample label output by the image recognition model is consistent with the known sample labels of the verification sample and the test sample.
Specific details of the image generating apparatus provided in each embodiment of the present application have been described in the corresponding method embodiments, and are not described herein.
The content of the image sample generation device of the present application is described in the above section, and the image recognition device of the present application is continuously disclosed.
According to an aspect of the embodiment of the present application, as shown in fig. 12, fig. 12 schematically shows a block diagram of an image recognition apparatus provided by the embodiment of the present application. The present application provides an image recognition apparatus 1200, comprising:
A sample acquisition module 1210 configured to acquire an image sample to be identified;
The sample feature extraction module 1220 is configured to cut and scale the image sample to be identified to obtain sample features to be identified;
The identifying module 1230 is configured to input the features of the sample to be identified into the image identifying model to identify, and obtain the label of the image sample to be identified.
The specific details of the image recognition device provided by the present application have been described in the corresponding method embodiments, and are not described herein.
The foregoing describes the apparatus of the present application and continues to describe other aspects of the present application.
According to an aspect of the embodiments of the present application, there is provided a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements an image sample generation method as in the above technical solution.
According to an aspect of an embodiment of the present application, there is provided an electronic apparatus including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the image sample generation method as in the above technical solution via execution of the executable instructions.
According to an aspect of embodiments of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions so that the computer device performs the image sample generation method as in the above technical solution.
Fig. 13 schematically shows a block diagram of a computer system of an electronic device for implementing an embodiment of the application.
It should be noted that, the computer system 1300 of the electronic device shown in fig. 13 is only an example, and should not impose any limitation on the functions and the application scope of the embodiments of the present application.
As shown in fig. 13, the computer system 1300 includes a Central Processing Unit 1301 (CPU), which can perform various appropriate actions and processes according to a program stored in a Read-Only Memory 1302 (ROM) or a program loaded from a storage portion 1308 into a Random Access Memory 1303 (RAM). The random access memory 1303 also stores various programs and data necessary for system operation. The CPU 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304. An Input/Output interface 1305 (I/O interface) is also connected to the bus 1304.
The following components are connected to the input/output interface 1305: an input portion 1306 including a keyboard, a mouse, and the like; an output portion 1307 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), a speaker, and the like; a storage portion 1308 including a hard disk or the like; and a communication portion 1309 including a network interface card such as a local area network card, a modem, or the like. The communication portion 1309 performs communication processing via a network such as the internet. A drive 1310 is also connected to the input/output interface 1305 as needed. A removable medium 1311, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1310 as needed, so that a computer program read therefrom can be installed into the storage portion 1308 as needed.
In particular, the processes described in the various method flowcharts may be implemented as computer software programs according to embodiments of the application. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1309 and/or installed from the removable medium 1311. The computer program, when executed by the central processing unit 1301, performs the various functions defined in the system of the present application.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It should be noted that although in the above detailed description several modules or units of a device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functions of two or more modules or units described above may be embodied in one module or unit in accordance with embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into a plurality of modules or units to be embodied.
From the above description of embodiments, those skilled in the art will readily appreciate that the example embodiments described herein may be implemented in software, or may be implemented in software in combination with the necessary hardware. Thus, the technical solution according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard disk) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a touch terminal, a network device, or the like) to perform the method according to the embodiments of the present application.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (8)

1. An image sample generation method, comprising:
establishing a mapping relationship among image sample pixel information, image sample symbol information, and sample encoding values of image samples to form an image sample mapping table;
binding the sample encoding value of an image sample to the name of the image sample to form a key-value pair;
saving the key-value pairs into an array to form an image sample library file;
acquiring the image sample library file, the image sample library file comprising a set of sample encoding values of image samples, the sample encoding values comprising a hexadecimal file for representing the image samples;
extracting the sample encoding value of the image sample, calling the image sample mapping table by using an off-screen canvas algorithm, and obtaining, from the image sample mapping table, image sample symbol information corresponding to the sample encoding value;
drawing the image sample symbol information onto a canvas by using the off-screen canvas algorithm to obtain a first sample feature;
converting the first sample feature into a graphic feature by using an image data acquisition algorithm, and cropping and scaling the graphic feature to obtain a second sample feature, the image data acquisition algorithm being used to acquire pixel data of the first sample feature;
encoding the second sample feature by an encoding algorithm and then outputting it to obtain a convolution image sample.

2. The image sample generation method according to claim 1, wherein cropping and scaling the graphic feature to obtain the second sample feature comprises:
traversing the pixels of the graphic feature to obtain the minimum non-blank rectangular area of the graphic feature;
expanding the minimum non-blank rectangular area into a non-blank square area;
cropping and scaling the graphic feature according to the non-blank square area to obtain the second sample feature.

3. The image sample generation method according to claim 1, wherein after the second sample feature is encoded by the encoding algorithm and output to obtain the convolution image sample, the method further comprises:
traversing the image samples in the image sample library file and converting the image samples into convolution image samples;
aggregating the convolution image samples to obtain a convolution image sample set;
converting the convolution image sample set into a JSON format file.

4. The image sample generation method according to claim 3, wherein after the convolution image sample set is converted into the JSON format file, the method further comprises:
shuffling the JSON format file by using a shuffling algorithm to form a random sample set;
selecting from the random sample set seventy percent of the image samples as training samples, twenty percent of the image samples as validation samples, and ten percent of the image samples as test samples;
training a preset convolutional neural network model with the training samples as input samples and the sample labels corresponding to the training samples as output samples to obtain an image recognition model; and inputting the validation samples and the test samples into the image recognition model, and adjusting the coefficients of the image recognition model according to the known sample labels of the validation samples and the test samples and the predicted sample labels output by the image recognition model, so that the predicted sample labels output by the image recognition model are consistent with the known sample labels of the validation samples and the test samples.

5. An image recognition method, comprising:
acquiring an image sample to be identified;
cropping and scaling the image sample to be identified to obtain features of the sample to be identified;
inputting the features of the sample to be identified into the image recognition model according to claim 4 for identification to obtain the label of the image sample to be identified.

6. An image sample generating apparatus, comprising:
a sample mapping unit configured to establish a mapping relationship among image sample pixel information, image sample symbol information, and sample encoding values of image samples to form an image sample mapping table;
a binding unit configured to bind the sample encoding value of an image sample to the name of the image sample to form a key-value pair;
a saving unit configured to save the key-value pairs into an array to form an image sample library file;
an acquisition module configured to acquire the image sample library file, the image sample library file comprising a set of sample encoding values of image samples, the sample encoding values comprising a hexadecimal file for representing the image samples;
a feature extraction module configured to extract the sample encoding value of the image sample and, based on the sample encoding value, render the image sample corresponding to the sample encoding value by using an off-screen canvas algorithm to obtain a first sample feature, the feature extraction module comprising: a symbol acquisition unit configured to call the image sample mapping table by using the off-screen canvas algorithm and obtain, from the image sample mapping table, image sample symbol information corresponding to the sample encoding value; and a drawing unit configured to draw the image sample symbol information onto a canvas by using the off-screen canvas algorithm to obtain the first sample feature;
a feature processing module configured to convert the first sample feature into a graphic feature by using an image data acquisition algorithm and to crop and scale the graphic feature to obtain a second sample feature, the image data acquisition algorithm being used to acquire pixel data of the first sample feature;
an output module configured to encode the second sample feature by an encoding algorithm and then output it to obtain a convolution image sample.

7. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the image sample generation method according to any one of claims 1 to 4.

8. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to execute the image sample generation method according to any one of claims 1 to 4 by executing the executable instructions.
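The cropping described in claim 2 — traverse the pixels for the minimal non-blank rectangle, expand it to a square, then crop — can be sketched in plain Python as below. The 2-D-list pixel representation, the zero-means-blank convention, and the centering-and-clamping strategy used when growing the rectangle into a square are illustrative assumptions; the claim itself does not fix them. The subsequent scaling to a target size (e.g. by nearest-neighbour resampling) is omitted here.

```python
def min_nonblank_rect(pixels):
    """Traverse the pixels and return (top, left, bottom, right) of the
    minimal rectangle containing every non-blank (non-zero) pixel, inclusive."""
    coords = [(r, c) for r, row in enumerate(pixels)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

def expand_to_square(top, left, bottom, right, height, width):
    """Grow the shorter side of the rectangle so the region becomes square,
    roughly centred and clamped to the image bounds (one possible strategy)."""
    h, w = bottom - top + 1, right - left + 1
    side = max(h, w)
    top = max(0, min(top - (side - h) // 2, height - side))
    left = max(0, min(left - (side - w) // 2, width - side))
    return top, left, top + side - 1, left + side - 1

def crop_square(pixels):
    """Crop the graphic feature to the non-blank square region."""
    t, l, b, r = min_nonblank_rect(pixels)
    t, l, b, r = expand_to_square(t, l, b, r, len(pixels), len(pixels[0]))
    return [row[l:r + 1] for row in pixels[t:b + 1]]
```

For a tall, narrow glyph the rectangle is widened into a square that still contains every non-blank pixel, so the later scaling step does not distort the aspect ratio.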
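The shuffle-then-split step of claim 4 can be sketched as follows. The fixed seed and the rounding of the 70/20/10 boundaries are illustrative choices that the claim leaves open.

```python
import random

def split_samples(samples, seed=0):
    """Shuffle the sample set, then take 70% for training, 20% for
    validation, and the remaining 10% for testing."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)  # the "shuffling algorithm"
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_val = int(n * 0.2)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test
```

For 10 samples this yields 7 training, 2 validation, and 1 test sample, and every sample lands in exactly one subset.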
CN202111187073.2A 2021-10-12 2021-10-12 Image sample generation method, device, computer readable medium and electronic device Active CN113902018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111187073.2A CN113902018B (en) 2021-10-12 2021-10-12 Image sample generation method, device, computer readable medium and electronic device

Publications (2)

Publication Number Publication Date
CN113902018A CN113902018A (en) 2022-01-07
CN113902018B true CN113902018B (en) 2024-11-15

Family

ID=79191568

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111187073.2A Active CN113902018B (en) 2021-10-12 2021-10-12 Image sample generation method, device, computer readable medium and electronic device

Country Status (1)

Country Link
CN (1) CN113902018B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115919273B (en) * 2022-12-07 2024-10-18 北京中电普华信息技术有限公司 Sub-health early warning system and related equipment based on deep learning

Citations (3)

Publication number Priority date Publication date Assignee Title
CN111401211A (en) * 2020-03-11 2020-07-10 山东大学 Iris identification method adopting image augmentation and small sample learning
CN111783525A (en) * 2020-05-20 2020-10-16 中国人民解放军93114部队 A method for generating target samples of aerial photography images based on style transfer
CN111860485A (en) * 2020-07-24 2020-10-30 腾讯科技(深圳)有限公司 Training method of image recognition model, and image recognition method, device and equipment

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US7421130B2 (en) * 2004-06-25 2008-09-02 Seiko Epson Corporation Method and apparatus for storing image data using an MCU buffer
US8737725B2 (en) * 2010-09-20 2014-05-27 Siemens Aktiengesellschaft Method and system for learning based object detection in medical images
CN109670427B (en) * 2018-12-07 2021-02-02 腾讯科技(深圳)有限公司 Image information processing method and device and storage medium
CN109829501B (en) * 2019-02-01 2021-02-19 北京市商汤科技开发有限公司 Image processing method and device, electronic device and storage medium
CN110543815B (en) * 2019-07-22 2024-03-08 平安科技(深圳)有限公司 Training method of face recognition model, face recognition method, device, equipment and storage medium
CN112560544A (en) * 2019-09-10 2021-03-26 中科星图股份有限公司 Method and system for identifying ground object of remote sensing image and computer readable storage medium
CN111127378A (en) * 2019-12-23 2020-05-08 Oppo广东移动通信有限公司 Image processing method, image processing device, computer equipment and storage medium
CN112712121B (en) * 2020-12-30 2023-12-05 浙江智慧视频安防创新中心有限公司 Image recognition model training method, device and storage medium
CN112866577B (en) * 2021-01-20 2022-05-27 腾讯科技(深圳)有限公司 Image processing method and device, computer readable medium and electronic equipment

Also Published As

Publication number Publication date
CN113902018A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
CN111553131B (en) PSD file analysis method, device, equipment and readable storage medium
US20250077761A1 (en) Character generation method and apparatus, electronic device, and storage medium
CN113781356B (en) Training method of image denoising model, image denoising method, device and equipment
US9117314B2 (en) Information output apparatus, method, and recording medium for displaying information on a video display
CN114820885B (en) Image editing method and model training method, device, device and medium thereof
CN112395834A (en) Brain graph generation method, device and equipment based on picture input and storage medium
JP7309811B2 (en) Data annotation method, apparatus, electronics and storage medium
CN111915705B (en) Picture visual editing method, device, equipment and medium
JP2023039892A (en) Training method for character generation model, character generating method, device, apparatus, and medium
CN116468970A (en) Model training method, image processing method, device, equipment and medium
CN116954450A (en) Screenshot method and device for front-end webpage, storage medium and terminal
CN113902018B (en) Image sample generation method, device, computer readable medium and electronic device
CN118643342A (en) Sample pair generation, large model training, image retrieval method and device, equipment and medium
CN114791989B (en) A browser-based PSD file parsing method, system, and storage medium
CN110162301B (en) Form rendering method, form rendering device and storage medium
CN115268904A (en) User interface design file generation method, device, equipment and medium
CN114331932B (en) Target image generation method and device, computing device and computer storage medium
CN113407189B (en) Picture material processing method and device, storage medium and electronic equipment
CN116071760A (en) Training model, character recognition method apparatus, device, and storage medium
CN115935909A (en) File generation method and device and electronic equipment
CN115329720A (en) Document display method, device, equipment and storage medium
CN113703745B (en) Page generation method and device, computer-readable storage medium, and electronic device
CN114485716B (en) Lane rendering method, device, electronic device and storage medium
CN117934652A (en) Poster generation method and device, computer readable storage medium and server
WO2025119178A1 (en) Video generation method and apparatus, device, and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant