CN110414633B - System and recognition method for handwritten font recognition - Google Patents
- Publication number
- CN110414633B (application CN201910598723.9A)
- Authority
- CN
- China
- Prior art keywords
- neural network
- convolutional neural
- layer
- recognition
- training
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/24—Character recognition characterised by the processing or recognition method
- G06V30/242—Division of the character sequences into groups prior to recognition; Selection of dictionaries
- G06V30/244—Division of the character sequences into groups prior to recognition; Selection of dictionaries using graphical properties, e.g. alphabet type or font
- G06V30/2455—Discrimination between machine-print, hand-print and cursive writing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Psychiatry (AREA)
- Character Discrimination (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The present invention relates to font recognition devices, and in particular to a system and a recognition method for handwritten font recognition.
Background Art
Character recognition originated in the United States in the early 1950s. As an extremely important part of industrial applications, its development has attracted continuous attention.
At present, character recognition mainly relies on optical character recognition (OCR). The general process is as follows: the characters on paper are captured by an electronic device (scanner, camera, etc.), binarized into a black-and-white image, denoised, and skew-corrected; layout analysis such as segmentation into blocks and lines is performed on the document image; the characters are then segmented, and each segmented character is fed into a specific character recognition program, such as a convolutional neural network or a support vector machine (SVM), which finally outputs the recognized characters.
Because this field works with photographically captured images, pixel distortion is an inherent problem.
Summary of the Invention
Purpose of the invention: the purpose of the present invention is to provide a system and a recognition method for handwritten font recognition that solve the problems of being easily affected by the environment and having an unstable recognition rate.
Technical solution: the system for handwritten font recognition according to the present invention comprises a flexible wearable wristband, a sensor signal acquisition and processing unit, and a convolutional neural network unit, the flexible wearable wristband being electrically connected to both the sensor signal acquisition and processing unit and the convolutional neural network unit.
The flexible wearable wristband comprises a flexible substrate on which a pressure/deformation sensor is arranged, the flexible substrate being covered with an insulating encapsulation layer.
The recognition method of the system for handwritten font recognition according to the present invention comprises the following steps:
(1) The flexible wearable wristband acquires wrist dynamics parameter information at the characteristic points during wrist bending motion through changes in sensor resistance;
(2) The sensor signal acquisition and processing unit converts the acquired wrist dynamics parameter information into electrical response signals and outputs them;
(3) The convolutional neural network unit trains the neural network model to obtain the optimal handwriting convolutional neural network discrimination model; the electrical response signals are then input into this model and, after prediction by the intermediate network layers, the handwritten characters are recognized.
Step (3) specifically comprises:
(a) Have different people write the same characters with different pen-holding postures, different writing forces, and different writing styles, to build the data set used for training the neural network;
(b) Construct the structure of the convolutional neural network model, set the number of output units of the neural network to T, and pre-train with a network data set to initialize the weight parameters;
(c) Input the constructed data set into the constructed convolutional neural network model and train it to optimize the weight parameters, finally obtaining the optimal handwriting convolutional neural network discrimination model;
(d) Input the electrical response signal obtained by the sensor signal acquisition and processing unit into the optimal handwriting convolutional neural network discrimination model and output the discrimination result.
The data set in step (a) has the format (source, target), where source is the collected and processed wrist dynamics parameter information at the characteristic points and target is the handwritten character encoded in one-hot format.
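By way of illustration only (this code is not part of the patent), a minimal Python sketch of assembling one (source, target) pair might look as follows; the character list, its ordering, and the helper names are assumptions introduced for the example, taking the four characters of the embodiment described later.

```python
import numpy as np

# Assumed class list and ordering — the patent does not fix this mapping.
CLASSES = ["东", "南", "大", "学"]

def one_hot(label: str) -> np.ndarray:
    """Encode a handwritten-character label as a one-hot vector of length T."""
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(label)] = 1.0
    return vec

def make_sample(processed_signal: np.ndarray, label: str):
    """source = processed wrist-dynamics matrix, target = one-hot encoded label."""
    return processed_signal.astype(np.float32), one_hot(label)
```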
During training in step (c), cross-validation is used to check whether the model overfits, and the softmax cross-entropy loss is used as the network's error loss function for back-propagation training, until the error no longer decreases for N consecutive training epochs, thereby obtaining the optimal handwriting discrimination model.
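A minimal training-loop sketch of this early-stopping scheme, assuming PyTorch and pre-built data loaders; the optimizer, learning rate, and epoch limit are placeholders not fixed by the patent, and the default patience of 5 follows the embodiment described later.

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, patience=5, lr=1e-3, max_epochs=200):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()              # softmax + cross-entropy in one operation
    best_loss, epochs_without_improvement = float("inf"), 0

    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:                  # x: [batch, 3, 2000]; y: class indices (argmax of one-hot targets)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()                        # back-propagation
            optimizer.step()

        model.eval()                               # held-out loss drives early stopping
        with torch.no_grad():
            val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)

        if val_loss < best_loss:
            best_loss, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:   # N consecutive epochs without a decrease
                break
    return model
```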
Beneficial effects: the handwritten font recognition of the present invention is based on a flexible wearable wristband, which eliminates the problems of ambient light and image resolution, making it more stable and cheaper; with the aid of a convolutional neural network, the trained model recognizes handwritten characters in a short time, with a high recognition rate and stronger robustness; by flexibly modifying the output layer structure and using transfer learning, the invention can be adapted to various specific scenarios, such as recognition of particular handwritten fonts.
Description of the Drawings
Figure 1 is a block diagram of the modules of the present invention;
Figure 2 is a schematic diagram of the structure of the flexible wearable wristband;
Figure 3 is a flow chart of the electrical response acquisition module;
Figure 4 is a flow chart of the convolutional neural network unit;
Figure 5 is a flow chart of the workflow of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings.
As shown in Figures 1-2, to recognize, for example, the Chinese characters "东" (east), "南" (south), "大" (big), and "学" (learn), the system for handwritten font recognition includes:
(1) Flexible wearable wristband
The flexible wearable wristband acquires wrist dynamics parameter information at three characteristic points during wrist bending motion through changes in sensor resistance. To achieve this, flexible pressure/deformation sensors are placed at the three characteristic points A, B, and C of the wristband to acquire the feature-point information and thereby the dynamic parameters characterizing wrist movement.
The flexible wearable wristband consists of a pressure/deformation sensor 2, a VHB double-sided hydrogel flexible base layer 1, and an insulating encapsulation layer 3, where the pressure/deformation sensor 2 is a three-dimensional cross-linked network of MXene and multi-walled carbon nanotubes (MWCNTs).
When the pressure/deformation sensor is deformed, for example by pressure, transverse cracks form in the three-dimensional cross-linked network of MXene and MWCNTs, which reduces the conductive paths and increases the conduction resistance; the degree of resistance increase is thereby used to characterize the dynamic state of the wrist.
The preparation of the flexible wearable wristband includes the following steps: cut the VHB double-sided hydrogel into a fixed cuboid 18 cm long, 1 cm wide, and 5 mm thick; use a laser engraving machine to engrave three feature points at fixed positions of a mask plate 18.2 cm long, 1.2 cm wide, and 6 mm thick, the three feature points corresponding to the two sides of the wrist and the middle of its underside, and then fix the mask on the VHB double-sided hydrogel flexible base layer; spray the prepared MXene and multi-walled carbon nanotube dispersion onto the mask, repeating 5-6 times before finishing; spray a PVP aqueous solution onto the other side of the VHB double-sided hydrogel flexible base layer as the encapsulation and electrical insulation layer; peel the mask off the VHB double-sided hydrogel flexible base layer and lead out wires at the three feature points as the conductive layer.
(2) Sensor signal acquisition and processing unit
The sensor signal acquisition and processing unit comprises an electrical response acquisition module that acquires the electrical signals sig_1, sig_2, and sig_3, and an electrical response processing module that processes them. As shown in Figure 3, when the AI chip detects through the analog-to-digital converter that any one of sig_1, sig_2, or sig_3 has changed, it controls the analog-to-digital converter to sample sig_1, sig_2, and sig_3 and store them in a buffer; acquisition ends once none of the three signals changes any further, and the total number of acquired points is W. The signals are then processed by the electrical response processing module, concatenated into a matrix, and output.
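This acquisition logic can be sketched conceptually as follows; `read_adc`, the change threshold, and the settling criterion are hypothetical stand-ins for details the patent leaves to the AI chip and the analog-to-digital converter, so this is an illustration rather than the device firmware.

```python
def acquire_window(read_adc, threshold=0.01, stable_samples=10):
    """Buffer samples from the first change on any channel until all three channels settle."""
    buffer, stable = [], 0
    previous = read_adc()                      # (sig_1, sig_2, sig_3)
    while True:
        current = read_adc()
        changed = any(abs(c - p) > threshold for c, p in zip(current, previous))
        if buffer or changed:                  # start buffering at the first detected change
            buffer.append(current)
            stable = 0 if changed else stable + 1
            if stable >= stable_samples:       # none of the three signals changes any more
                break
        previous = current
    return buffer                              # W = len(buffer) acquired points
```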
The electrical signal processing module uses the recorded initial voltage signal U_0 to convert each voltage signal acquired by the electrical response acquisition module into (U_i - U_0)/U_0, where U_i (i = 1, 2, ..., W) is the voltage signal acquired at the three characteristic points. The processed sig_1, sig_2, and sig_3 are concatenated into a two-dimensional matrix of dimension [W, 3], in which the first, second, and third channels of the second dimension are the processed sig_1, sig_2, and sig_3 signals, respectively. Finally, zeros are appended until the matrix is padded to a length of 2000 along its first (time) dimension.
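A minimal sketch of this processing step, assuming NumPy; the function name is an assumption, while the relative-change formula and the zero-padded [2000, 3] output follow the description above.

```python
import numpy as np

def process_signals(raw, u0, length=2000):
    """raw: [W, 3] voltages U_i at the three feature points; u0: initial voltages U_0."""
    relative = (np.asarray(raw, dtype=np.float32) - u0) / u0   # (U_i - U_0) / U_0 per channel
    padded = np.zeros((length, 3), dtype=np.float32)           # zero-padded [2000, 3] matrix
    w = min(relative.shape[0], length)
    padded[:w, :] = relative[:w, :]                            # columns: sig_1, sig_2, sig_3
    return padded
```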
(3) Convolutional neural network unit
To achieve a high recognition rate, the present invention relies on a convolutional neural network with a special structure. In this example, the network input is [2000, 3] and the softmax classification layer is a 4-dimensional vector, i.e., 4 output classes.
The structure of the convolutional neural network is as follows:
[2000, 3] two-dimensional tensor input -> convolutional layer -> ReLU -> pooling layer -> convolutional layer -> ReLU -> pooling layer -> convolutional layer -> ReLU -> pooling layer -> convolutional layer -> ReLU -> pooling layer -> fully connected layer 1 -> fully connected layer 2 -> softmax layer -> [4]-dimensional tensor output.
To further optimize the network structure, avoid overfitting, and speed up training, the convolutional layers use one-dimensional convolution kernels of length 3; the pooling layers use one-dimensional average pooling with a kernel length of 2; and dropout layers constrain the model, with dropout values of 0.9, 0.8, 0.7, and 0.6 for the first, second, third, and fourth convolutional layers, respectively, and 0.5 for fully connected layers 1 and 2.
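A minimal PyTorch sketch of this architecture is given below. Only the kernel length, pooling length, dropout values, layer sequence, and 4-way output come from the description; the channel widths and hidden size are assumptions, and the quoted dropout values are interpreted here as keep-probabilities (so nn.Dropout receives one minus the value).

```python
import torch
import torch.nn as nn

class HandwritingCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        keep = [0.9, 0.8, 0.7, 0.6]                   # per-conv-layer values from the text
        chans = [3, 16, 32, 64, 128]                  # assumed channel widths
        blocks = []
        for i in range(4):
            blocks += [
                nn.Conv1d(chans[i], chans[i + 1], kernel_size=3, padding=1),  # 1-D kernel of length 3
                nn.ReLU(),
                nn.AvgPool1d(kernel_size=2),          # 1-D average pooling, kernel length 2
                nn.Dropout(p=1.0 - keep[i]),
            ]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 125, 256), nn.ReLU(), nn.Dropout(0.5),  # fully connected layer 1
            nn.Linear(256, n_classes),                              # fully connected layer 2
        )

    def forward(self, x):                             # x: [batch, 3, 2000]
        return self.classifier(self.features(x))      # logits; softmax is applied in the loss
```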
During network training, cross-validation is used to check whether the model overfits. The specific procedure is as follows: the training data are divided into one or more data sets, part of the data is used to train the model and another part to verify its accuracy, and if the classification results differ greatly between the training set and the test set, overfitting has occurred. Here the samples in the database are divided into 5 sub-data sets; each sub-data set in turn serves as the test sample while the other 4 serve as training samples. This yields 5 classifiers and 5 test results, and the average of these 5 results is used to measure the performance of the proposed deeply decomposable convolutional neural classification network model.
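A sketch of this 5-fold check, assuming scikit-learn for the splits; `build_model`, `train_fold`, and `evaluate` are placeholders for the training and evaluation code sketched earlier.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(sources, targets, build_model, train_fold, evaluate, n_splits=5):
    """Train on 4 of the 5 sub-data sets, test on the held-out one, average the 5 scores."""
    scores = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(sources):
        model = build_model()
        train_fold(model, sources[train_idx], targets[train_idx])
        scores.append(evaluate(model, sources[test_idx], targets[test_idx]))
    # A large gap between training and held-out accuracy in any fold indicates overfitting.
    return float(np.mean(scores))
```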
As shown in Figure 4, the workflow of the convolutional neural network unit mainly includes the following steps: have different people write the four characters "东", "南", "大", and "学" with different pen-holding postures, different forces, and different writing styles to build the data set used for training the neural network. The data format is (source, target), where source has dimension [2000, 3] and target is the y value of the supervised learning process; all target classes are encoded in one-hot format, and the one-hot-encoded format of target is shown in Table 1 below:
Table 1. One-hot-encoded format of target
Construct the structure of the convolutional neural network model, set the number of output units of the neural network to 4, and pre-train with a network data set to initialize the weight parameters. Input the constructed data set into the constructed convolutional neural network and train it to optimize the weight parameters. Cross-validation is used to check whether the model overfits, and back-propagation training is performed on the network error until the error no longer decreases for 5 consecutive training epochs, thereby obtaining the optimal handwriting discrimination model. Finally, the signal acquired and processed by the analog-to-digital conversion module under control of the AI chip is input into the optimal handwriting convolutional neural network discrimination model trained in step 3, and the discrimination result is output.
(4) Integrated circuit design module
The integrated circuit design module mainly supplies power to the flexible wearable wristband, controls the analog-to-digital conversion module to sample the three feature-point signals, and uses the AI chip to process the characteristic electrical signals and to run the neural network.
As shown in Figure 5, the workflow of the present invention is:
1) Database construction: the sensor signal acquisition and processing unit acquires the output signals sig_1, sig_2, and sig_3 of the flexible wearable wristband under a specific gesture and processes them into a [2000, 3] matrix that serves as the source of a database entry, with the corresponding handwritten character as the target. A certain number of (source, target) pairs are collected to build the database, with each target class collected 50 times.
2) Using the data in the database unit, the constructed pre-trained convolutional neural network is trained by transfer learning (see the sketch after this list), and cross-validation is used to check whether the model overfits.
3) The AI chip acquires the electrical output signals sig_1, sig_2, and sig_3 from the flexible pressure sensor array via the analog-to-digital conversion module of the dynamic signal acquisition and processing unit, processes them, and then passes the data to the trained neural network discrimination unit for recognition by pressure matching.
4) The value output by the neural network, after one-hot inverse decoding, is the recognized handwritten character, as sketched below.
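For step 2), a transfer-learning sketch under stated assumptions: the pre-trained backbone of the HandwritingCNN sketch above is reused and only the final fully connected layer is replaced to match the number of target characters; freezing the backbone is an optional design choice, not something the patent prescribes.

```python
import torch.nn as nn

def adapt_for_new_task(pretrained, n_new_classes, freeze_backbone=True):
    """Reuse the pre-trained feature extractor; re-initialize only the output layer."""
    if freeze_backbone:                                   # optional: keep pre-trained conv weights fixed
        for p in pretrained.features.parameters():
            p.requires_grad = False
    in_features = pretrained.classifier[-1].in_features
    pretrained.classifier[-1] = nn.Linear(in_features, n_new_classes)  # new softmax head
    return pretrained
```

For step 4), an end-to-end inference sketch; it reuses the hypothetical helpers from the earlier sketches (`process_signals`, the assumed character ordering) and takes the argmax of the network output as the inverse of the one-hot encoding.

```python
import torch

def recognize(model, raw_signals, u0, classes=("东", "南", "大", "学")):
    """raw_signals: [W, 3] acquired voltages; returns the recognized character."""
    x = torch.from_numpy(process_signals(raw_signals, u0)).T.unsqueeze(0)  # [1, 3, 2000]
    model.eval()
    with torch.no_grad():
        logits = model(x)
    return classes[int(logits.argmax(dim=1))]   # index of the maximum = one-hot inverse decoding
```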
Claims (4)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910598723.9A CN110414633B (en) | 2019-07-04 | 2019-07-04 | System and recognition method for handwritten font recognition |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910598723.9A CN110414633B (en) | 2019-07-04 | 2019-07-04 | System and recognition method for handwritten font recognition |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN110414633A CN110414633A (en) | 2019-11-05 |
| CN110414633B true CN110414633B (en) | 2022-10-14 |
Family
ID=68360284
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201910598723.9A Active CN110414633B (en) | 2019-07-04 | 2019-07-04 | System and recognition method for handwritten font recognition |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN110414633B (en) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110864828B (en) * | 2019-11-08 | 2021-05-28 | 五邑大学 | A kind of preparation method of silver nanowire/MXene flexible stress sensor |
| CN113903043B (en) * | 2021-12-11 | 2022-05-06 | 绵阳职业技术学院 | Method for identifying printed Chinese character font based on twin metric model |
| CN114519374B (en) * | 2022-02-15 | 2025-02-07 | 浙江工业大学 | A handwritten letter signal recognition system and method based on flexible sensor |
| CN116597458B (en) * | 2023-07-14 | 2023-09-08 | 厦门达宸信教育科技有限公司 | Handwritten letter recognition method, system and application |
| CN118602925A (en) * | 2024-08-08 | 2024-09-06 | 山东科技大学 | Flexible strain sensor based on multi-component solid powder mixing and preparation method thereof |
Family Cites Families (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN102812541B (en) * | 2011-03-24 | 2016-02-03 | 松下知识产权经营株式会社 | The image display device of flexible semiconductor device and manufacture method and use flexible semiconductor device and manufacture method thereof |
| CN106648050A (en) * | 2016-09-20 | 2017-05-10 | 浙江理工大学 | Multimedia computer gesture control system and control method based on flexible electronic skin |
| CN107085730A (en) * | 2017-03-24 | 2017-08-22 | 深圳爱拼信息科技有限公司 | A kind of deep learning method and device of character identifying code identification |
| CN107644006B (en) * | 2017-09-29 | 2020-04-03 | 北京大学 | An automatic generation method of handwritten Chinese character library based on deep neural network |
| CN109549649A (en) * | 2018-11-19 | 2019-04-02 | 东南大学 | A kind of movable wearable device of detection neck |
| CN109753566B (en) * | 2019-01-09 | 2020-11-24 | 大连民族大学 | Model training method for cross-domain sentiment analysis based on convolutional neural network |
- 2019-07-04: CN application CN201910598723.9A, granted as patent CN110414633B (active)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP3142545A1 (en) * | 2014-05-15 | 2017-03-22 | Nokia Technologies OY | An apparatus, method and computer program for a wearable device |
| CN108428566A (en) * | 2018-01-23 | 2018-08-21 | 浙江工业大学 | A kind of high efficiency preparation method of the planar miniature electrode of super capacitor of interdigital structure |
| CN108267078A (en) * | 2018-03-18 | 2018-07-10 | 吉林大学 | A kind of flexible wearable resistance strain and preparation method thereof |
| CN109238522A (en) * | 2018-09-21 | 2019-01-18 | 南开大学 | A kind of wearable flexibility stress sensor and its preparation method and application |
Non-Patent Citations (2)
| Title |
|---|
| "CapBand: Battery-free Successive Capacitance Sensing Wristband for Hand Gesture Recognition";Hoang Truong等;《SenSys "18: Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems》;20181104;全文 * |
| "应用于可穿戴电子设备的柔性拉伸电路的制备";汪明月等;《中国力学大会-2017暨庆祝中国力学学会成立60周年大会论文集(C)》;20170831;全文 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN110414633A (en) | 2019-11-05 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110414633B (en) | System and recognition method for handwritten font recognition | |
| CN113326764B (en) | Method and device for training image recognition model and image recognition | |
| Abraham et al. | Real-time translation of Indian sign language using LSTM | |
| CN115620312B (en) | Cross-modal character handwriting verification method, system, equipment and storage medium | |
| CN108805222A (en) | A kind of deep learning digital handwriting body recognition methods based on ARM platforms | |
| Asthana et al. | Handwritten multiscript numeral recognition using artificial neural networks | |
| CN114220178A (en) | Signature identification system and method based on channel attention mechanism | |
| Kaluri et al. | A framework for sign gesture recognition using improved genetic algorithm and adaptive filter | |
| CN111753802A (en) | Identification method and device | |
| CN114255371B (en) | A small sample image classification method based on component-supervised network | |
| Kola et al. | Facial expression recognition using singular values and wavelet‐based LGC‐HD operator | |
| CN112926462B (en) | Training method and device, action recognition method and device and electronic equipment | |
| CN116563862A (en) | A Number Recognition Method Based on Convolutional Neural Network | |
| Hemanth et al. | CNN-RNN BASED HANDWRITTEN TEXT RECOGNITION. | |
| Rawf et al. | Effective Kurdish sign language detection and classification using convolutional neural networks | |
| CN118354249B (en) | Earphone pressure sensing control method and medium based on accurate point contact | |
| Uzair et al. | Automated Netlist generation from offline hand-drawn circuit diagrams | |
| Arnia et al. | Moment invariant-based features for Jawi character recognition | |
| CN112507863B (en) | Handwritten character and picture classification method based on quantum Grover algorithm | |
| Yogesh et al. | Artificial intelligence based handwriting digit recognition (hdr)-a technical review | |
| CN108960275A (en) | A kind of image-recognizing method and system based on depth Boltzmann machine | |
| CN118210493A (en) | Low-code platform development method and system based on image recognition | |
| WO2024103997A1 (en) | Handwriting recognition method and handwriting recognition model training method and apparatus | |
| CN115311664A (en) | Method, device, medium and equipment for identifying text type in image | |
| CN111046951A (en) | Medical image classification method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |