CN102194118B - Method and device for extracting information from image - Google Patents
- Publication number: CN102194118B
- Application: CN201010117062A
- Authority: CN (China)
- Prior art keywords: region, frame, area, determining, dialogue
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a method and device for extracting information from an image, which are used for improving the extraction efficiency of the dialogue frame and the dialogue text. The method comprises the following steps: performing background color connected region detection on a region to be selected in the image, and determining at least one background color connected region as a dialogue frame candidate region; determining a dialogue frame region from the dialogue frame candidate regions according to the feature information of each dialogue frame candidate region; gradually expanding the boundary of the dialogue frame region, and determining the frame boundary of the dialogue frame region; and determining the frame boundary and the region contained by the frame boundary as a complete dialogue frame region, and extracting the region corresponding to the complete dialogue frame region from the image to obtain the dialogue frame.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for extracting information from an image.
Background
With the widespread use of mobile terminals such as cell phones, more and more users are reading on mobile terminals. The display screen of a mobile terminal is generally small, so when a cartoon stored on a computer is transplanted to a mobile phone, the whole cartoon needs to be reduced; however, reducing the whole image can distort the dialogue in the dialogue frames of the cartoon or even make it unrecognizable. Therefore the dialogue frame and the dialogue inside it need to be extracted first and given special processing, so that the dialogue in the reduced image remains clear and readable. In addition, when a foreign-language edition of a cartoon book is published, the original dialogue text needs to be replaced. Therefore, for cartoon images, extraction of the dialogue frame and the dialogue text is necessary.
In the prior art, the dialogue text and the dialogue frame are generally extracted from the cartoon image manually, by means of a graphics processing tool such as Photoshop, and the extracted dialogue frame is manually filled and repaired. This method is inefficient and, owing to human factors, the integrity and accuracy of the extracted dialogue frame and dialogue text are not high.
Disclosure of Invention
The embodiments of the invention provide a method and a device for extracting information from an image, which are used for improving the extraction efficiency of the dialogue frame and the dialogue text.
The embodiment of the invention provides a method for extracting information from an image, which comprises the following steps:
performing background color connected domain detection on the region to be selected in the image, and determining at least one background color connected region as a dialogue frame candidate region;
determining a dialogue frame region from the dialogue frame candidate regions according to the characteristic information of each dialogue frame candidate region;
performing background color connected region detection according to the values corresponding to the pixel points in the region to be selected before the morphological closing operation is performed, and acquiring the initial background color connected region corresponding to the dialogue frame region; the value corresponding to each pixel point in the region to be selected is obtained by performing binarization processing on the image of the region to be selected;
performing a difference operation on the initial background color connected region and the dialogue frame region to obtain at least one connected region;
determining a first threshold according to the area of the dialogue frame region, comparing the area of each connected domain with the first threshold and a preset second threshold, and, when the area of a connected domain is smaller than or equal to the first threshold and larger than or equal to the second threshold, determining that the connected domain is a lost sharp-corner region of the dialogue frame and supplementing the sharp-corner region to the dialogue frame region to obtain a corrected dialogue frame region;
gradually expanding the boundary of the corrected dialogue frame region according to a set number of expansions of the boundary, and determining the frame boundary of the corrected dialogue frame region;
and determining the frame boundary and the region contained by the frame boundary as a complete dialogue frame region, and extracting the region corresponding to the complete dialogue frame region in the image to obtain the dialogue frame.
The embodiment of the invention provides a device for extracting information from an image, which comprises:
a candidate region determining unit, configured to perform background color connected region detection on the region to be selected in the image, and determine at least one background color connected region as a dialogue frame candidate region;
the candidate region determining unit comprises an obtaining subunit, configured to perform binarization processing on the image of the region to be selected and obtain the value corresponding to each pixel point in the region to be selected;
a dialogue frame region determining unit, configured to determine a dialogue frame region from the dialogue frame candidate regions according to the feature information of each dialogue frame candidate region;
the dialogue frame region determining unit further comprises an acquiring subunit, a difference subunit and a supplementing subunit:
the acquiring subunit is configured to perform background color connected domain detection according to the values corresponding to the pixel points in the region to be selected before the morphological closing operation is performed, and acquire the initial background color connected domain corresponding to the dialogue frame region;
the difference subunit is configured to perform a difference operation on the initial background color connected region and the dialogue frame region to obtain at least one connected region;
the supplementing subunit is configured to determine a first threshold according to the area of the dialogue frame region, compare the area of each connected domain with the first threshold and a preset second threshold, determine that a connected domain is a lost sharp-corner region of the dialogue frame when its area is smaller than or equal to the first threshold and greater than or equal to the second threshold, and supplement the sharp-corner region to the dialogue frame region to obtain a corrected dialogue frame region;
a frame boundary determining unit, configured to gradually expand the boundary of the corrected dialogue frame region according to a set number of expansions of the boundary, and determine the frame boundary of the corrected dialogue frame region;
and a dialogue frame extraction unit, configured to determine the frame boundary and the region contained by the frame boundary as a complete dialogue frame region, and extract the region corresponding to the complete dialogue frame region in the image to obtain the dialogue frame.
In the embodiments of the invention, background color connected region detection is performed on the region to be selected in an image, at least one background color connected region is determined as a dialogue frame candidate region, a dialogue frame region is determined from the candidate regions according to the characteristic information of each candidate region, the boundary of the dialogue frame region is gradually expanded to determine its frame boundary, the frame boundary and the region it contains are determined as a complete dialogue frame region, and the region corresponding to the complete dialogue frame region in the image is extracted. The dialogue frame is thus detected automatically in the image, and the extraction efficiency is improved.
Drawings
FIG. 1 is a flow chart of extracting information from an image according to an embodiment of the present invention;
FIG. 2 is a graph showing the variation of the frame proportion with the number of expansions according to the embodiment of the present invention;
FIG. 3 is a flow chart of extracting information from an image according to another embodiment of the present invention;
FIG. 4 is a schematic diagram of an image according to a first embodiment of the present invention;
FIG. 5 is a flowchart illustrating a process of extracting information from an image according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating dialogue text extracted from an image according to the first embodiment of the present invention;
FIG. 7 is a diagram of a dialogue frame after color filling according to the first embodiment of the present invention;
FIG. 8 is a block diagram of an apparatus for extracting information from an image according to an embodiment of the present invention;
FIG. 9 is a block diagram of an apparatus for extracting information from an image according to another embodiment of the present invention.
Detailed Description
An embodiment of the present invention provides a method for extracting information from an image, where the information includes a dialogue frame, or dialogue text. Referring to fig. 1, the process of extracting information from an image according to an embodiment of the present invention includes:
step 101: and performing background ground color connected domain detection on the to-be-selected region in the image, and determining at least one background ground color connected region as a dialogue frame candidate region.
In the embodiment of the invention, the image may be a gray-scale image or a color image, and the background color of the dialogue frame in the image may be bright or dark. For example, in a color image the background color of the dialogue frame may be one or a combination of bright colors such as white, yellow and blue, or one or a combination of dark colors such as black, purple and red. In a gray-scale image, the background color of the dialogue frame may be white or black.
Thus, determining the dialogue frame candidate regions includes:
first, selecting an area from the image as the region to be selected; then performing binarization processing on the image of the region to be selected to obtain the value corresponding to each pixel point in the region to be selected; performing background color connected domain detection according to the values corresponding to the pixel points to obtain one or more binarized background color connected regions; and finally selecting, from the obtained binarized background color connected regions, a set number of those with larger areas as the dialogue frame candidate regions. In this embodiment, a Difference of Gaussian (DoG) filter operator may be used for the binarization processing.
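As an illustration of this step, a minimal sketch in Python with OpenCV follows; it is not the patented implementation. It substitutes a global OTSU binarization for the DoG filtering named above (an assumption made for brevity), and the function name, the candidate count of 10 and the 8-connectivity are likewise illustrative choices.

```python
import cv2
import numpy as np

def dialogue_frame_candidates(region_bgr, num_candidates=10):
    # Grayscale, then binarize; OTSU stands in here for the DoG operator.
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Detect white (bright-background-color) connected regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    # Sort components (label 0 is the zero-pixel background) by area and
    # keep the set number with the largest areas as candidate regions.
    areas = stats[1:, cv2.CC_STAT_AREA]
    keep = np.argsort(areas)[::-1][:num_candidates]
    return [labels == (i + 1) for i in keep]  # one boolean mask per candidate
```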
In addition, when the image is a color image, grayscale processing is performed on the image of the region to be selected before the binarization processing, and the binarization processing is then applied to the grayscale image.
In this way, after binarization of a color or grayscale image, the background color of a dialogue frame in the image is either white or black: when the background color of the image is bright, the corresponding binarized background color is white, and when it is dark, the corresponding binarized background color is black. Moreover, the background color of the dialogue frame, the color of the dialogue text, and the color of the dialogue frame border are each consistent and uniform.
In this embodiment of the present invention, determining the dialogue frame candidate regions may further include: selecting an area from the image as the region to be selected, performing binarization processing on the image of that region, performing a morphological closing operation on the non-background-color area in the region, then performing background color connected region detection to obtain one or more binarized background color connected regions, and finally selecting, from the obtained regions, a set number of those with larger areas as the dialogue frame candidate regions. Here, the morphological closing operation may be performed once with a 5 × 5 square operator, i.e., expansion followed by erosion, so that small breaks in the dialogue frame border are bridged, preparing for the connected domain detection. That is, after selecting the region to be selected and binarizing its image, the method further includes: when the background color is bright, the corresponding binarized background color is white, and the morphological closing operation is performed on the black area in the region to be selected; when the background color is dark, the corresponding binarized background color is black, and the morphological closing operation is performed on the white area in the region to be selected.
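The closing step can be sketched as follows, assuming a binarized region with a bright (white) background color; the 5 × 5 square operator and the single iteration come from the text, while the variable names are illustrative.

```python
import cv2
import numpy as np

# `binary` is the binarized region to be selected (255 = background color).
kernel = np.ones((5, 5), np.uint8)           # 5x5 square operator
strokes = cv2.bitwise_not(binary)            # non-background (black) areas as foreground
closed = cv2.morphologyEx(strokes, cv2.MORPH_CLOSE, kernel, iterations=1)  # dilate, then erode
binary_closed = cv2.bitwise_not(closed)      # back to the white-background convention
# background color connected domain detection then runs on binary_closed
```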
Selecting an area from the image as the region to be selected may be done manually, or a certain portion of the image, for example the upper-right portion, may be taken as the region to be selected directly according to experience. Thus, the region to be selected may be the entire image or a portion of it.
After connected domain detection is performed on the region to be selected, if several binarized background color connected regions are obtained, a set number of those with larger areas can be selected as the dialogue frame candidate regions. If only one binarized background color connected region is obtained, it is directly used as the dialogue frame candidate region.
Selecting a set number of binarized background color connected regions from those obtained as the dialogue frame candidate regions includes: comparing the areas of the binarized background color connected regions, and determining the set number of regions with larger areas as the dialogue frame candidate regions. For example, if 20 binarized background color connected regions are obtained, the 10 with the largest areas are used as the dialogue frame candidate regions.
Of course, in the embodiment of the present invention, a set number of binarized background color connected regions may also be selected arbitrarily from those obtained as the dialogue frame candidate regions.
Step 102: and determining the characteristic parameters corresponding to each dialogue frame candidate area according to the characteristic information of each dialogue frame candidate area.
Each dialogue frame candidate region, i.e., each binarized background color connected region, has its own characteristic information, which includes: area, center, convexity-concavity, and symmetry.
Therefore, the characteristic parameter corresponding to each dialogue frame candidate region can be determined using formula [1], from the characteristic information of the candidate region and that of the region to be selected. The characteristic parameter represents the features of the dialogue frame candidate region in terms of area, center position and shape.
In formula [1], T is the characteristic parameter corresponding to the dialogue frame candidate region; α is the ratio of the area of the candidate region to that of the region to be selected; d is the distance between the center of the candidate region and the center of the region to be selected; Ω is the convexity-concavity of the candidate region; λ is the symmetry of the candidate region; and n1, n2 and Δ are regulating factors, where n1 and n2 adjust the weights of α and d respectively, and Δ handles special cases such as d = 0 or λ = 0.
Of course, determining the characteristic parameter by formula [1] is only one specific example; a characteristic parameter representing the center position and shape of the dialogue frame candidate region may be obtained from the characteristic information by other calculation methods. For example, in another formula the characteristic parameter T may be inversely related to the distance d and the convexity-concavity Ω.
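Since formula [1] itself is not reproduced in this text, the sketch below only computes the individual quantities it combines, under stated assumptions: solidity (area over convex-hull area) stands in for the convexity-concavity Ω, and left-right mirror overlap stands in for the symmetry λ; both are plausible readings, not the patent's definitions.

```python
import cv2
import numpy as np

def candidate_features(mask, region_shape):
    h, w = region_shape
    area = float(mask.sum())
    alpha = area / (h * w)                       # area ratio to the region to be selected
    ys, xs = np.nonzero(mask)
    d = np.hypot(ys.mean() - h / 2.0, xs.mean() - w / 2.0)  # center-to-center distance
    # Assumed stand-in for the convexity-concavity: area / convex hull area.
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hull = cv2.convexHull(np.vstack(contours))
    omega = area / max(cv2.contourArea(hull), 1.0)
    # Assumed stand-in for the symmetry: overlap with the horizontal mirror image.
    lam = (mask & mask[:, ::-1]).sum() / max(area, 1.0)
    return alpha, d, omega, lam
```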
Step 103: and determining the dialogue frame area from the dialogue frame candidate area according to the characteristic parameter corresponding to each dialogue frame candidate area.
Here, when the characteristic parameter corresponding to a dialogue frame candidate region satisfies the set condition, that candidate region is determined to be the dialogue frame region.
If the characteristic parameter T is obtained according to formula [1], then the smaller T is, the closer the center of the corresponding candidate region is to the center of the region to be selected and the better the shape symmetry of the candidate region, so the candidate region with the minimum characteristic parameter is determined as the dialogue frame region.
If the characteristic parameter T is inversely proportional to d and Ω, then the larger T is, the closer the center of the corresponding candidate region is to the center of the region to be selected and the better the shape symmetry of the candidate region, so the candidate region with the maximum characteristic parameter is determined as the dialogue frame region.
If the morphological closing operation was performed before the background color connected domain detection in step 101, i.e., if the dialogue frame candidate regions were obtained after the morphological closing operation, the dialogue frame region determined in this step may have lost some sharp-corner regions; each determined dialogue frame region therefore needs to be corrected, as follows:
performing background color connected region detection according to the values corresponding to the pixel points in the region to be selected before the morphological closing operation, and acquiring the initial background color connected region corresponding to the dialogue frame region; then performing a difference operation between the initial background color connected region and the corresponding dialogue frame region to obtain at least one connected region; determining a first threshold according to the area of the dialogue frame region, and comparing the area of each connected region with the first threshold and a preset second threshold; and, when the area of a connected domain is smaller than or equal to the first threshold and larger than or equal to the second threshold, determining that the connected domain is a lost sharp-corner region of the dialogue frame and supplementing it to the dialogue frame region to obtain the corrected dialogue frame region.
In the embodiment of the present invention, the first threshold is λ × S, where λ is a weight value and S is the area of the dialogue frame region, and the preset second threshold is l. When l ≤ A ≤ λ × S, where A is the area of a connected domain, the connected domain is determined to be a lost sharp-corner region of the dialogue frame. Typically l may be 10, or another empirical value.
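A minimal sketch of this correction, assuming boolean masks and using scipy's connected-component labeling; the weight value lam = 0.05 is an assumed choice, since the text fixes only l = 10 as a typical value.

```python
import numpy as np
from scipy import ndimage

def recover_sharp_corners(initial_region, frame_region, lam=0.05, l=10):
    S = frame_region.sum()                  # area S of the dialogue frame region
    diff = initial_region & ~frame_region   # difference operation on the two regions
    labels, n = ndimage.label(diff)         # connected domains of the difference
    corrected = frame_region.copy()
    for i in range(1, n + 1):
        piece = labels == i
        A = piece.sum()                     # area A of this connected domain
        if l <= A <= lam * S:               # l <= A <= lambda x S: a lost sharp corner
            corrected |= piece              # supplement it to the frame region
    return corrected
```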
Step 104: and gradually expanding the determined boundary of the pair of white frame areas, and determining the frame boundary of the pair of white frame areas.
After the white frame area is determined, only one background color connected area after binarization is still available, and no frame exists, so that the frame of the white frame area needs to be determined.
Here, the number of times of the boundary expansion for the white frame region may be set, for example: and 2, determining the newly added expanded pixel points as the border of the pair of white frame areas.
Alternatively, gradually expanding the boundary of the dialogue frame region and determining its frame boundary according to the gray values of the pixel points newly added after each expansion includes:
expanding the boundary of the dialogue frame region a set number of times, and determining the gray threshold corresponding to the dialogue frame region according to the gray values of all the pixel points newly added over those expansions; and then determining the frame boundary of the dialogue frame region according to the frame proportion after each expansion of the boundary, where, when the background color is bright, the frame proportion is the proportion of the newly added pixel points whose gray values are smaller than the gray threshold corresponding to the dialogue frame region, and, when the background color is dark, the frame proportion is the proportion of the newly added pixel points whose gray values are larger than that gray threshold. In this embodiment, a 3 × 3 square expansion operator may be used.
Step 105: and taking the determined border of the frame and the area contained by the border of the frame as a complete area of the dialogue frame.
The border boundary of the dialog box region is already determined in step 104, and then the region composed of the dialog box region and its corresponding border boundary is the dialog box complete region. Namely, the dialogue box complete area comprises: white frame area and border.
Step 106: and extracting the dialogue frame from the area corresponding to the whole dialogue frame area in the image.
In the embodiment of the invention, after the determined complete dialogue frame area in the image subjected to the binarization processing, the corresponding position of the image is extracted to obtain the dialogue frame, that is, the area corresponding to the complete dialogue frame area in the image is extracted to obtain the dialogue frame, so that the information required by the embodiment of the invention is obtained.
In step 104 the gray threshold corresponding to the dialogue frame region must be determined. In this embodiment of the invention, determining the gray threshold for the dialogue frame region includes:
determining all the pixel points newly added over the set number of expansions as an estimation region, counting the gray histogram of the estimation region, and taking the resulting OTSU threshold as the gray threshold. That is, according to the gray values of all the pixel points newly added over the set number of expansions, the gray histogram of the estimation region is counted and the resulting OTSU threshold is used as the gray threshold.
In the embodiment of the invention, a maximum border width W_B is preset; the boundary of the dialogue frame region is then expanded by W_B pixels, i.e., expanded W_B times, to give the maximum border boundary. The gray histogram of the band-shaped region between the boundary of the dialogue frame region and the maximum border boundary is counted, and the OTSU threshold calculated from it is taken as the gray threshold T_B of the dialogue frame border. The maximum border width can be set according to the characteristics of the image to be processed; for example, W_B may be set to 10 pixels.
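One way to realize this estimate, sketched under the assumption that the dialogue frame region is a boolean mask and that the 3 × 3 square expansion operator mentioned above is used to grow it W_B times; cv2.threshold with the OTSU flag returns the computed threshold as its first value.

```python
import cv2
import numpy as np

def border_gray_threshold(gray, frame_region, w_b=10):
    kernel = np.ones((3, 3), np.uint8)
    # Expand the boundary W_B times to obtain the maximum border boundary.
    grown = cv2.dilate(frame_region.astype(np.uint8), kernel, iterations=w_b)
    band = (grown > 0) & ~frame_region      # band-shaped region between the two boundaries
    # Run OTSU on the gray values inside the band; the return value is T_B.
    t_b, _ = cv2.threshold(gray[band].reshape(-1, 1), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return t_b
```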
After the gray threshold T_B of the dialogue frame border is determined, determining the frame boundary of the dialogue frame region may specifically include:
in the process of gradually expanding the boundary of the dialogue frame region, the proportion of the pixel points newly added after each expansion whose gray values are smaller than the gray threshold T_B of the dialogue frame border, namely the frame proportion, determines the position of the outer boundary of the dialogue frame.
In the embodiment of the invention, each time an expansion is performed, the frame proportion is calculated according to formula [2]:
ρ_i = N_i^B / N_i ............ [2]
where N_i is the number of pixel points newly added at the i-th expansion; when the background color is bright, N_i^B is the number of those N_i points whose gray value is smaller than T_B, and, when the background color is dark, N_i^B is the number of those N_i points whose gray value is larger than T_B, i.e., N_i^B is the number of pixel points on the frame border; ρ_i is the frame proportion.
After each expansion, the frame proportion is compared with a set threshold: if the frame proportion after the current expansion is smaller than the set threshold, the pixel points newly added at the current expansion are determined as the frame boundary of the dialogue frame region; otherwise, it is judged whether the current expansion is the last of the set number; if so, the pixel points newly added at that expansion are determined as the frame boundary of the dialogue frame region; if not, the frame boundary continues to be determined from the frame proportion after the next expansion.
As the number of expansions i increases, ρ_i gradually decreases; it decreases fastest from the initial expansion up to the frame border, and once the outer boundary of the dialogue frame is crossed the decrease of ρ_i slows down. A suitable threshold T_ρ is therefore selected: when ρ_i < T_ρ, the difference between successive values of ρ_i after each expansion is close to zero, i.e., the decrease of ρ_i has slowed, so the number of expansions i at which ρ_i < T_ρ can be determined, and the pixel points newly added at that expansion are determined as the frame boundary of the dialogue frame region.
In general, the trend of ρ_i against the number of expansions i is as shown in FIG. 2: the falling speed of ρ_i begins to slow when ρ_i is less than 0.6, so the threshold T_ρ may be set to 0.6. In that case the corresponding i is 6, and the differences between ρ_6 and ρ_7 and between ρ_8 and ρ_9 are close to zero, so the frame border is judged to have been reached after 6 expansions, and the pixel points newly added at the 6th expansion are determined as the frame boundary of the dialogue frame region. Of course, the set threshold may be another empirical value.
In addition, during expansion ρ_i may behave as follows under the influence of noise: (1) when there is a large amount of random noise outside the frame boundary of the dialogue frame region, ρ_i may oscillate; (2) when a large non-background-color area exists outside the border of the dialogue frame region, ρ_i may increase. There may therefore be cases where the frame proportion never falls below the set threshold, i.e., ρ_i < T_ρ never occurs; in that case it is judged whether the current expansion is the last of the set number and, if so, the pixel points newly added at that expansion are determined as the frame boundary of the dialogue frame region.
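The expansion loop and its stopping rule can be sketched as follows, assuming a bright background (border pixels darker than T_B); returning the accumulated band between the original boundary and the stopping boundary is one reading of "the newly added pixel points", and T_ρ = 0.6 follows the discussion above.

```python
import cv2
import numpy as np

def find_frame_boundary(gray, frame_region, t_b, t_rho=0.6, max_expand=10):
    kernel = np.ones((3, 3), np.uint8)     # 3x3 square expansion operator
    current = frame_region.astype(np.uint8)
    for i in range(1, max_expand + 1):
        grown = cv2.dilate(current, kernel)
        new_pixels = (grown > 0) & (current == 0)    # the N_i newly added points
        n_i = int(new_pixels.sum())
        n_i_b = int((gray[new_pixels] < t_b).sum())  # points darker than T_B
        rho_i = n_i_b / max(n_i, 1)                  # frame proportion, formula [2]
        if rho_i < t_rho or i == max_expand:
            # Stop: take the band grown so far as the frame boundary.
            return (grown > 0) & ~frame_region
        current = grown
```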
In the above embodiment, the dialogue frame extracted from the image contains the dialogue text. In the embodiment of the present invention, the dialogue text may also be extracted from the image, as may a dialogue frame that does not contain the dialogue text. Referring to fig. 3, the method includes:
step 301: and performing background ground color connected domain detection on the to-be-selected region in the image, and determining at least one background ground color connected region as a dialogue frame candidate region.
Step 302: and determining the characteristic parameters corresponding to each dialogue frame candidate area according to the characteristic information of each dialogue frame candidate area.
Step 303: and determining the dialogue frame area from the dialogue frame candidate area according to the characteristic parameter corresponding to each dialogue frame candidate area.
Step 304: and gradually expanding the determined boundary of the white frame area, and determining the border boundary of the white frame area.
Step 305: and taking the determined border of the frame and the area contained by the border of the frame as a complete area of the dialogue frame. The dialogue box complete area includes: white frame area and border.
In the embodiment of the present invention, the specific processes of steps 301 to 305 are the same as the specific processes of step 101 to step 105 in the above embodiment, and thus, the description is not repeated.
Step 306: and carrying out gray segmentation on the complete area of the dialogue frame, and extracting the dialogue characters in the dialogue frame.
Gray segmentation is performed on the complete dialogue frame region using an OTSU threshold, and the gray value range of the text in the complete dialogue frame region is determined; the positions of the pixel points within the complete dialogue frame region that fall in that gray value range are then determined, and the pixel points at the corresponding positions in the image are extracted as the dialogue text. That is, after the pixel points of the dialogue text are determined in the binarized image, the corresponding positions in the original image are extracted.
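A sketch of this segmentation step, assuming a bright background so that text pixels are the dark side of the OTSU split:

```python
import cv2
import numpy as np

def extract_dialogue_text(gray, complete_region):
    # OTSU over the gray values inside the complete dialogue frame region.
    t_otsu, _ = cv2.threshold(gray[complete_region].reshape(-1, 1), 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Pixel points whose gray values fall in the text range are the dialogue text.
    return complete_region & (gray < t_otsu)
```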
Thus, the embodiment of the invention realizes the extraction of the dialogue text from the image. After the dialogue text is extracted, the area corresponding to the dialogue text in the image can be color-filled, the filling color being the average color value of the rest of the area corresponding to the complete dialogue frame region in the image. The color-filled dialogue frame can then be extracted.
The embodiments of the present invention will be described in further detail with reference to the drawings attached hereto.
In the first embodiment, the image is a cartoon image. In a typical cartoon image there is a large gray-level difference between the background of the dialogue frame and its border, and between the background of the dialogue frame and the dialogue text; moreover, the color of the dialogue text, the background color of the dialogue frame and the color of the dialogue frame border are each consistent and uniform. Here, as shown in fig. 4, the background color of the dialogue frame is bright and the border of the dialogue frame is dark, so the binarized background color is white. The process of extracting the dialogue frame and the dialogue text from the cartoon image, shown in fig. 5, includes:
step 501: and performing white connected domain detection on the area to be selected in the cartoon to determine two opposite white frame candidate areas.
Here, the candidate area is subjected to white connected component detection as in the area 410 shown in fig. 4, and two binarized white frame candidate areas at positions corresponding to the area 411 and the area 412 are determined.
Step 502: the feature parameters of the two binarized dialog box candidate regions are calculated according to the formula [1], and since the feature parameters of the dialog box candidate region corresponding to the region 411 are smaller than the feature parameters of the dialog box candidate region corresponding to the region 412, the dialog box candidate region corresponding to the region 411 is determined to be the dialog box region.
If the black area in the area corresponding to the area 410 is expanded when the white connected component is detected, here, the white frame candidate area corresponding to the area 411 needs to be corrected, and the corrected white frame candidate area is determined as the white frame candidate area.
Step 503: the border of the pair of white box areas corresponding to the area 411 is gradually expanded, and the border of the pair of white box areas is determined, wherein the border corresponds to the position of the border 4111 in fig. 4. .
Here, WBSetting 10 pixels, namely expanding the boundary of the area by 10 pixels to be used as the maximum boundary of the frame; counting a gray level histogram of a banded region between the boundary of the white frame region and the maximum boundary of the frame, and calculating to obtain an OTSU threshold as a gray level threshold T of the white frame boundaryB。
Once per expansion according to the formula [2]]Calculated bounding box ratio ρi. Wherein,represents NiMiddle gray value less than TBThe number of the pixel points, and a set threshold value TρWhen p is 0.6iAt < 0.6, ρiWith each p afteriIs close to zero, i.e. when piThe descending speed begins to tend to be slow, so that the pixel point newly added at the 6 th expansion time is determined as the border of the frame.
Of course, if the embodiment of the present invention is applied, ρiThe oscillation condition occurs, and rho cannot occur or does not occuri<TρIn the case of (3), the maximum expansion number, which is 10 th time, is the frame boundary of the pair of white frame regions.
Step 504: the areas corresponding to the border 4111 and the area 411 are grouped into a dialogue box complete area. Step 505 or 506 is performed depending on the extraction purpose. That is, when the frame to be extracted is a dialog box, only step 505 needs to be executed; when the extraction is required to extract the Chinese character, step 506 is required.
Step 505: the pair of white frames is extracted from the image. The border 4111 and the area 411 are extracted from the area corresponding to the complete area of the dialog box in the image. The extraction process ends or continues to step 506.
Step 506: and carrying out gray segmentation on the complete area of the dialogue frame, and extracting dialogue characters in the dialogue frame. The extracted text is shown in fig. 6.
Performing gray segmentation by using an OTSU threshold, and determining a gray value range of characters in a complete white frame area; and then, determining the positions of pixel points meeting the gray value range of characters in the complete area of the white frame, and extracting the pixel points at the corresponding positions in the image as the white characters, namely extracting the characters in the white frame.
In the embodiment of the present invention, in order to retain more character stroke information, the OTSU threshold is adjusted as follows, see formula [3 ]:
T′_OTSU = T_OTSU + w(255 − T_OTSU) ............ [3]
where T_OTSU is the original threshold, w is an adjustment factor with w ∈ [0, 1], and T′_OTSU is the adjusted threshold, so T′_OTSU > T_OTSU. Such an increased threshold allows more pixels to be classified as text, thereby preserving more strokes.
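Formula [3] translates directly into code; the value w = 0.2 below is only an illustrative choice within the stated range [0, 1].

```python
def adjusted_otsu(t_otsu, w=0.2):
    # T'_OTSU = T_OTSU + w * (255 - T_OTSU), which raises the threshold so
    # that more pixels are classified as text and more strokes are kept.
    return t_otsu + w * (255 - t_otsu)
```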
Step 507: and filling the color of the area corresponding to the Chinese character in the graph.
And filling colors in the area corresponding to the dialogue characters in the image, wherein the filled colors are average color values of other areas in the area corresponding to the whole area of the dialogue frame in the image. The dialogue box after color filling is shown in fig. 7.
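The filling step can be sketched as follows, assuming boolean masks for the text and the complete dialogue frame region over a BGR image:

```python
import numpy as np

def fill_text_area(image, text_mask, complete_region):
    rest = complete_region & ~text_mask        # the non-text part of the frame region
    mean_color = image[rest].mean(axis=0)      # average color value of that part
    filled = image.copy()
    filled[text_mask] = mean_color.astype(image.dtype)
    return filled
```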
According to the above method of extracting information from an image, an apparatus for extracting information from an image, referring to fig. 8, may be constructed, including: a candidate region determining unit 100, a dialogue frame region determining unit 200, a frame boundary determining unit 300, and a dialogue frame extraction unit 400. Wherein,
the candidate region determining unit 100 is configured to perform background color connected region detection on the region to be selected in the image, and determine at least one background color connected region as a dialogue frame candidate region;
the dialogue frame region determining unit 200 is configured to determine a dialogue frame region from the dialogue frame candidate regions according to the feature information of each candidate region;
the frame boundary determining unit 300 is configured to gradually expand the boundary of the dialogue frame region, and determine the frame boundary of the dialogue frame region;
and the dialogue frame extraction unit 400 is configured to determine the frame boundary and the region contained by it as the corresponding complete dialogue frame region, and extract the region corresponding to the complete dialogue frame region in the image to obtain the dialogue frame.
The candidate region determining unit 100 may perform binarization processing on the image of the region to be selected and obtain the value corresponding to each pixel point in the region to be selected; perform connected domain detection according to the values corresponding to the pixel points to obtain at least one binarized background color connected region; and compare the areas of the binarized background color connected regions, determining a set number of those with larger areas as the dialogue frame candidate regions. In this way, the candidate region determining unit 100 may determine one or several dialogue frame candidate regions.
In addition, the candidate region determining unit 100 may perform grayscale processing on the image of the region to be selected before performing the binarization processing. And,
the candidate region determining unit 100 may further, after obtaining the value corresponding to each pixel point in the region to be selected, expand the non-background-color area in the region to be selected and modify the values of the corresponding pixel points, and then perform the connected domain detection according to the modified values corresponding to the pixel points in the region to be selected.
Therefore, the candidate region determining unit 100 includes: an obtaining subunit, a detection subunit and a determining subunit.
The obtaining subunit is configured to perform binarization processing on the image of the region to be selected and obtain the value corresponding to each pixel point in the region to be selected.
The detection subunit is configured to perform connected domain detection according to the values corresponding to the pixel points, to obtain at least one binarized background color connected region.
The determining subunit is configured to compare the areas of the binarized background color connected regions, and determine a set number of those with larger areas as the dialogue frame candidate regions.
The candidate region determining unit 100 further comprises a gray processing subunit, configured to perform gray processing on the image of the region to be selected.
The candidate region determining unit 100 further includes a morphological closing operation subunit, configured to perform a morphological closing operation on the black area in the region to be selected when the background color is bright, and on the white area in the region to be selected when the background color is dark.
The dialogue frame region determining unit 200 determines the characteristic parameter of each dialogue frame candidate region according to its characteristic information, and determines a candidate region to be the dialogue frame region when its corresponding characteristic parameter satisfies the set condition.
Therefore, the dialogue frame region determining unit 200 includes: a characteristic parameter determining subunit and a determining subunit.
The characteristic parameter determining subunit is configured to determine the characteristic parameter of each dialogue frame candidate region according to the characteristic information of each candidate region.
The determining subunit is configured to determine a dialogue frame candidate region to be the dialogue frame region when the characteristic parameter corresponding to that candidate region satisfies the set condition.
The characteristic parameter determining subunit is further configured to determine the characteristic parameter corresponding to each dialogue frame candidate region according to formula [1], where T is the characteristic parameter corresponding to the candidate region; α is the ratio of the area of the candidate region to that of the region to be selected; d is the distance between the center of the candidate region and the center of the region to be selected; Ω is the convexity-concavity of the candidate region; λ is the symmetry of the candidate region; and n1, n2 and Δ are regulating factors.
The determining subunit is further configured to determine the candidate region corresponding to the minimum characteristic parameter as the dialogue frame region.
When the candidate region determining unit 100 includes the morphological closing operation subunit, the dialogue frame region determining unit 200 here further includes:
an acquiring subunit, configured to perform background color connected domain detection according to the values corresponding to the pixel points in the region to be selected before the morphological closing operation is performed, and acquire the initial background color connected domain corresponding to the dialogue frame region;
a difference subunit, configured to perform a difference operation on the initial background color connected region and the dialogue frame region to obtain at least one connected region;
and a supplementing subunit, configured to determine a first threshold according to the area of the dialogue frame region, compare the area of each connected domain with the first threshold and a preset second threshold, determine that a connected domain is a lost sharp-corner region of the dialogue frame when its area is smaller than or equal to the first threshold and greater than or equal to the second threshold, and supplement the sharp-corner region to the dialogue frame region to obtain the corrected dialogue frame region.
The frame boundary determining unit 300 may expand the boundary of the dialogue frame region a set number of times, for example 2, and determine the newly added expanded pixel points as the frame boundary of the dialogue frame region; or it may gradually expand the boundary of the dialogue frame region and determine the frame boundary according to the gray values of the pixel points newly added after each expansion.
Thus, the frame boundary determining unit 300 includes: a gray threshold subunit and a determining subunit.
The gray threshold subunit is configured to expand the boundary of the dialogue frame region a set number of times, and determine the gray threshold corresponding to the dialogue frame region according to the gray values of all the pixel points newly added over those expansions.
The determining subunit is configured to determine the frame boundary of the dialogue frame region according to the frame proportion after each expansion of the boundary, where, when the background color is bright, the frame proportion is the proportion of the newly added pixel points whose gray values are smaller than the gray threshold corresponding to the dialogue frame region, and, when the background color is dark, the frame proportion is the proportion of the newly added pixel points whose gray values are larger than that gray threshold.
The gray threshold subunit is further configured to count, according to the gray values of all the pixel points newly added over the set number of expansions, the gray histogram of an estimation region composed of all those newly added pixel points, and to use the resulting OTSU threshold as the gray threshold.
The determining subunit is further configured to obtain, according to formula [2], the frame proportion ρ_i after each expansion of the boundary of the dialogue frame region, where N_i is the number of pixel points newly added at the i-th expansion and N_i^B is, when the background color is bright, the number of those points whose gray value is smaller than the gray threshold corresponding to the dialogue frame region, or, when the background color is dark, the number of those whose gray value is larger than that threshold; to judge whether the frame proportion after the current expansion is smaller than a set threshold and, if so, to determine the newly added pixel points as the frame boundary of the dialogue frame region; and otherwise to judge whether the current expansion is the last of the set number and, if so, to determine the pixel points newly added at that expansion as the frame boundary of the dialogue frame region.
In another embodiment of the present invention, referring to fig. 9, an apparatus for extracting information from an image includes: the candidate region determining unit 100, the dialogue frame region determining unit 200, the frame boundary determining unit 300, the dialogue frame extraction unit 400, a gray value range determining unit 500, and a dialogue text extraction unit 600.
The candidate region determining unit 100, the dialogue frame region determining unit 200 and the frame boundary determining unit 300 have the same functions as described above.
The dialogue frame extraction unit 400 is configured to determine the frame boundary and the region contained by it as the complete dialogue frame region, and to extract the region corresponding to the complete dialogue frame region in the image to obtain the dialogue frame, or not to extract the dialogue frame.
After the dialogue frame is determined, the apparatus of this embodiment of the invention further needs to extract the dialogue text in the dialogue frame; it therefore includes:
the gray value range determining unit 500, configured to perform gray segmentation on the complete dialogue frame region using an OTSU threshold, and determine the gray value range of the text in the complete dialogue frame region;
and the dialogue text extraction unit 600, configured to determine the positions of the pixel points in the complete dialogue frame region that fall in that gray value range, and extract the pixel points at the corresponding positions in the image as the dialogue text in the dialogue frame.
The apparatus may further include:
a filling unit, configured to color-fill the area corresponding to the dialogue text in the image, the filling color being the average color value of the rest of the area corresponding to the complete dialogue frame region in the image.
In summary, in the embodiments of the present invention, background color connected region detection is performed on the region to be selected in an image, at least one background color connected region is determined as a dialogue frame candidate region, a dialogue frame region is determined from the candidate regions according to the characteristic information of each candidate region, the boundary of the dialogue frame region is gradually expanded to determine its frame boundary, the frame boundary and the region it contains are determined as the complete dialogue frame region, and the region corresponding to the complete dialogue frame region in the image is extracted to obtain the dialogue frame. The dialogue frame is thus detected automatically in the image, improving the efficiency, integrity and accuracy of the extraction. Moreover, after the dialogue frame is determined, gray segmentation is performed on the region it contains and the dialogue text in the dialogue frame is extracted, realizing automatic extraction of the dialogue text, which can be widely applied in the image transplanting process.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include such modifications and variations.
Claims (22)
1. A method of extracting information from an image, comprising:
performing background color connected domain detection on the region to be selected in the image, and determining at least one background color connected region as a dialogue frame candidate region;
determining a dialogue frame region from the dialogue frame candidate regions according to the characteristic information of each dialogue frame candidate region;
performing background color connected region detection according to the values corresponding to the pixel points in the region to be selected before the morphological closing operation is performed, and acquiring the initial background color connected region corresponding to the dialogue frame region; the value corresponding to each pixel point in the region to be selected being obtained by performing binarization processing on the image of the region to be selected;
performing a difference operation on the initial background color connected region and the dialogue frame region to obtain at least one connected region;
determining a first threshold according to the area of the dialogue frame region, comparing the area of each connected domain with the first threshold and a preset second threshold, and, when the area of a connected domain is smaller than or equal to the first threshold and larger than or equal to the second threshold, determining that the connected domain is a lost sharp-corner region of the dialogue frame and supplementing the sharp-corner region to the dialogue frame region to obtain a corrected dialogue frame region;
gradually expanding the boundary of the corrected dialogue frame region according to a set number of expansions of the boundary, and determining the frame boundary of the corrected dialogue frame region;
and determining the frame boundary and the region contained by the frame boundary as a complete dialogue frame region, and extracting the region corresponding to the complete dialogue frame region in the image to obtain the dialogue frame.
2. The method of claim 1, wherein performing background color connected region detection on the region to be selected in the image and determining at least one background color connected region as a dialogue frame candidate region comprises:
performing binarization processing on the image of the region to be selected to obtain the value corresponding to each pixel point in the region to be selected;
performing connected domain detection according to the values corresponding to the pixel points to obtain at least one binarized background color connected region;
and comparing the areas of the binarized background color connected regions, and determining a set number of binarized background color connected regions with larger areas as dialogue frame candidate regions.
3. The method as claimed in claim 2, wherein, before the binarization processing of the image of the region to be selected, the method further comprises:
performing grayscale processing on the image of the region to be selected.
4. The method according to claim 2 or 3, wherein, after obtaining the value corresponding to each pixel point in the region to be selected and before performing the connected domain detection, the method further comprises:
when the background color is bright, performing a morphological closing operation on the black area in the region to be selected, and, when the background color is dark, performing a morphological closing operation on the white area in the region to be selected.
5. The method of claim 1, wherein determining the dialogue frame region from the dialogue frame candidate regions according to the characteristic information of each dialogue frame candidate region comprises:
determining the characteristic parameter of each dialogue frame candidate region according to the characteristic information of each dialogue frame candidate region;
and, when the characteristic parameter corresponding to a dialogue frame candidate region satisfies the set condition, determining that dialogue frame candidate region as the dialogue frame region.
6. The method of claim 5, wherein determining the characteristic parameter of each dialogue frame candidate region comprises:
determining, according to formula [1], the characteristic parameter corresponding to each dialogue frame candidate region, where T is the characteristic parameter corresponding to the dialogue frame candidate region; α is the ratio of the area of the candidate region to that of the region to be selected; d is the distance between the center of the candidate region and the center of the region to be selected; Ω is the convexity-concavity of the candidate region; λ is the symmetry of the candidate region; and n1, n2 and Δ are regulating factors;
then, when the characteristic parameter corresponding to the dialogue frame candidate region satisfies the set condition, determining that the dialogue frame candidate region is the dialogue frame region comprises:
determining the dialogue frame candidate region corresponding to the minimum characteristic parameter as the dialogue frame region.
7. The method of claim 1, wherein determining the frame boundary of the corrected dialogue frame region comprises:
expanding the boundary of the corrected dialogue frame region a set number of times;
determining the gray threshold corresponding to the corrected dialogue frame region according to the gray values of all the pixel points newly added over the set number of expansions;
and determining the frame boundary of the corrected dialogue frame region according to the frame proportion after each expansion of the boundary, wherein, when the background color is bright, the frame proportion is the proportion of the newly added pixel points whose gray values are smaller than the gray threshold corresponding to the corrected dialogue frame region, and, when the background color is dark, the frame proportion is the proportion of the newly added pixel points whose gray values are larger than that gray threshold.
8. The method of claim 7, wherein the determining the gray threshold corresponding to the corrected dialogue frame region comprises:
computing a gray histogram of an estimated region according to the gray values of all pixel points newly added over the set number of expansions, and taking the OTSU threshold obtained from it as the gray threshold, wherein the estimated region consists of all the newly added pixel points.
9. The method of claim 7, wherein the determining the frame boundary of the corrected dialogue frame region comprises:
obtaining the frame proportion after each expansion of the boundary of the dialogue frame region according to the formula ρᵢ = Nᵢ′ / Nᵢ, wherein ρᵢ is the frame proportion; Nᵢ is the number of pixel points newly added in the i-th expansion; and Nᵢ′ is, when the background color is a bright color, the number of those Nᵢ pixel points whose gray values are smaller than the gray threshold corresponding to the corrected dialogue frame region, and, when the background color is a dark color, the number of those Nᵢ pixel points whose gray values are larger than that gray threshold;
and judging whether the frame proportion after the current expansion is smaller than a set threshold; if so, determining the pixel points newly added by that expansion to be the frame boundary of the corrected dialogue frame region; otherwise, determining whether that expansion is the last of the set number of expansions, and if so, determining the pixel points newly added by that expansion to be the frame boundary of the corrected dialogue frame region.
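A minimal sketch of the expansion loop of claims 7–9, again assuming OpenCV and NumPy. Each dilation step yields a ring of newly added pixels; one OTSU threshold over all rings (claim 8) separates border ink from page background, and the first ring whose frame proportion ρᵢ drops below the threshold is taken as the frame boundary (claim 9). The 3×3 kernel, max_steps, and ratio_threshold values are assumptions; the symbol Nᵢ′ follows the reconstruction above.

```python
import cv2
import numpy as np

def find_frame_boundary(gray, region_mask, bright_background=True,
                        max_steps=10, ratio_threshold=0.2):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    rings, current = [], region_mask.astype(np.uint8)
    for _ in range(max_steps):                      # claim 7: set number of expansions
        grown = cv2.dilate(current, kernel)
        rings.append((grown - current).astype(bool))  # newly added pixel points
        current = grown
    # Claim 8: OTSU over the estimated region made of all newly added pixels.
    ring_vals = gray[np.logical_or.reduce(rings)].reshape(-1, 1)
    otsu, _ = cv2.threshold(ring_vals, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    border = rings[-1]                              # fallback: last expansion
    for ring in rings:                              # claim 9: test each ring's rho_i
        vals = gray[ring]
        ink = (vals < otsu) if bright_background else (vals > otsu)
        if ink.sum() / max(len(vals), 1) < ratio_threshold:
            border = ring                           # rho_i below threshold: boundary
            break
    return border
```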
10. The method of claim 1, wherein after the determining the frame boundary and the region contained by the frame boundary as the complete dialogue frame region, the method further comprises:
performing gray segmentation on the complete dialogue frame region with an OTSU threshold, and determining the gray value range of the text in the complete dialogue frame region;
and determining the positions of the pixel points in the complete dialogue frame region that fall within the corresponding gray value range, and extracting the pixel points at the corresponding positions in the image as the dialogue text.
11. The method of claim 10, wherein after the extracting of the pixel points at the corresponding positions in the image as the dialogue text, the method further comprises:
filling the region corresponding to the dialogue text in the image with a color, wherein the color is the average color value of the other areas within the region of the image corresponding to the complete dialogue frame region.
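Claims 10–11 amount to a two-step clean-up: OTSU segmentation inside the balloon isolates the text gray range, and the extracted text pixels are then repainted with the average color of the remaining balloon interior. The sketch below assumes a bright balloon with dark text; for a dark balloon the comparison would flip.

```python
import cv2
import numpy as np

def extract_and_fill_text(image_bgr, gray, balloon_mask):
    # Claim 10: OTSU over the complete dialogue frame region gives the
    # gray threshold separating text from the balloon background.
    vals = gray[balloon_mask].reshape(-1, 1)
    otsu, _ = cv2.threshold(vals, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text_mask = balloon_mask & (gray < otsu)        # dark text assumed
    # Claim 11: fill text positions with the average color of the rest of
    # the balloon so it can be re-lettered cleanly.
    rest = balloon_mask & ~text_mask
    fill_color = image_bgr[rest].mean(axis=0).astype(np.uint8)
    filled = image_bgr.copy()
    filled[text_mask] = fill_color
    return text_mask, filled
```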
12. An apparatus for extracting information from an image, comprising:
a candidate region determining unit, configured to perform background color connected region detection on a region to be selected in the image, and determine at least one background color connected region as a dialogue frame candidate region,
wherein the candidate region determining unit comprises an obtaining subunit, configured to perform binarization processing on the image of the region to be selected and obtain the value corresponding to each pixel point in the region to be selected;
a dialogue frame region determining unit, configured to determine the dialogue frame region from the dialogue frame candidate regions according to the feature information of each dialogue frame candidate region,
wherein the dialogue frame region determining unit further comprises an acquiring subunit, a difference subunit and a supplementing subunit:
the acquiring subunit is configured to perform background color connected region detection according to the value corresponding to each pixel point in the region to be selected before the morphological closing operation is performed, and acquire the initial background color connected region corresponding to the dialogue frame region;
the difference subunit is configured to perform a difference operation between the initial background color connected region and the dialogue frame region to obtain at least one connected region;
the supplementing subunit is configured to determine a first threshold according to the area of the dialogue frame region, compare the area of each connected region with the first threshold and a preset second threshold, determine, when the area of a connected region is smaller than or equal to the first threshold and larger than or equal to the second threshold, that the connected region is a lost sharp-corner region of the dialogue frame, and supplement the sharp-corner region to the dialogue frame region to obtain the corrected dialogue frame region;
a frame boundary determining unit, configured to gradually expand the boundary of the corrected dialogue frame region according to the set number of boundary expansions, and determine the frame boundary of the corrected dialogue frame region;
and a dialogue frame extraction unit, configured to determine the frame boundary and the region contained by the frame boundary as the complete dialogue frame region, and extract the region corresponding to the complete dialogue frame region from the image to obtain the dialogue frame.
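The acquiring, difference and supplementing subunits of claim 12 recover the balloon's pointed tail (the "sharp corner") that the morphological closing removed: the pre-closing connected region is diffed against the detected balloon, and any leftover piece whose area falls between the two thresholds is merged back. A minimal sketch follows; the way the first threshold is derived from the balloon area (a fixed fraction here) and the value of the second threshold are assumptions.

```python
import cv2
import numpy as np

def recover_sharp_corner(initial_region, balloon_mask, frac=0.3, second_threshold=20):
    # Boolean masks assumed. The first threshold is derived from the
    # dialogue frame region's area (a fixed fraction, by assumption);
    # the second threshold is preset.
    first_threshold = frac * balloon_mask.sum()
    # Difference subunit: initial connected region minus the balloon.
    diff = np.logical_and(initial_region, np.logical_not(balloon_mask))
    n, labels = cv2.connectedComponents(diff.astype(np.uint8))
    corrected = balloon_mask.copy()
    for i in range(1, n):                       # label 0 is the empty background
        piece = labels == i
        if second_threshold <= piece.sum() <= first_threshold:
            corrected |= piece                  # supplement the lost sharp corner
    return corrected
```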
13. The apparatus of claim 12, wherein the candidate region determining unit comprises:
a detection subunit, configured to perform connected region detection according to the value corresponding to each pixel point to obtain at least one binarized background color connected region;
and a determining subunit, configured to compare the areas of the binarized background color connected regions and determine a set number of the binarized background color connected regions with the largest areas as the dialogue frame candidate regions.
14. The apparatus of claim 13, wherein the candidate region determining unit further comprises:
a grayscale processing subunit, configured to perform grayscale processing on the image of the region to be selected.
15. The apparatus of claim 13 or 14, wherein the candidate region determining unit further comprises:
a morphological closing operation subunit, configured to perform a morphological closing operation on the black areas in the region to be selected when the background color is a bright color, and on the white areas in the region to be selected when the background color is a dark color.
16. The apparatus of claim 12, wherein the dialog box region determination unit comprises:
a characteristic parameter determining subunit, configured to determine the characteristic parameter of each dialogue frame candidate region according to the feature information of that candidate region;
and a determining subunit, configured to determine a dialogue frame candidate region as the dialogue frame region when the characteristic parameter corresponding to that candidate region satisfies the set condition.
17. The apparatus of claim 16,
the characteristic parameter determining subunit is further configured to determine the characteristic parameter T corresponding to each dialogue frame candidate region according to a formula (not reproduced in this text), wherein T is the characteristic parameter corresponding to the dialogue frame candidate region; α is the ratio of the area of the dialogue frame candidate region to that of the region to be selected; d is the distance between the center of the dialogue frame candidate region and the center of the region to be selected; ω is the convexity-concavity degree of the dialogue frame candidate region; λ is the symmetry degree of the dialogue frame candidate region; and n₁, n₂ and Δ are adjustment factors;
and the determining subunit is further configured to determine the dialogue frame candidate region corresponding to the minimum characteristic parameter as the dialogue frame region.
18. The apparatus of claim 12, wherein the frame boundary determining unit comprises:
a gray threshold subunit, configured to expand the boundary of the corrected dialogue frame region a set number of times, and determine the gray threshold corresponding to the corrected dialogue frame region according to the gray values of all pixel points newly added over the set number of expansions;
and a determining subunit, configured to determine the frame boundary of the corrected dialogue frame region according to the frame proportion after each expansion of the boundary, wherein when the background color is a bright color, the frame proportion is the proportion, among the pixel points newly added by the expansion, of pixel points whose gray values are smaller than the gray threshold corresponding to the corrected dialogue frame region, and when the background color is a dark color, the frame proportion is the proportion, among the newly added pixel points, of pixel points whose gray values are larger than that gray threshold.
19. The apparatus of claim 18,
the gray threshold subunit is further configured to compute a gray histogram of an estimated region according to the gray values of all pixel points newly added over the set number of expansions, and take the OTSU threshold obtained from it as the gray threshold, wherein the estimated region consists of all the newly added pixel points.
20. The apparatus of claim 18,
the determining subunit is further configured to obtain the frame proportion after each expansion of the boundary of the dialogue frame region according to the formula ρᵢ = Nᵢ′ / Nᵢ, wherein ρᵢ is the frame proportion; Nᵢ is the number of pixel points newly added in the i-th expansion; and Nᵢ′ is, when the background color is a bright color, the number of those Nᵢ pixel points whose gray values are smaller than the gray threshold corresponding to the corrected dialogue frame region, and, when the background color is a dark color, the number of those Nᵢ pixel points whose gray values are larger than that gray threshold; and to judge whether the frame proportion after the current expansion is smaller than the set threshold, and if so, determine the pixel points newly added by that expansion to be the frame boundary of the corrected dialogue frame region, and otherwise determine whether that expansion is the last of the set number of expansions, and if so, determine the pixel points newly added by that expansion to be the frame boundary of the corrected dialogue frame region.
21. The apparatus of claim 12, wherein the apparatus further comprises:
a gray value range determining unit, configured to perform gray segmentation on the complete dialogue frame region with an OTSU threshold, and determine the gray value range of the text in the complete dialogue frame region;
and a dialogue text extraction unit, configured to determine the positions of the pixel points in the complete dialogue frame region that fall within the corresponding gray value range, and extract the pixel points at the corresponding positions in the image as the dialogue text.
22. The apparatus of claim 21, further comprising:
a filling unit, configured to fill the region corresponding to the dialogue text in the image with a color, wherein the color is the average color value of the other areas within the region of the image corresponding to the complete dialogue frame region.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 201010117062 (CN102194118B) | 2010-03-02 | 2010-03-02 | Method and device for extracting information from image |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN102194118A CN102194118A (en) | 2011-09-21 |
| CN102194118B (en) | 2013-04-10 |
Family
ID=44602160
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 201010117062 (CN102194118B, Expired - Fee Related) | Method and device for extracting information from image | 2010-03-02 | 2010-03-02 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102194118B (en) |
Families Citing this family (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108961532B * | 2017-05-26 | 2020-11-17 | Shenzhen Yihua Computer Co., Ltd. | Method, device and equipment for processing crown word number image and storage medium |
| CN112991308B * | 2021-03-25 | 2023-11-24 | Beijing Baidu Netcom Science and Technology Co., Ltd. | An image quality determination method, device, electronic equipment and medium |
| TWI787885B * | 2021-06-25 | 2022-12-21 | ADLINK Technology Inc. | Non-intrusive detection method and device for pop-up window button |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4960897B2 * | 2008-01-30 | 2012-06-27 | Ricoh Co., Ltd. | Image processing apparatus, image processing method, program, and storage medium |
Patent Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1632821A * | 2004-12-30 | 2005-06-29 | Vimicro Corporation | Automatic searching and determining method for key words information in name card identification |
| CN101615252A * | 2008-06-25 | 2009-12-30 | Institute of Automation, Chinese Academy of Sciences | An Adaptive Image Text Information Extraction Method |
| CN101599125A * | 2009-06-11 | 2009-12-09 | Shanghai Jiao Tong University | A Binarization Method for Image Processing under Complicated Background |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2016018987A1 (en) * | 2014-07-29 | 2016-02-04 | Alibaba Group Holding Limited | Detecting specified image identifiers on objects |
| US9799119B2 (en) | 2014-07-29 | 2017-10-24 | Alibaba Group Holding Limited | Detecting specified image identifiers on objects |
| TWI655586B (en) | 2014-07-29 | 2019-04-01 | 香港商阿里巴巴集團服務有限公司 | Method and device for detecting specific identification image in predetermined area |
| US10885644B2 (en) | 2014-07-29 | 2021-01-05 | Banma Zhixing Network (Hongkong) Co., Limited | Detecting specified image identifiers on objects |
Similar Documents
| Publication | Title |
|---|---|
| CN105447851B | The sound hole defect inspection method and system of a kind of glass panel |
| CN107301408B | Human body mask extraction method and device |
| US8724917B2 | Selecting best image among sequentially captured images |
| US20210166015A1 | Certificate image extraction method and terminal device |
| CN111310558A | An intelligent extraction method of pavement diseases based on deep learning and image processing |
| CN115147409B | Mobile phone shell production quality detection method based on machine vision |
| CN102750535B | Method and system for automatically extracting image foreground |
| CN107481238A | Image quality measure method and device |
| CN104866849A | Food nutrition label identification method based on mobile terminal |
| CN111768392A | Target detection method and device, electronic equipment and storage medium |
| CN102194118B | Method and device for extracting information from image |
| CN102289668A | Binaryzation processing method of self-adaption word image based on pixel neighborhood feature |
| CN111626249B | Method and device for identifying geometric figure in topic image and computer storage medium |
| CN104978578A | Text image quality assessment method for mobile phone camera |
| CN104008368A | Fire recognition method based on maximum entropy threshold segmentation |
| CN109859257B | Skin image texture evaluation method and system based on texture directionality |
| CN111126383A | License plate detection method, system, device and storage medium |
| CN105701491A | Method for making fixed-format document image template and application thereof |
| CN109389116B | Character detection method and device |
| CN114926441A | Defect detection method and system for machining and molding injection molding part |
| CN107545557A | Egg detecting method and device in excrement image |
| Triantoro et al. | Image based water gauge reading developed with ANN Kohonen |
| CN116883401B | Industrial product production quality detection system |
| US20170011275A1 | Nearsighted camera object detection |
| CN109117843B | Character occlusion detection method and device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | C14 | Grant of patent or utility model | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20130410; Termination date: 20150302 |
| | EXPY | Termination of patent right or utility model | |