Specific Embodiments
The following description sets forth preferred embodiments of the invention. Its purpose is to illustrate the spirit of the invention rather than to limit it; the actual scope of the invention shall be determined with reference to the appended claims.
It will be appreciated that words such as "comprising" and "including" are used in this specification to indicate the presence of particular technical features, values, method steps, operations, elements, and/or components, but do not preclude the addition of further technical features, values, method steps, operations, elements, components, or any combination thereof.
Words such as "first", "second", and "third" are used in the claims to modify claim elements; they do not indicate a priority order, a precedence relation, or that one element precedes another, nor a chronological order in which method steps are performed, but are used only to distinguish elements having the same name.
Fig. 1 is a system architecture diagram of a computing apparatus according to an embodiment of the present invention. The system architecture may be implemented in a desktop computer, a notebook computer, a tablet computer, a mobile phone, a digital camera, a digital video recorder, or the like, and includes at least a plurality of camera modules 133_1 to 133_m and a plurality of camera module controllers 131_1 to 131_m, where m is an integer greater than or equal to 2. Any of the camera modules 133_1 to 133_m may include an image sensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) sensor, to sense an image formed from the intensities of red, green, and blue light, and readout circuitry to collect the sensed data from the image sensor. Each of the camera modules 133_1 to 133_m also includes a shutter and a focusing mechanism, as well as a plurality of motors for controlling these mechanisms. The corresponding camera module controller controls the motor of the shutter according to parameters such as the aperture and the exposure time, controls the focusing mechanism according to parameters such as the focal length, and sends an image sensed by the image sensor to an image signal processor (ISP) 120. The touch sensing unit 170 may include a touch panel on which a user can make gestures; the gestures may include, but are not limited to, taps, double taps, single-finger drags, and multi-finger drags. The processing unit 110 may be implemented in various ways, for example, with dedicated hardware circuits or with general-purpose hardware (for example, a single processor, multiple processors with parallel processing capability, a graphics processing unit, or another processor with computational capability), and provides the functions described below when executing firmware or software. The processing unit 110 may instruct the image signal processor 120 to drive the camera module controllers 131_1 to 131_m to control the camera modules 133_1 to 133_m, respectively, to capture multiple images. For example, the image processor 120 drives the camera module controller 131_1 according to the camera control parameters of the camera module 133_1, so that the camera module controller 131_1 can control the camera module 133_1 to obtain an image.
For each of the camera modules 133_1 to 133_m, the camera control parameters may include an exposure value, a focal length, a color temperature value, and so on. Under predetermined conditions, the image processor 120 computes base camera control parameters using autofocus (AF), auto exposure (AE), and auto white balance (AWB) techniques. However, the user may adjust the camera control parameters by operating the touch sensing unit 170. Fig. 2 is a flowchart of a method for adjusting camera control parameters according to an embodiment of the present invention. The method is performed by the processing unit 110 and repeatedly executes a loop to adjust the camera control parameters of any of the camera modules 133_1 to 133_m (steps S210 to S250). In each iteration, the processing unit 110 determines a gesture according to data received from the touch sensing unit 170 (step S210), notifies the image processor 120 to adjust the camera control parameters according to the gesture (step S230), and determines whether the touch sensing unit 170 has detected no finger contact for a preset period of time (step S250). If not (the "No" path of step S250), the processing unit 110 proceeds to the gesture detection of the next iteration (step S210). If so (the "Yes" path of step S250), the loop ends, and the image processor 120 is notified to drive the camera module controller according to the latest camera control parameters to capture an image (step S270).
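For illustration only, the following C sketch outlines the control loop of Fig. 2. The function names, the stub bodies, and the simulated gesture count are assumptions introduced for this sketch; only the loop structure and the step numbering are taken from the embodiment.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the processing unit 110's interaction with the
 * touch sensing unit 170 and the image processor 120; a real build would
 * replace these stubs with driver calls. */
typedef int gesture_t;

static int remaining_gestures = 3;            /* simulate three gestures, then idle */

static gesture_t detect_gesture(void)         /* step S210 */
{
    return remaining_gestures;
}

static void notify_isp_adjust(gesture_t g)    /* step S230 */
{
    printf("ISP 120: adjust camera control parameters for gesture %d\n", g);
}

static bool no_touch_for_preset_time(void)    /* step S250 */
{
    return --remaining_gestures == 0;
}

static void notify_isp_capture(void)          /* step S270 */
{
    printf("ISP 120: capture with the latest camera control parameters\n");
}

/* Control loop of Fig. 2: keep translating gestures into parameter
 * adjustments until no finger contact is seen for a preset time, then
 * ask the image processor to capture with the latest parameters. */
int main(void)
{
    for (;;) {
        gesture_t g = detect_gesture();       /* S210 */
        notify_isp_adjust(g);                 /* S230 */
        if (no_touch_for_preset_time())       /* S250, "Yes" path */
            break;
    }
    notify_isp_capture();                     /* S270 */
    return 0;
}
```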
The gesture detection and the adjustment of the camera control parameters in steps S230 and S250 are described in detail below. Fig. 3 is a schematic diagram of gesture detection according to an embodiment of the present invention. The touch sensing unit 170 includes two portions 170_1 and 170_2. When the processing unit 110 detects that the user makes an upward gesture 310_1 or 320_1 on the upper half 170_1 or the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and increase the exposure value (step S250). When the processing unit 110 detects that the user makes a downward gesture 310_5 or 320_5 on the upper half 170_1 or the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and decrease the exposure value (step S250). Those skilled in the art will understand that the exposure value can affect the aperture and exposure time used to control the shutter. When the processing unit 110 detects that the user makes a leftward gesture 310_7 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto white balance algorithm and increase the color temperature value (step S250). When the processing unit 110 detects that the user makes a rightward gesture 310_3 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto white balance algorithm and decrease the color temperature value (step S250). When the processing unit 110 detects that the user makes a leftward gesture 320_7 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the autofocus algorithm and increase the focal length (step S250). When the processing unit 110 detects that the user makes a rightward gesture 320_3 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the autofocus algorithm and decrease the focal length (step S250). Those skilled in the art will understand that the focal length can be used to control the focusing mechanism.
When the processing unit 110 detects that the user makes an upper-left gesture 310_8 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the auto white balance algorithm and to increase the exposure value and the color temperature value (step S250). When the processing unit 110 detects that the user makes an upper-right gesture 310_2 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the auto white balance algorithm, to increase the exposure value, and to decrease the color temperature value (step S250). When the processing unit 110 detects that the user makes a lower-left gesture 310_6 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the auto white balance algorithm, to decrease the exposure value, and to increase the color temperature value (step S250). When the processing unit 110 detects that the user makes a lower-right gesture 310_4 on the upper half 170_1 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the auto white balance algorithm and to decrease the exposure value and the color temperature value (step S250).
When the processing unit 110 detects that the user makes an upper-left gesture 320_8 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the autofocus algorithm and to increase the exposure value and the focal length (step S250). When the processing unit 110 detects that the user makes an upper-right gesture 320_2 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the autofocus algorithm, to increase the exposure value, and to decrease the focal length (step S250). When the processing unit 110 detects that the user makes a lower-left gesture 320_6 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the autofocus algorithm, to decrease the exposure value, and to increase the focal length (step S250). When the processing unit 110 detects that the user makes a lower-right gesture 320_4 on the lower half 170_2 of the touch sensing unit 170 (step S230), it notifies the image processor 120 to terminate the auto exposure algorithm and the autofocus algorithm and to decrease the exposure value and the focal length (step S250).
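The gestures described above can be decomposed into a vertical component, which always maps to the exposure value, and a horizontal component, which maps to the color temperature value on the upper half 170_1 and to the focal length on the lower half 170_2. The following C sketch restates this mapping; the enum names and the signed-delta representation are assumptions introduced for the sketch, while the increase/decrease directions follow the embodiment.

```c
#include <stdio.h>

enum region { UPPER_HALF_170_1, LOWER_HALF_170_2 };
enum h_dir  { H_NONE, H_LEFT, H_RIGHT };   /* horizontal component of the gesture */
enum v_dir  { V_NONE, V_UP,   V_DOWN  };   /* vertical component of the gesture   */

struct adjustment {
    int d_exposure;      /* +1: increase exposure value, -1: decrease, 0: untouched */
    int d_color_temp;    /* color temperature value (upper half, AWB terminated)    */
    int d_focal_length;  /* focal length (lower half, AF terminated)                */
};

/* Vertical swipes map to the exposure value (AE is terminated); horizontal
 * swipes map to the color temperature value on the upper half 170_1 (AWB is
 * terminated) or to the focal length on the lower half 170_2 (AF is
 * terminated). Diagonal gestures combine both effects. */
struct adjustment map_gesture(enum region r, enum v_dir v, enum h_dir h)
{
    struct adjustment a = {0, 0, 0};

    if (v == V_UP)   a.d_exposure = +1;
    if (v == V_DOWN) a.d_exposure = -1;

    if (h == H_LEFT) {
        if (r == UPPER_HALF_170_1) a.d_color_temp   = +1;
        else                       a.d_focal_length = +1;
    }
    if (h == H_RIGHT) {
        if (r == UPPER_HALF_170_1) a.d_color_temp   = -1;
        else                       a.d_focal_length = -1;
    }
    return a;
}

int main(void)
{
    /* Gesture 310_2: an upper-right swipe on the upper half 170_1. */
    struct adjustment a = map_gesture(UPPER_HALF_170_1, V_UP, H_RIGHT);
    printf("exposure %+d, color temperature %+d, focal length %+d\n",
           a.d_exposure, a.d_color_temp, a.d_focal_length);
    return 0;
}
```

Running the example for gesture 310_2 yields an increased exposure value and a decreased color temperature value, matching the description above.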
Fig. 4 is a schematic diagram of the interaction between the processing unit 110 and the image processor 120 according to an embodiment of the present invention. The processing unit 110 executes the software instructions or program code associated with a gesture detection module 410, a framework 430, and a driver 450 to perform specific functions. After executing the autofocus algorithm, the image processor 120 passes the motor stop position ("final peek step") of the focusing mechanism to the framework 430 through the driver 450, and the framework 430 stores the motor stop position in the volatile memory (RAM) 140. After no finger contact has been detected for a preset period of time (the "Yes" path of step S250), the gesture detection module 410 increases or decreases the motor stop position stored in the volatile memory 140 according to the focal-length adjustment, and notifies the image processor 120 of the adjusted motor stop position, so that the image processor 120 controls the focusing mechanism of the corresponding camera module through any of the camera module controllers 131_1 to 131_m. After executing the auto white balance algorithm, the image processor 120 passes the color temperature value (which falls between 2300 K and 7500 K) to the framework 430 through the driver 450, and the framework 430 stores the color temperature value in the volatile memory 140. After no finger contact has been detected for a preset period of time (the "Yes" path of step S250), the gesture detection module 410 increases or decreases the color temperature value stored in the volatile memory 140 and notifies the image processor 120 of the adjusted color temperature value, so that the image processor 120 controls the corresponding camera module through any of the camera module controllers 131_1 to 131_m. After executing the auto exposure algorithm, the image processor 120 passes the exposure value to the framework 430 through the driver 450, and the framework 430 stores the exposure value in the volatile memory 140. After no finger contact has been detected for a preset period of time (the "Yes" path of step S250), the gesture detection module 410 increases or decreases the exposure value stored in the volatile memory 140 and notifies the image processor 120 of the adjusted exposure value, so that the image processor 120 controls the motor of the shutter of the corresponding camera module through any of the camera module controllers 131_1 to 131_m.
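A minimal C sketch of this hand-off is given below, assuming a simple in-memory record for the values kept in the volatile memory 140. The struct layout, the notification mechanism (represented by a printf), the step sizes, and the clamping of the adjusted color temperature to the 2300 K-7500 K range mentioned above are assumptions of the sketch, not features recited by the embodiment.

```c
#include <stdio.h>

struct camera_state_in_ram_140 {
    int final_peek_step;   /* motor stop position of the focusing mechanism (AF result) */
    int color_temp_k;      /* color temperature in kelvin (AWB result, 2300-7500 K)     */
    int exposure_value;    /* exposure value (AE result)                                */
};

/* Framework 430: store the values reported by the ISP through the driver 450. */
static void framework_store(struct camera_state_in_ram_140 *ram,
                            int peek_step, int color_temp_k, int exposure_value)
{
    ram->final_peek_step = peek_step;
    ram->color_temp_k    = color_temp_k;
    ram->exposure_value  = exposure_value;
}

/* Gesture detection module 410: after the no-touch timeout ("Yes" path of
 * step S250), apply the accumulated adjustments and hand the new values
 * back to the image processor 120 (represented here by printing them). */
static void gesture_module_apply(struct camera_state_in_ram_140 *ram,
                                 int d_focus, int d_color_temp, int d_exposure)
{
    ram->final_peek_step += d_focus;
    ram->exposure_value  += d_exposure;

    ram->color_temp_k += d_color_temp;
    if (ram->color_temp_k < 2300) ram->color_temp_k = 2300;   /* assumed clamp */
    if (ram->color_temp_k > 7500) ram->color_temp_k = 7500;   /* assumed clamp */

    printf("notify ISP 120: peek step %d, color temperature %d K, exposure %d\n",
           ram->final_peek_step, ram->color_temp_k, ram->exposure_value);
}

int main(void)
{
    struct camera_state_in_ram_140 ram;
    framework_store(&ram, 120, 5200, 8);        /* example AF/AWB/AE results       */
    gesture_module_apply(&ram, +5, -300, +1);   /* example accumulated adjustments */
    return 0;
}
```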
The volatile memory 140 is configured with three blocks of space, which respectively store the image data 143_1 to 143_3 required by a display unit 180_1, a video encoding unit 180_2, and a still-image encoding unit 180_3. The display unit 180_1, the video encoding unit 180_2, and the still-image encoding unit 180_3 may be collectively referred to as image-consuming units. Any of the image data 143_1 to 143_3 may include at least one image. The display unit 180_1 may include a display panel (for example, a thin-film liquid crystal display panel, an organic light-emitting diode panel, or another panel with display capability) to read and display the image 143_1 stored in the volatile memory 140. In addition, the display panel may further display numbers, symbols, the movement track of a dragged cursor, or pictures provided by an application, for the user to view. The video encoding unit 180_2 reads multiple images of the image data 143_2, encodes these frames into video data using a video compression technique, and stores the video data in the storage device 160. The video compression technique may be a standard such as MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/AVC (Advanced Video Coding), or HEVC (High Efficiency Video Coding), or an extension of these standards. The still-image encoding unit 180_3 reads one or more images of the image data 143_3, encodes these images using an image compression technique, and stores the compressed data in the storage device 160. The image compression technique may be a standard such as JPEG, or an extension of such standards. The storage device 160 may be a hard disk, an optical disc, a solid-state drive, or the like, for storing video data, compressed still images, and so on. The image generation parameters 141 may include source information for any of the image data 143_1 to 143_3, for example, the numbers of at least two source camera modules, the origin coordinates corresponding to each source camera module, and the destination resolution corresponding to each source camera module. An example of the image generation parameters 141 is shown in Table 1:
Table 1
Image data   Source camera module   Origin coordinates   Destination resolution
143_1        133_1                  (0, 0)               640x480
143_1        133_2                  (100, 100)           240x168
143_2        133_2                  (0, 0)               1024x768
143_2        133_m                  (512, 384)           512x384
143_3        133_2                  (0, 0)               1024x768
143_3        133_1                  (0, 0)               200x100
The information in the image generation parameters 141 may be modified by the user through a man-machine interface (MMI) during image generation.
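A possible in-memory representation of the image generation parameters 141 is sketched below in C; the struct and field names are assumptions introduced for the sketch, while the example values mirror the Table 1 entries for the display unit's image data 143_1.

```c
#include <stdio.h>

struct source_entry {
    int camera_module;      /* number of the source camera module (e.g. 1 for 133_1) */
    int origin_x, origin_y; /* origin coordinates inside the configured space        */
    int dest_width;         /* destination resolution corresponding to this source   */
    int dest_height;
};

struct image_generation_params_141 {
    int num_sources;              /* at least two source camera modules */
    struct source_entry source[8];
};

int main(void)
{
    /* Image data 143_1 (display unit 180_1) as in Table 1. */
    struct image_generation_params_141 p = {
        .num_sources = 2,
        .source = {
            { .camera_module = 1, .origin_x = 0,   .origin_y = 0,
              .dest_width = 640, .dest_height = 480 },
            { .camera_module = 2, .origin_x = 100, .origin_y = 100,
              .dest_width = 240, .dest_height = 168 },
        },
    };

    for (int i = 0; i < p.num_sources; i++)
        printf("camera 133_%d -> origin (%d,%d), resolution %dx%d\n",
               p.source[i].camera_module,
               p.source[i].origin_x, p.source[i].origin_y,
               p.source[i].dest_width, p.source[i].dest_height);
    return 0;
}
```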
In order to improve the efficiency of generating the data required by the display unit 180_1, the video encoding unit 180_2, and the still-image encoding unit 180_3, the image processor 120 directly generates and stores the image data 143_1 to 143_3 according to the image generation parameters 141, without the participation of the processing unit 110. Specifically, the image processor 120 may receive the sensed raw images from each of the camera modules 133_1 to 133_m specified by the source camera module numbers in the image generation parameters 141, and resize each raw image according to the resolution corresponding to the specified source camera module in the image generation parameters 141. The image processor 120 may perform color space conversion on a raw image, for example, converting an RGB image into a YCbCr or YUV image. The image processor 120 may perform the resolution adjustment using a known image scaling algorithm. For example, the image processor 120 may up-scale a raw image with a resolution of 640x480 into an image with a resolution of 1024x768, or down-scale a raw image with a resolution of 1024x768 into an image with a resolution of 640x480. Finally, the image processor 120 may store the adjusted image at the specified address of the volatile memory 140 through a direct memory access (DMA) controller 150, according to the origin coordinates corresponding to each source camera module in the image generation parameters 141. Fig. 5 is a flowchart of a method for processing images sensed by multiple camera modules according to an embodiment of the present invention.
The method is performed by the image processor 120 and includes two loops: an inner loop and an outer loop. The execution frequency of the outer loop is the refresh rate of the image-consuming unit. For example, for the display unit 180_1 or the video encoding unit 180_2, the execution frequency of the outer loop is 40 times per second or higher, and the outer loop is executed repeatedly until the image generation function ends (the "No" path of step S593). The user may close the preview function, end the video encoding function, and end the still-image capture function through the man-machine interface, thereby disabling the operation of the display unit 180_1, the video encoding unit 180_2, and the still-image encoding unit 180_3, respectively. While the image generation function is running, the image generation parameters 141 are repeatedly read for an image-consuming unit (step S510), and the inner loop is then executed to generate the final image (steps S530 to S591). In each iteration, the image processor 120 obtains a raw image from one source camera module according to the image generation parameters 141 (step S530). In step S530, the image processor 120 selects a source camera module in descending order of destination resolution and obtains the raw image from the selected source camera module. For example, according to Table 1, two iterations are used for the image data 143_1 of the display unit, and raw images are obtained from the camera modules 133_1 and 133_2 in sequence. Next, the image processor 120 adjusts the raw image according to the image generation parameters 141 (step S550). In step S550, the image processor 120 may perform color space conversion on the raw image and/or perform resolution adjustment using a known image scaling algorithm. Next, the image processor 120 stores the adjusted image at the designated location of the volatile memory 140 through the DMA controller 150 according to the image generation parameters 141 (step S570). In step S570, the adjusted image is written starting from the corresponding origin coordinates within the configured space of the volatile memory 140. It should also be noted that an image written later overwrites an image written earlier. Next, the image processor 120 determines whether the final image has been generated (step S591). If so (the "Yes" path of step S591), the inner loop is exited; otherwise (the "No" path of step S591), a raw image is obtained from the next source camera module according to the image generation parameters 141 (step S530). The image-consuming units 180_1 to 180_3 may read the final images from the configured spaces 143_1 to 143_3 of the volatile memory 140 at their respective frequencies.
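The following C sketch restates the two loops of Fig. 5 under stated assumptions: image acquisition, color space conversion, scaling, and the DMA transfer are reduced to print-only stubs, and only the loop structure (steps S510 to S593) and the descending-resolution ordering of the source camera modules follow the embodiment.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_SOURCES 8

struct source_entry {
    int camera_module;
    int origin_x, origin_y;
    int dest_width, dest_height;
};

struct image_generation_params_141 {
    int num_sources;
    struct source_entry source[MAX_SOURCES];   /* assumed to be ordered large to small */
};

static void fetch_raw_image(int camera_module)                 /* step S530 */
{ printf("  S530: raw image from camera module 133_%d\n", camera_module); }

static void adjust_image(const struct source_entry *s)         /* step S550 */
{ printf("  S550: convert color space, resize to %dx%d\n", s->dest_width, s->dest_height); }

static void dma_store(const struct source_entry *s)            /* step S570 */
{ printf("  S570: DMA to configured space at origin (%d,%d)\n", s->origin_x, s->origin_y); }

static bool image_generation_enabled(int remaining_frames)     /* step S593 */
{ return remaining_frames > 0; }

void generate_final_images(const struct image_generation_params_141 *p, int frames)
{
    while (image_generation_enabled(frames--)) {         /* outer loop, refresh rate */
        printf("S510: read image generation parameters 141\n");
        for (int i = 0; i < p->num_sources; i++) {        /* inner loop, S530-S591 */
            fetch_raw_image(p->source[i].camera_module);
            adjust_image(&p->source[i]);
            dma_store(&p->source[i]);                     /* later writes overwrite earlier ones */
        }                                                 /* S591 "Yes": final image complete */
    }                                                     /* S593 "No": keep refreshing */
}

int main(void)
{
    struct image_generation_params_141 p = {
        .num_sources = 2,
        .source = { { 1, 0, 0, 640, 480 }, { 2, 100, 100, 240, 168 } },
    };
    generate_final_images(&p, 2);   /* two refresh cycles for the display unit 180_1 */
    return 0;
}
```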
With reference to the image generation parameters 141 in Table 1, the following describes how the final images used by the image-consuming units 180_1 to 180_3 are generated. In the first example, Fig. 6A is a schematic diagram of the final image used by the display unit 180_1 according to an embodiment of the present invention. For the display unit 180_1, the image processor 120 obtains a raw image from the camera module 133_1 (step S530), adjusts the raw image to a resolution of 640x480 (step S550), and stores the adjusted image 610_1, with a width of 640 and a height of 480, starting from the address Offset_1+(0,0) in the volatile memory 140, where Offset_1 represents the start address of the configured space 143_1 (step S570). Afterwards, the image processor 120 obtains a raw image from the camera module 133_2 (step S530), adjusts the raw image to a resolution of 240x168 (step S550), and stores the adjusted image 610_2, with a width of 240 and a height of 168, starting from the address Offset_1+(100,100) in the volatile memory 140, where the adjusted image 610_2 overwrites the data at the corresponding positions of the adjusted image 610_1 (step S570). In the second example, Fig. 6B is a schematic diagram of the final image used by the video encoding unit 180_2 according to an embodiment of the present invention. For the video encoding unit 180_2, the image processor 120 obtains a raw image from the camera module 133_2 (step S530), adjusts the raw image to a resolution of 1024x768 (step S550), and stores the adjusted image 620_1, with a width of 1024 and a height of 768, starting from the address Offset_2+(0,0) in the volatile memory 140, where Offset_2 represents the start address of the configured space 143_2 (step S570). Afterwards, the image processor 120 obtains a raw image from the camera module 133_m (step S530), adjusts the raw image to a resolution of 512x384 (step S550), and stores the adjusted image 620_2, with a width of 512 and a height of 384, starting from the address Offset_2+(512,384) in the volatile memory 140, where the adjusted image 620_2 overwrites the data at the corresponding positions of the adjusted image 620_1 (step S570). In the third example, Fig. 6C is a schematic diagram of the final image used by the still-image encoding unit 180_3 according to an embodiment of the present invention. For the still-image encoding unit 180_3, the image processor 120 obtains a raw image from the camera module 133_2 (step S530), adjusts the raw image to a resolution of 1024x768 (step S550), and stores the adjusted image 630_1, with a width of 1024 and a height of 768, starting from the address Offset_3+(0,0) in the volatile memory 140, where Offset_3 represents the start address of the configured space 143_3 (step S570). Afterwards, the image processor 120 obtains a raw image from the camera module 133_1 (step S530), adjusts the raw image to a resolution of 200x100 (step S550), and stores the adjusted image 630_2, with a width of 200 and a height of 100, starting from the address Offset_3+(0,0) in the volatile memory 140, where the adjusted image 630_2 overwrites the data at the corresponding positions of the adjusted image 630_1 (step S570).
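The addresses in these examples are written in the form Offset_n + (x, y). One way to read that notation, assuming a row-major buffer whose row pitch equals the width of the largest adjusted image and a fixed number of bytes per pixel, is sketched below; the actual memory layout used by the DMA controller 150 is not specified in this embodiment.

```c
#include <stdio.h>

/* Interpret Offset_n + (x, y) as a byte address within a row-major buffer
 * of the given row pitch (in pixels) and bytes per pixel. */
static unsigned long dma_address(unsigned long offset_n, int x, int y,
                                 int row_pitch, int bytes_per_pixel)
{
    return offset_n + (unsigned long)(y * row_pitch + x) * bytes_per_pixel;
}

int main(void)
{
    /* Adjusted image 610_2 (240x168) placed at Offset_1 + (100, 100) inside the
     * 640x480 adjusted image 610_1 of the display unit's configured space 143_1. */
    unsigned long offset_1 = 0x100000;  /* hypothetical start address of 143_1 */
    printf("first pixel of 610_2 lands at 0x%lx\n",
           dma_address(offset_1, 100, 100, 640, 2));
    return 0;
}
```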
Fig. 7 is a flowchart of a method for processing images sensed by multiple camera modules according to an embodiment of the present invention. Assume that one of the camera modules 133_1 to 133_m is a first camera module and another of the camera modules 133_1 to 133_m is a second camera module. The image processor 120 obtains a first image from the first camera module (step S711), adjusts the first image to a first resolution (step S713), and stores the first image of the first resolution in a configured space of the volatile memory 140 (step S715). The image processor 120 obtains a second image from the second camera module (step S731) and adjusts the second image to a second resolution, where the second resolution is lower than the first resolution (step S733). The image processor 120 stores the second image of the second resolution in the same configured space of the volatile memory 140, and the second image overwrites a part of the first image (step S735). To improve the efficiency of the apparatus, the processing unit 110 does not participate in the generation and storage of the first image and the second image described above.
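A minimal sketch of the Fig. 7 sequence is given below, assuming character-valued pixels, small illustrative resolutions, and an arbitrary overlay origin; only the order of steps S711 to S735 and the overwriting of part of the first image by the smaller second image follow the embodiment.

```c
#include <stdio.h>
#include <string.h>

#define W1 16   /* first resolution (assumed, for illustration)  */
#define H1 8
#define W2 6    /* second resolution, smaller than the first     */
#define H2 3

static char configured_space[H1][W1];   /* stands in for part of the volatile memory 140 */

int main(void)
{
    /* S711-S715: the first image fills the configured space at the first resolution. */
    memset(configured_space, '1', sizeof(configured_space));

    /* S731-S735: the second image, at the smaller second resolution, is written into
     * the same configured space at an assumed origin (2, 2), overwriting part of it. */
    for (int y = 0; y < H2; y++)
        memset(&configured_space[2 + y][2], '2', W2);

    for (int y = 0; y < H1; y++)
        printf("%.*s\n", W1, configured_space[y]);
    return 0;
}
```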
Although Fig. 1 contains the elements described above, the use of additional elements to achieve better technical effects is not precluded, provided that the spirit of the invention is not violated. In addition, although the processing steps of Figs. 2, 5, and 7 are executed in a specific order, those skilled in the art may, without violating the spirit of the invention and while achieving the same effect, modify the order of these steps; therefore, the invention is not limited to the order described above.
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any person skilled in the art may make further improvements and changes on this basis without departing from the spirit and scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the appended claims.