
CN105898158B - Data processing method and electronic device - Google Patents


Info

Publication number
CN105898158B
CN105898158B
Authority
CN
China
Prior art keywords: video, video data, data, sub, target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610270463.9A
Other languages
Chinese (zh)
Other versions
CN105898158A (en)
Inventor
刘林汶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201610270463.9A priority Critical patent/CN105898158B/en
Publication of CN105898158A publication Critical patent/CN105898158A/en
Priority to PCT/CN2016/113980 priority patent/WO2017185808A1/en
Application granted granted Critical
Publication of CN105898158B publication Critical patent/CN105898158B/en
Legal status: Active (current)


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; studio devices; studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)

Abstract

An embodiment of the present invention discloses a data processing method, comprising: an electronic device acquires, in real time, at least one piece of video data in an image acquisition region corresponding to at least one image acquisition device by using the at least one image acquisition device; processes at least part of the sub-data in the at least one piece of video data acquired in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two pieces of target sub-video data, the first video processing strategy being different from the second video processing strategy; and generates target video data based on the at least two pieces of target sub-video data, so that the target video data contains video data that can be presented in at least two different presentation manners. An embodiment of the present invention also discloses an electronic device.

Description

Data processing method and electronic device
Technical Field
The present invention relates to electronic technologies, and in particular, to a data processing method and an electronic device.
Background
Existing electronic devices provide more and more functions, and many of them have become standard configurations, for example the photographing function: a user can use it to take photos or record video. When a user records video, the electronic device generally processes the collected video data with a single video processing strategy and, when presenting it, shows it on the display screen in a single presentation manner. This single approach cannot meet the user's demand for diverse presentation manners and degrades the user experience.
Disclosure of Invention
In view of this, in order to solve the existing technical problems, embodiments of the present invention provide a data processing method and an electronic device, which at least solve the problems in the prior art, enrich the presentation of recorded video, and improve the user experience.
The technical solutions of the embodiments of the present invention are realized as follows:
a first aspect of an embodiment of the present invention provides a data processing method, including:
the electronic equipment utilizes at least one image acquisition device to acquire at least one piece of video data in real time in an image acquisition area corresponding to the at least one image acquisition device;
processing at least part of sub-data in the at least one video data acquired in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; the first video processing policy is different from the second video processing policy;
and generating target video data based on the at least two target sub-video data so that the target video data comprises video data capable of being presented in at least two different presentation modes.
In the foregoing solution, the processing at least part of sub-data in the at least one video data acquired in real time by using the first video processing policy includes:
reducing video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video characteristic parameters represent the number of video frames in unit time;
taking at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are reduced as first target sub-video data; wherein the first target sub-video data is included in the at least two target sub-video data; the first target sub-video data is capable of being presented in a first presentation manner.
In the foregoing solution, the processing at least part of the sub-data in the at least one video data acquired in real time by using the second video processing policy includes:
increasing video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video characteristic parameters represent the number of video frames in unit time;
taking at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are increased as second target sub-video data; wherein the second target sub-video data is included in the at least two target sub-video data; the second target sub-video data is capable of being presented in a second presentation manner.
In the above scheme, increasing the video characteristic parameters of at least part of the sub-data in the at least one piece of video data collected in real time includes:
determining video storage characteristic parameters based on the acquisition characteristic parameters; the video storage characteristic parameters represent the number of video frames to be stored in unit time;
and deleting video frames from the collected video data according to the determined video storage characteristic parameters, and increasing the video characteristic parameters of the video data collected in real time.
In the above scheme, the method further comprises:
storing the target video data;
presenting the at least two target sub-video data in different presentation manners in at least a first display area and a second display area of the electronic device based on user operation.
A second aspect of an embodiment of the present invention provides an electronic device, including:
the image acquisition unit is used for acquiring at least one piece of video data in real time in an image acquisition area corresponding to at least one image acquisition device by utilizing at least one image acquisition device;
the image processing unit is used for processing at least part of sub-data in the at least one video data acquired in real time by utilizing at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; the first video processing policy is different from the second video processing policy;
and the video data generating unit is used for generating target video data based on the at least two target sub-video data so as to enable the target video data to contain video data which can be presented in at least two different presentation modes.
In the above scheme, the image processing unit is further configured to reduce video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video characteristic parameters represent the number of video frames in unit time;
and is further configured to take at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are reduced as first target sub-video data; wherein the first target sub-video data is included in the at least two target sub-video data; the first target sub-video data is capable of being presented in a first presentation manner.
In the above scheme, the image processing unit is further configured to increase video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video characteristic parameters represent the number of video frames in unit time;
and is further configured to take at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are increased as second target sub-video data; wherein the second target sub-video data is included in the at least two target sub-video data; the second target sub-video data is capable of being presented in a second presentation manner.
In the above scheme, the image processing unit is further configured to determine video storage characteristic parameters based on the acquisition characteristic parameters; to delete video frames from the collected video data according to the determined video storage characteristic parameters and increase the video characteristic parameters of the video data collected in real time; the video storage characteristic parameters represent the number of video frames to be stored in unit time.
In the above scheme, the electronic device further includes a storage unit and a video display unit; wherein,
the storage unit is used for storing the target video data;
the video display unit is used for presenting the at least two target sub-video data in the target video data in different presentation modes in at least a first display area and a second display area of the electronic equipment based on user operation.
According to the data processing method and the electronic device of the embodiments of the present invention, at least one image acquisition device is used to acquire at least one piece of video data in real time in an image acquisition area corresponding to the at least one image acquisition device; at least a first video processing strategy and a second video processing strategy are used to process at least part of the sub-data in the at least one piece of video data acquired in real time to obtain at least two pieces of target sub-video data; and the target video data is then generated based on the at least two pieces of target sub-video data, so that the video data contained in the target video data can be presented in different presentation manners. The method therefore enriches the presentation of recorded video and improves the user experience; meanwhile, it also meets the user's demand for diverse presentation manners.
Drawings
Fig. 1 is a schematic hardware configuration diagram of an optional mobile terminal 100 implementing various embodiments of the present invention;
Fig. 2 is a diagram of a wireless communication system for the mobile terminal 100 shown in Fig. 1;
Fig. 3 is a schematic flow chart of an implementation of a data processing method according to an embodiment of the present invention;
Fig. 4 is a first schematic diagram of a specific application of the data processing method according to an embodiment of the present invention;
Fig. 5 is a second schematic diagram of a specific application of the data processing method according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
It should be understood that the embodiments described herein are only for explaining the technical solutions of the present invention, and are not intended to limit the scope of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, and the like, as well as stationary terminals such as a digital TV, a desktop computer, and the like. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals, except for elements particularly configured for mobile purposes.
Fig. 1 is a schematic hardware configuration of an alternative mobile terminal 100 for implementing various embodiments of the present invention, and as shown in fig. 1, the mobile terminal 100 may include a wireless communication unit 110, an audio/video (a/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 illustrates the mobile terminal 100 having various components, but it is to be understood that not all illustrated components are required to be implemented. More or fewer components may alternatively be implemented. The elements of the mobile terminal 100 will be described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information, or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to a terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast associated information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like. The broadcast receiving module 111 may receive signals broadcast by various types of broadcasting systems; in particular, it may receive digital broadcasts using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like. The broadcast receiving module 111 may be constructed to be suitable for various broadcasting systems that provide broadcast signals, in addition to the above-mentioned digital broadcasting systems. The broadcast signal and/or broadcast associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or other type of storage medium).
The mobile communication module 112 transmits and/or receives radio signals to and/or from at least one of a base station (e.g., access point, node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless internet module 113 supports wireless internet access of the mobile terminal 100. The wireless internet module 113 may be internally or externally coupled to the terminal. The wireless internet access technology referred to by the wireless internet module 113 may include Wireless Local Area Network (WLAN), wireless compatibility authentication (Wi-Fi), wireless broadband (Wibro), worldwide interoperability for microwave access (Wimax), High Speed Downlink Packet Access (HSDPA), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technologies include Bluetooth™, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or acquiring location information of the mobile terminal 100. A typical example of the location information module 115 is a Global Positioning System (GPS) module 115. According to the current technology, the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information according to longitude, latitude, and altitude. Currently, a method for calculating position and time information uses three satellites and corrects an error of the calculated position and time information by using another satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating current position information in real time.
The A/V input unit 120 is used to receive an audio or video signal. The A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by an image capturing apparatus in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 151. The image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided according to the construction of the mobile terminal 100. The microphone 122 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The user input unit 130 may generate key input data to control various operations of the mobile terminal 100 according to a command input by a user. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power or whether the interface unit 170 is coupled with an external device.
The interface unit 170 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port (a typical example is a Universal Serial Bus (USB) port), a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The identification module may store various information for authenticating a user using the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like. In addition, a device having an identification module (hereinafter, referred to as an "identification device") may take the form of a smart card, and thus, the identification device may be connected with the mobile terminal 100 via a port or other connection means.
The interface unit 170 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and the external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal 100. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal 100 is accurately mounted on the cradle.
The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, mobile terminal 100 may include two or more display units (or other display devices), for example, mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration; when a call, a message, or some other incoming communication is received, the alarm unit 153 may provide a tactile output (i.e., vibration) to inform the user thereof. By providing such a tactile output, the user can recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs or the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, etc.) that has been output or is to be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal 100. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing or playing back multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed to be separated from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The power supply unit 190 receives external power or internal power and provides appropriate power required to operate various elements and components under the control of the controller 180.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, the mobile terminal 100 has been described in terms of its functions. Hereinafter, the slide-type mobile terminal 100 among various types of mobile terminals 100, such as a folder-type, bar-type, swing-type, slide-type mobile terminal 100, and the like, will be described as an example for the sake of brevity. Accordingly, the present invention can be applied to any type of mobile terminal 100, and is not limited to the slide type mobile terminal 100.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which the mobile terminal 100 according to the present invention is capable of operating will now be described with reference to fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, the air interface used by the communication system includes, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), global system for mobile communications (GSM), and the like. By way of non-limiting example, the following description relates to a CDMA communication system, but such teachings are equally applicable to other types of systems.
Referring to fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of Base Stations (BSs) 270, Base Station Controllers (BSCs) 275, and a Mobile Switching Center (MSC) 280. The MSC 280 is configured to interface with a Public Switched Telephone Network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul lines. The backhaul lines may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be understood that a system as shown in fig. 2 may include multiple BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector being covered by an omni-directional antenna or an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support multiple frequency allocations, each frequency allocation having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency allocation may be referred to as a CDMA channel. The BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent term. In such a case, the term "base station" may be used to refer generically to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell". Alternatively, each sector of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in fig. 2, a Broadcast Transmitter (BT)295 transmits a broadcast signal to the mobile terminal 100 operating within the system. A broadcast receiving module 111 as shown in fig. 1 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295. In fig. 2, several satellites 300 are shown, for example, Global Positioning System (GPS) satellites 300 may be employed. The satellite 300 assists in locating at least one of the plurality of mobile terminals 100.
In fig. 2, a plurality of satellites 300 are depicted, but it is understood that useful positioning information may be obtained with any number of satellites. The GPS module 115 as shown in fig. 1 is generally configured to cooperate with satellites 300 to obtain desired positioning information. Other techniques that can track the location of the mobile terminal 100 may be used instead of or in addition to GPS tracking techniques. In addition, at least one GPS satellite 300 may selectively or additionally process satellite DMB transmission.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 are generally engaged in calls, messaging, and other types of communications. Each reverse link signal received by a particular BS 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSCs 275 also route the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC 280 interfaces with the BSCs 275, and the BSCs 275 accordingly control the BSs 270 to transmit forward link signals to the mobile terminals 100.
The mobile communication module 112 of the wireless communication unit 110 in the mobile terminal accesses the mobile communication network based on the necessary data (including the user identification information and the authentication information) of the mobile communication network (such as the mobile communication network of 2G/3G/4G, etc.) built in the mobile terminal, so as to transmit the mobile communication data (including the uplink mobile communication data and the downlink mobile communication data) for the services of web browsing, network multimedia playing, etc. of the mobile terminal user.
The wireless internet module 113 of the wireless communication unit 110 implements the function of a wireless hotspot by running the related protocol functions of the wireless hotspot. The wireless hotspot supports access by a plurality of mobile terminals (any mobile terminals other than this mobile terminal) and, by multiplexing the mobile communication connection between the mobile communication module 112 and the mobile communication network, transmits mobile communication data (including uplink and downlink mobile communication data) for services such as web browsing and network multimedia playing of the mobile terminal users. Since the mobile terminal essentially multiplexes its own mobile communication connection with the communication network to transmit this data, the mobile communication data traffic consumed is charged to the communication tariff of the mobile terminal by the charging entity on the communication network side, thereby consuming the data traffic of mobile communication data included in the communication tariff contracted for use by the mobile terminal.
Based on the above hardware structure of the mobile terminal 100 and the communication system, various embodiments of the method of the present invention are proposed.
Example one
The embodiment of the present invention provides a data processing method; specifically, fig. 3 is a schematic flow chart of an implementation of the data processing method according to the embodiment of the present invention. The method is applied to an electronic device, which may be the mobile terminal described above; the electronic device is provided with, or linked to, a display screen and at least one image acquisition device. As shown in fig. 3, the method includes:
step 301: the electronic equipment utilizes at least one image acquisition device to acquire at least one piece of video data in real time in an image acquisition area corresponding to the at least one image acquisition device;
In an embodiment, the electronic device may use one image acquisition device, for example a first camera, to capture video data in real time in the image acquisition area corresponding to the first camera; in this case only one piece of video data is captured. Further, the electronic device processes the same video data acquired by the first camera with at least a first video processing strategy and a second video processing strategy to obtain at least two pieces of target sub-video data, and finally generates the target video data based on the at least two pieces of target sub-video data; that is, the video data in different presentation manners contained in the target video data are data of the same video source.
Further, when the video data of different presentation manners included in the target video data are data of the same video source, the video data processed by the first video processing strategy and the second video processing strategy may be entirely the same, partially the same, or completely different. For example, if the video data acquired by the first camera is first video data, the electronic device may process all of the first video data with the first video processing strategy and the second video processing strategy, or only part of it; when the electronic device processes part of the first video data with the two strategies, the parts processed by the different strategies may be the same or different. In practical applications, the user may set the video processing strategies arbitrarily according to actual conditions and user requirements, for example by performing user operations in a viewing interface of the electronic device to select different (or the same) video processing strategies for the video data in different display areas.
Specifically, fig. 4 is a first schematic diagram of a specific application of the data processing method according to the embodiment of the present invention. As shown in fig. 4, while the electronic device presents the captured video data in the first display area, it receives a user operation on the display screen and pops up a dashed box as shown in the left diagram of fig. 4; the size of the dashed box may be enlarged or reduced by user operations such as dragging or stretching. After the user confirms the dashed box, the sub-video data corresponding to the dashed box is presented in a second display area of the electronic device in a small-screen manner. The electronic device may then process the video data in the first display area with the first video processing strategy while processing the video data in the second display area with the second video processing strategy, obtaining two pieces of target sub-video data: the first is normal-speed video data and the second is slow-motion video data. Target video data is then obtained based on the two pieces of target sub-video data, so that the target video data contains both normal-speed video data and slow-motion video data, which enriches the presentation and improves the user experience.
In another embodiment, the electronic device may utilize at least two image capturing devices, such as at least two second cameras, to capture video data in real time in image capturing areas corresponding to the at least two second cameras, where at least two captured video data are available; further, the electronic device processes different video data acquired by the at least two second cameras by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data, and finally generates the target video data based on the at least two target sub-video data, that is, the video data of different presentation modes contained in the target video data are data of different video sources.
Specifically, fig. 5 is a second schematic diagram of a specific application of the data processing method according to the embodiment of the present invention. As shown in fig. 5, the electronic device acquires first video data with a built-in first camera (not shown in fig. 5) and presents it in real time in a first display area, while it acquires second video data with an external second camera (not shown in fig. 5) and presents it in real time in a second display area. Further, the electronic device may, in real time and according to the user's "zoom in" or "zoom out" gesture in the first display area and the second display area, select a video processing strategy matching that gesture for the video data presented in each display area; for example, a first video processing strategy is used to process the first video data in the first display area, and a second video processing strategy is used to process the second video data in the second display area, so as to obtain at least two pieces of target sub-video data, from which the target video data is finally generated. Here, the first target sub-video data in the target video data (i.e., the first video data processed by the first video processing strategy) may be slow-motion video data, and the second target sub-video data (i.e., the second video data processed by the second video processing strategy) may be fast-motion video data.
It should be noted that, as those skilled in the art will appreciate, the "zoom in" or "zoom out" gesture described in this embodiment may be set arbitrarily according to actual requirements and may, for example, be the same as the image-scaling "zoom in" or "zoom out" gestures known in the prior art.
Step 302: processing at least part of sub-data in the at least one video data acquired in real time by using at least a first video processing strategy and a second video processing strategy to obtain at least two target sub-video data; the first video processing policy is different from the second video processing policy;
step 303: and generating target video data based on the at least two target sub-video data so that the target video data comprises video data capable of being presented in at least two different presentation modes.
In this embodiment, the different presentation manners may specifically mean that the presented video data have different video characteristic parameters; here, the video characteristic parameter specifically represents the number of video frames in unit time. That is, the different presentation manners may differ in presentation speed, for example a slow-motion manner, a fast-motion manner, or a normal-speed manner.
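As an informal illustration of steps 301 to 303, the following sketch is not part of the claimed method; the 30-frames-per-second base rate, the speed factors, and names such as apply_policy are assumptions introduced only for illustration. It pictures the flow as applying two different retiming strategies to the same captured frames and packaging the results into one target video:

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class SubVideo:
    frames: List[Any]   # the selected sub-data (video frames)
    fps: float          # video characteristic parameter: frames per unit time

def apply_policy(frames: List[Any], base_fps: float, speed_factor: float) -> SubVideo:
    """One illustrative 'video processing strategy': keep the frames but
    change the frames-per-unit-time parameter, which changes how fast
    the sub-video is presented."""
    return SubVideo(frames=list(frames), fps=base_fps * speed_factor)

def generate_target_video(captured_frames: List[Any], base_fps: float = 30.0) -> dict:
    # Step 302: two different strategies applied to (parts of) the same capture.
    first_target = apply_policy(captured_frames, base_fps, speed_factor=0.5)   # slow-motion
    second_target = apply_policy(captured_frames, base_fps, speed_factor=2.0)  # fast-motion
    # Step 303: one target video containing both target sub-videos, each
    # presentable in its own presentation manner.
    return {"sub_videos": [first_target, second_target]}
```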
In this way, the data processing method of the embodiment of the present invention acquires at least one piece of video data in real time in an image acquisition area corresponding to at least one image acquisition device by using the at least one image acquisition device, processes at least part of the sub-data in the at least one piece of video data acquired in real time with at least a first video processing strategy and a second video processing strategy to obtain at least two pieces of target sub-video data, and then generates the target video data based on the at least two pieces of target sub-video data, so that the video data contained in the target video data can be presented in different presentation manners. The method of the embodiment of the present invention therefore enriches the presentation of recorded video and improves the user experience; meanwhile, it also meets the user's demand for diverse presentation manners.
Example two
Based on the method described in the first embodiment, the embodiment of the present invention provides three specific video processing strategies. In one video acquisition process, the three strategies can be used simultaneously, or any two of them can be selected, which lays the foundation for enriching the presentation manners. Further, the video processing strategies described in this embodiment may be preset, or may be set at any time according to user operations; for example, a video characteristic parameter input by the user is obtained through a user interaction interface and used to determine the video processing strategy, or the video characteristic parameter matching the area size of a display area selected by the user is looked up in a preset relation list (i.e., a list representing the correspondence between display-area size and video characteristic parameter) to determine the video processing strategy. As those skilled in the art will recognize, the embodiment of the present invention aims to emphasize that the same video data, or different video data, is processed by different video processing strategies and that the target video data is generated from the processed video data, so that the target video data contains video data presentable in different presentation manners; the setting process of the video processing strategies described above is therefore only illustrative and is not intended to limit the present invention.
The first video processing strategy. Specifically:
the electronic device reduces the video characteristic parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time, the video characteristic parameter representing the number of video frames in unit time. That is, the electronic device reduces the number of video frames expected to be displayed in unit time, which lengthens the display interval between adjacent video frames and realizes a slow-motion processing strategy. Further, the electronic device takes at least part of the sub-data in the at least one piece of video data whose video characteristic parameter has been reduced as first target sub-video data; the first target sub-video data is included in the at least two target sub-video data and may be presented in a first presentation manner, for example a slow-motion presentation manner.
The second video processing strategy. Specifically:
the electronic device increases the video characteristic parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time, the video characteristic parameter representing the number of video frames in unit time. That is, the electronic device increases the number of video frames expected to be displayed in unit time, which shortens the display interval between adjacent video frames and realizes a fast-motion processing strategy. Further, the electronic device takes at least part of the sub-data in the at least one piece of video data whose video characteristic parameter has been increased as second target sub-video data; the second target sub-video data is included in the at least two target sub-video data and can be presented in a second presentation manner, for example a fast-motion presentation manner.
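The effect of both strategies can be made concrete with a small timestamp calculation; this is a sketch under the assumption of a 30 fps capture rate, with illustrative adjusted rates:

```python
def display_timestamps(n_frames: int, adjusted_fps: float) -> list:
    """Frame i is presented at i / adjusted_fps seconds, so the display
    interval between adjacent frames is 1 / adjusted_fps.
    adjusted_fps below the capture rate -> longer intervals (slow motion);
    adjusted_fps above the capture rate -> shorter intervals (fast motion)."""
    return [round(i / adjusted_fps, 4) for i in range(n_frames)]

# Three frames captured at 30 fps (capture interval 1/30 s, about 0.0333 s):
print(display_timestamps(3, adjusted_fps=15.0))  # [0.0, 0.0667, 0.1333] -> first strategy, slow
print(display_timestamps(3, adjusted_fps=60.0))  # [0.0, 0.0167, 0.0333] -> second strategy, fast
```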
In a specific embodiment, because of the temporal masking effect of the human eye, that is, the human eye is often unable to perceive the details of a fast-moving object, this embodiment reduces the data amount of the video data by dropping frames, thereby saving storage space. Specifically, the electronic device determines a video storage characteristic parameter based on an acquisition characteristic parameter (such as the video acquisition duration); the video storage characteristic parameter represents the number of video frames to be stored in unit time. Video frames are then deleted from the collected video data according to the determined video storage characteristic parameter, and the video characteristic parameter of the video data collected in real time is increased, thereby reducing the data amount of the target video data. As the video acquisition duration grows, the number of frames removed by frame dropping increases. For example, when video acquisition starts, the number of video frames selected per unit time from the currently acquired video data as target video data is N, where N is a positive integer greater than or equal to 1 and less than or equal to the number of video frames per unit time in normal video, such as 20 or 30. As the acquisition continues, for example when the video acquisition duration exceeds a first threshold, say ten minutes, the number of video frames selected per unit time from the currently acquired video data as target video data becomes M, where M is a positive integer greater than or equal to 1 and less than N. That is, once the video acquisition duration exceeds a certain threshold, fewer video frames per unit time are selected from the currently acquired video data as target video data, which further saves storage space. Here, the unselected video frames are the discarded video frames; a sketch of this policy follows.
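The following is a minimal sketch of the duration-dependent frame dropping described above. N = 20 and the ten-minute threshold follow the examples in the text; M = 10 and the helper names are assumptions introduced for illustration:

```python
def stored_frames_per_unit_time(elapsed_s: float,
                                n_early: int = 20,      # N: frames kept per second at the start
                                m_late: int = 10,       # M < N: assumed value for later capture
                                threshold_s: float = 600.0) -> int:
    """Video storage characteristic parameter: the number of frames per
    unit time that are actually stored; it shrinks once the capture
    duration exceeds the first threshold (e.g. ten minutes)."""
    return n_early if elapsed_s <= threshold_s else m_late

def select_frames(frames: list, capture_fps: int, elapsed_s: float) -> list:
    """Keep approximately `keep` frames per captured second; the
    unselected frames are discarded, relying on the temporal masking
    of the human eye."""
    keep = stored_frames_per_unit_time(elapsed_s)
    out, last = [], -1
    for i, frame in enumerate(frames):
        target = i * keep // capture_fps   # advances `keep` times per `capture_fps` frames
        if target != last:
            out.append(frame)
            last = target
    return out
```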
The third video processing strategy may simply be an existing common video processing strategy and is not described here again.
Therefore, the embodiment of the present invention can process the same or different collected video data with the three video processing strategies above, obtaining multiple pieces of video data that can be presented in different presentation manners. Moreover, the three strategies can be combined in pairs at will, or all three can be used within one video acquisition process, which meets the user's demand for diverse presentation manners, enriches the presentation, and improves the user experience.
Example three
Based on the method according to the first embodiment or the second embodiment, in this embodiment, after the electronic device generates the target video data, the electronic device stores the target video data, and presents the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device based on user operation.
Here, in practical applications, the first display area and the second display area may be preset areas or display areas determined according to user operations; further, when the first display area and the second display area are display areas determined according to user operation, the area sizes of the first display area and the second display area may be arbitrarily set according to the user operation.
In an embodiment, the electronic device may further determine the presentation speed of the video data presented in a region according to the region size of the display region; that is, the region size of the display region corresponds to the video processing strategy. For example, video data presented in a larger display area corresponds to a normal video processing strategy, and video data presented in a smaller display area corresponds to a fast-motion or slow-motion video processing strategy, as sketched below.
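The correspondence between display-area size and video processing strategy could be held in a preset relation list, as the following sketch assumes; the breakpoints and speed factors are illustrative and are not specified by the embodiment:

```python
# Preset relation list: (minimum display-area fraction of the screen, speed factor).
# A larger display area maps to the normal strategy; smaller areas map to
# slow-motion or fast-motion strategies (the exact mapping is an assumption).
AREA_POLICY_LIST = [
    (0.5, 1.0),  # large area  -> normal video processing strategy
    (0.2, 0.5),  # medium area -> slow-motion strategy
    (0.0, 2.0),  # small area  -> fast-motion strategy
]

def policy_for_area(area_fraction: float) -> float:
    """Return the speed factor matched to the size of the display area."""
    for min_fraction, speed_factor in AREA_POLICY_LIST:
        if area_fraction >= min_fraction:
            return speed_factor
    return 1.0  # fallback: normal speed
```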
Therefore, presenting video data in different presentation manners in different presentation areas enriches the user's perceptual experience; meanwhile, since the presentation areas can be determined by user operation, the user's sense of participation is increased and the user's desire for control is satisfied, further improving the user experience.
It is to be noted that the functions implemented by the methods in the first to third embodiments may be implemented by a processor in an electronic device calling program code, and the program code may be stored in a computer storage medium.
Example four
Based on the method described in any one of the first to third embodiments, the embodiment of the present invention provides an electronic device; specifically, as shown in fig. 6, the electronic apparatus includes:
the image acquisition unit 61 is used for acquiring at least one piece of video data in real time in an image acquisition area corresponding to at least one image acquisition device by utilizing the at least one image acquisition device;
the image processing unit 62 is configured to process at least part of sub-data in the at least one piece of video data acquired in real time by using at least a first video processing policy and a second video processing policy to obtain at least two pieces of target sub-video data; the first video processing policy is different from the second video processing policy;
a video data generating unit 63, configured to generate target video data based on the at least two target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners.
In an embodiment, the image processing unit is further configured to reduce video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time, the video characteristic parameters representing the number of video frames in unit time; and is further configured to take at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are reduced as first target sub-video data; wherein the first target sub-video data is included in the at least two target sub-video data; the first target sub-video data is capable of being presented in a first presentation manner.
In another embodiment, the image processing unit is further configured to increase video characteristic parameters corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time, the video characteristic parameters representing the number of video frames in unit time; and is further configured to take at least part of the sub-data in the at least one piece of video data after the video characteristic parameters are increased as second target sub-video data; wherein the second target sub-video data is included in the at least two target sub-video data; the second target sub-video data is capable of being presented in a second presentation manner.
In another embodiment, the image processing unit is further configured to determine a video storage feature parameter based on an acquisition feature parameter, and to delete, according to the determined video storage feature parameter, video frames from the video data acquired in real time at the increased video feature parameter; the video storage feature parameter represents the number of video frames to be stored per unit time.
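A rough frame-level reading of these strategies, treating a piece of video data as a plain list of frames (keep_every, the fps values, and both helper names are illustrative assumptions, not the patented implementation):

    # Sketch only: frame dropping as one possible realization.
    def reduce_frame_rate(frames: list, keep_every: int = 2) -> list:
        """First-strategy sketch: lower the frames-per-unit-time parameter by
        keeping every keep_every-th frame; played back at the original rate,
        the clip appears sped up."""
        return frames[::keep_every]

    def apply_storage_parameter(frames: list, captured_fps: int,
                                stored_fps: int) -> list:
        """Second-strategy sketch: derive a storage parameter from the
        acquisition parameter and delete surplus frames so that only
        stored_fps frames per second of capture time are kept."""
        if stored_fps >= captured_fps:
            return frames
        step = captured_fps / stored_fps
        return [frames[int(i * step)] for i in range(int(len(frames) / step))]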
In a specific embodiment, the electronic device further comprises a storage unit and a video display unit; wherein,
the storage unit is used for storing the target video data;
the video display unit is configured to present, based on a user operation, the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
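A hedged sketch of this dual-region playback follows; draw stands in for whatever rendering call the device actually exposes, and the timing loop is an assumption for illustration only:

    import time

    def present(sub_a: list, sub_b: list, fps_a: int, fps_b: int, draw) -> None:
        """Advance two non-empty target sub-videos independently: sub_a in the
        first display area at fps_a frames/s, sub_b in the second at fps_b."""
        t0 = time.monotonic()
        while True:
            elapsed = time.monotonic() - t0
            ia, ib = int(elapsed * fps_a), int(elapsed * fps_b)
            if ia >= len(sub_a) and ib >= len(sub_b):
                break  # both sub-videos have finished
            draw(area=1, frame=sub_a[min(ia, len(sub_a) - 1)])
            draw(area=2, frame=sub_b[min(ib, len(sub_b) - 1)])
            time.sleep(1.0 / max(fps_a, fps_b))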
Here, it should be noted that the above description of the electronic device embodiment is similar to the description of the method embodiments, and the electronic device has the same beneficial effects as the method embodiments, so the description is not repeated here. For technical details not disclosed in this electronic device embodiment, please refer to the description of the method embodiments of the present invention; for brevity, they are not detailed again.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention; thus, the appearances of "in one embodiment" or "in an embodiment" in various places throughout this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present invention, the sequence numbers of the above processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention. The serial numbers of the embodiments of the present invention are merely for description and do not represent the relative merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted or not implemented. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be performed by hardware related to program instructions; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a Read-Only Memory (ROM), a magnetic disk, or an optical disc.
Alternatively, when the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a magnetic disk, or an optical disc.
The above description covers only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (8)

1. An electronic device, characterized in that the electronic device comprises:
an image acquisition unit, configured to acquire, in real time and by means of at least one image acquisition device, at least one piece of video data in the image acquisition area corresponding to the at least one image acquisition device;
an image processing unit, configured to process, using at least a first video processing strategy and a second video processing strategy, at least part of the sub-data in the at least one piece of video data acquired in real time, to obtain at least two target sub-video data; the first video processing strategy is different from the second video processing strategy;
a video data generating unit, configured to generate target video data based on the at least two target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners, wherein the different presentation manners correspond to different presentation speeds;
wherein the image processing unit is further configured to determine a video storage feature parameter based on an acquisition feature parameter, and to delete, according to the determined video storage feature parameter, video frames from the acquired video data; the video storage feature parameter represents the number of video frames to be stored per unit time.
2. The electronic device according to claim 1, wherein the image processing unit is further configured to reduce the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video feature parameter represents the number of video frames per unit time;
and is further configured to take, as first target sub-video data, the at least part of the sub-data in the at least one piece of video data whose video feature parameter has been reduced; wherein the first target sub-video data is included in the at least two target sub-video data; the first target sub-video data can be presented in a first presentation manner.
3. The electronic device according to claim 1, wherein the image processing unit is further configured to increase the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video feature parameter represents the number of video frames per unit time;
the video feature parameter is adjusted to be at least partial sub-data in the at least one video data as second target sub-video data; wherein the second target sub video data is included in the at least two target sub video data; the second target sub-video data is capable of being presented in a second presentation.
4. The electronic device according to claim 2 or 3, characterized in that the electronic device further comprises a storage unit and a video display unit; wherein,
the storage unit is used for storing the target video data;
the video display unit is configured to present, based on a user operation, the at least two target sub-video data in the target video data in different presentation manners in at least a first display area and a second display area of the electronic device.
5. A method of data processing, the method comprising:
an electronic device acquires, in real time and by means of at least one image acquisition device, at least one piece of video data in the image acquisition area corresponding to the at least one image acquisition device;
processing, using at least a first video processing strategy and a second video processing strategy, at least part of the sub-data in the at least one piece of video data acquired in real time, to obtain at least two target sub-video data; the first video processing strategy is different from the second video processing strategy;
generating target video data based on the at least two target sub-video data, so that the target video data includes video data that can be presented in at least two different presentation manners, wherein the different presentation manners correspond to different presentation speeds;
the processing, by using the second video processing policy, at least part of sub-data in the at least one video data acquired in real time includes:
determining a video storage feature parameter based on an acquisition feature parameter; the video storage feature parameter represents the number of video frames to be stored per unit time;
and deleting, according to the determined video storage feature parameter, video frames from the acquired video data.
6. The method according to claim 5, wherein processing, using the first video processing strategy, at least part of the sub-data in the at least one piece of video data acquired in real time comprises:
reducing the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video feature parameter represents the number of video frames per unit time;
taking, as first target sub-video data, the at least part of the sub-data in the at least one piece of video data whose video feature parameter has been reduced; wherein the first target sub-video data is included in the at least two target sub-video data; the first target sub-video data can be presented in a first presentation manner.
7. The method according to claim 5, wherein processing, using the second video processing strategy, at least part of the sub-data in the at least one piece of video data acquired in real time comprises:
increasing the video feature parameter corresponding to at least part of the sub-data in the at least one piece of video data acquired in real time; the video feature parameter represents the number of video frames per unit time;
taking, as second target sub-video data, the at least part of the sub-data in the at least one piece of video data whose video feature parameter has been increased; wherein the second target sub-video data is included in the at least two target sub-video data; the second target sub-video data can be presented in a second presentation manner.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
storing the target video data;
presenting, based on a user operation, the at least two target sub-video data in different presentation manners in at least a first display area and a second display area of the electronic device.
CN201610270463.9A 2016-04-27 2016-04-27 A kind of data processing method and electronic equipment Active CN105898158B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610270463.9A CN105898158B (en) 2016-04-27 2016-04-27 A kind of data processing method and electronic equipment
PCT/CN2016/113980 WO2017185808A1 (en) 2016-04-27 2016-12-30 Data processing method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610270463.9A CN105898158B (en) 2016-04-27 2016-04-27 A kind of data processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN105898158A CN105898158A (en) 2016-08-24
CN105898158B true CN105898158B (en) 2019-08-16

Family

ID=56701852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610270463.9A Active CN105898158B (en) 2016-04-27 2016-04-27 A kind of data processing method and electronic equipment

Country Status (2)

Country Link
CN (1) CN105898158B (en)
WO (1) WO2017185808A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105898158B (en) * 2016-04-27 2019-08-16 努比亚技术有限公司 A kind of data processing method and electronic equipment
CN113079336A (en) * 2020-01-03 2021-07-06 深圳市春盛海科技有限公司 High-speed image recording method and device
CN112199987A (en) * 2020-08-26 2021-01-08 北京贝思科技术有限公司 Multi-algorithm combined configuration strategy method in single area, image processing device and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611108A (en) * 2015-12-18 2016-05-25 联想(北京)有限公司 Information processing method and electronic equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8072482B2 (en) * 2006-11-09 2011-12-06 Innovative Signal Anlysis Imaging system having a rotatable image-directing device
JP5155279B2 (en) * 2009-10-29 2013-03-06 株式会社日立製作所 Centralized monitoring system and centralized monitoring method using multiple surveillance cameras
CN103926785B (en) * 2014-04-30 2017-11-03 广州视源电子科技股份有限公司 Method and device for realizing double cameras
JP6396682B2 (en) * 2014-05-30 2018-09-26 株式会社日立国際電気 Surveillance camera system
JP6323183B2 (en) * 2014-06-04 2018-05-16 ソニー株式会社 Image processing apparatus and image processing method
CN105208422A (en) * 2014-06-26 2015-12-30 联想(北京)有限公司 Information processing method and electronic device
CN104967802B (en) * 2015-04-29 2019-07-19 努比亚技术有限公司 The method for recording and device of mobile terminal and its plurality of regions on screen
CN105898158B (en) * 2016-04-27 2019-08-16 努比亚技术有限公司 A kind of data processing method and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611108A (en) * 2015-12-18 2016-05-25 联想(北京)有限公司 Information processing method and electronic equipment

Also Published As

Publication number Publication date
CN105898158A (en) 2016-08-24
WO2017185808A1 (en) 2017-11-02


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant