
HK1197943A - Touch event model programming interface

Touch event model programming interface

Info

Publication number
HK1197943A
Authority
HK
Hong Kong
Prior art keywords
touch
lists
touches
list
event
Application number
HK14111484.4A
Other languages
Chinese (zh)
Other versions
HK1197943B (en)
Inventors
R. Williamson
G. D. Bolsinga
T. Omernick
Original Assignee
Apple Inc.
Application filed by Apple Inc.
Publication of HK1197943A (en)
Publication of HK1197943B (en)

Abstract

The invention provides a touch event model programming interface. One or more touch input signals can be obtained from a touch sensitive device. A touch event model can be used to determine touch and/or gesture events based on the touch input signals. The touch and gesture events can be associated with touch input signals generated from different regions of a web page displayed on the touch sensitive device. Access can be provided to at least one touch or gesture event through a programming interface.

Description

Touch event model programming interface
RELATED APPLICATIONS
This application is a divisional of the invention patent application with international filing date 2009-03 and international application number PCT/US2009/035874, which entered the Chinese national phase on 2009-04-02 with national application number 200980000013.6.
Technical Field
The present subject matter relates generally to web browsing services.
Background
Web pages are created using a markup language that provides a means for describing the structure of text-based information in a document and for supplementing that text with interactive forms, embedded images, and other objects. One popular markup language is the HyperText Markup Language (HTML), which is written in the form of tags surrounded by angle brackets. HTML can describe the appearance and semantics of a web page, and can include embedded scripting language code (e.g., JavaScript) that can affect the behavior of web browsers and other HTML processors. Scripting languages give developers the ability to add mouse event handlers (handlers) or event listeners (listeners) to a web page. These mouse event handlers can be assigned to specific regions of the web page and can be configured to receive mouse events in those regions, such as mouse up events or mouse down events.
In contrast, web pages that are navigated with a touch-sensitive device often need to respond to touch events generated by a user touching the web page with one or more fingers and making gestures. Conventional mouse event handlers cannot properly interpret these touch events. A different touch event model is therefore needed to properly interpret touch events and to allow developers to take full advantage of the capabilities of a touch-sensitive display or device.
Disclosure of Invention
One or more touch input signals may be acquired from a touch sensitive device. From these touch input signals, touch and/or gesture events may be determined using a touch event model. These touch and gesture events may be associated with touch input signals generated from different regions of a web page displayed on the touch sensitive device. Access to at least one touch or gesture event may be provided through a programming interface.
In some implementations, a method in a web browser includes: receiving a rotation value associated with the gesture event; and dynamically rotating the web page element associated with the gesture event in the web browser according to the rotation value, wherein the rotation value is a relative increment in degrees. The gesture event may include two or more touch events.
In some implementations, a method in a web browser includes: receiving a zoom value associated with a gesture event; dynamically resizing, in the web browser, a web page element associated with the gesture event based on the zoom value, wherein the zoom value is a relative increment in units of document pixels. The gesture event may be associated with two or more touch events.
In some implementations, a method in a web browser includes: a touch list is received, the touch list including touch data for identifying one or more touches on a web page, wherein the touch data includes a touch identifier and at least one set of touch position coordinates, wherein the touch list further includes data for referencing a touch event target associated with each touch, wherein the at least one set of touch position coordinates includes a set of client coordinates, a set of page coordinates, and a set of screen coordinates. The touch data may identify one or more changed touches.
Other embodiments related to systems, methods, and computer-readable media are also disclosed herein.
Drawings
FIGS. 1A and 1B illustrate exemplary web page documents.
FIG. 2 illustrates an exemplary processing stack of a multi-touch capable device.
FIG. 3 is a flow diagram of an exemplary process for processing touch events.
FIG. 4 illustrates an exemplary multi-touch capable device.
FIG. 5 is a block diagram of an exemplary network operating environment for the multi-touch capable device of FIG. 4.
FIG. 6 is a block diagram of an example implementation of the multi-touch capable device of FIG. 4.
Detailed Description
Example Web Page Structure and DOM
FIG. 1A shows an exemplary web page 100 that may be displayed in a browser. The browser may be hosted on a portable device, such as the multi-touch capable device 400 of FIG. 4. One or more elements, namely element 102 ("element 1"), element 104 ("element 2"), and element 106 ("element 3"), may be displayed on the web page 100. These elements 102, 104, 106 may correspond to regions of the web page 100 that the user can select, and additional functionality may be provided as a result of the selection. The elements may correspond to, for example, buttons on the web page 100. Furthermore, elements may be nested, so that one element contains another element. For example, element 104 contains element 108. In the example shown, element 108 is, for example, a scrubber control nested inside element 104, and element 104 may be, for example, the user interface of a media player.
In some implementations, the user can use fingers, instead of a mouse, to perform various functions on the elements of the web page 100. For example, a user may touch an element of the web page 100 using the touch-sensitive display 402 shown in FIG. 4. In one example, a user may select an element by touching it with one or more fingers and/or by making a gesture, such as a swipe, pinch, or rotate motion. To recognize touch input signals, certain regions of the web page 100 may be associated with touch event handlers. This may be implemented with the DOM and an embedded scripting language, as described with reference to FIG. 1B.
FIG. 1B is an exemplary DOM 150 associated with the web page 100. The DOM 150 provides a structural representation of the web page 100, describing the web page content as a set of objects that a scripting language (e.g., JavaScript) can interpret. In some embodiments, the DOM 150 provides access to the web page structure by mapping the elements 102, 104, 106, 108 in the web page 100 to individual nodes of a tree. For example, element 102 corresponds to node 154. Element 104 corresponds to node 156. Element 106 corresponds to node 158. Element 108 corresponds to node 160. The root node 152 corresponds to the web page 100 as a whole.
In some implementations, one or more elements 102, 104, 106, 108 in the web page 100 can be associated with one or more respective touch event handlers by associating the corresponding nodes in the DOM 150 with the touch event handlers. A touch event handler can be inserted into an HTML tag of the web page 100 and can run a scripting language to perform an action when, for example, the user touches or gestures within one of the elements on the web page 100. For example, JavaScript can work with the DOM 150 to attach actions to different touch events.
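As a minimal sketch of this pattern (the element ID "element2" and the handler body are illustrative assumptions, not taken from this patent), a script can locate a DOM node and attach a touch event handler to it:
var element2 = document.getElementById('element2'); // hypothetical element ID
// Associate a touch event handler with this region of the web page; the
// browser's touch event model delivers touchstart events that begin inside it.
element2.addEventListener('touchstart', function (event) {
  var touch = event.touches[0]; // coordinates of the first active touch
  console.log('touch started at page coordinates', touch.pageX, touch.pageY);
}, false);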
In some implementations, one or more elements 102, 104, 106, 108 can receive touch input that is detected by an event handler or listener. As described with reference to FIG. 2, the touch input may be detected and processed into touch events by a touch event model, which may be implemented in one or more layers of a software stack. The touch events may be further processed by the web page 100. The touch events may be in a format (e.g., attributes) that is easier to use in an application than the raw touch input signals generated by the touch-sensitive device. For example, each touch event may include a set of coordinates at which the touch is currently occurring.
Each element of the Web page 100 and its associated event handler can receive, process, and manipulate touch events. For example, if driver 202 (FIG. 2) senses touch point 110 associated with element 102 or touch point 112 associated with element 104, an event handler associated with element 102 or 104 may receive a separate touch event indicating that the element has been touched and may optionally send the touch event to web page 100 for further processing. In some implementations, if the area of the web page 100 touched does not correspond to an event handler, the input may be processed by a browser in the application layer 214, rather than the web page 100.
In some implementations, touch events can be detected in the DOM 150 finger-by-finger, on a node-by-node basis. For example, a user may touch the touch-sensitive display 402 at touch point 110 and touch point 112 at substantially the same time, and the touch event model may detect two separate touch events. Because the nodes for elements 102 and 104 in the DOM 150 are each associated with a separate touch event handler, a separate touch event can be detected for touch point 110 and for touch point 112.
In some implementations, touch events can be delivered to the web page 100 as an EventTarget (event target). Some examples of touch events include touch start (touchstart), touch move (touchmove), touch end (touchend), and touch cancel (touchcancel); other touch events are possible. A touch start event is detected when a user first places a finger on the touch-sensitive display 402 within a region of the web page 100 that is associated with an event handler. As the user moves a finger around on the web page 100, one or more touch move events may be detected. When the user lifts a finger off the web page 100, a touch end event is detected. A touch cancel event may be detected when the system interrupts normal event processing. For example, a touch cancel event may occur when the touch-sensitive display 402 is locked to prevent inadvertent touches.
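A minimal sketch of listening for all four touch event types on one element follows (the element ID and the logging are illustrative assumptions):
var el = document.getElementById('element2'); // hypothetical element ID
['touchstart', 'touchmove', 'touchend', 'touchcancel'].forEach(function (type) {
  el.addEventListener(type, function (event) {
    // event.type identifies the phase of the touch lifecycle.
    console.log(event.type, '-', event.touches.length, 'active touch(es)');
  }, false);
});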
In some implementations, gesture events can also be detected, by combining two or more touch events. Like touch events, gesture events (GestureEvents) may be delivered to the web page 100 as an event target (EventTarget). Some examples of gesture events are gesture start (gesturestart), gesture change (gesturechange), and gesture end (gestureend); other gesture events are possible. A gesture event may include scaling and/or rotation information. The rotation information may include a rotation value that is a relative increment in degrees, and an element on the web page 100 may be dynamically rotated according to the rotation value. The scaling information may include a scaling value that is a relative increment in units of document pixels, and an element on the web page 100 associated with the gesture event may be dynamically resized according to the scaling value.
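For example, a gesture change handler along the following lines could dynamically rotate and resize an element. This is a sketch assuming WebKit-style gesture events, in which event.rotation is in degrees and event.scale is a multiplicative zoom factor; the element ID is hypothetical:
var photo = document.getElementById('photo'); // hypothetical target element
photo.addEventListener('gesturechange', function (event) {
  event.preventDefault();
  // Apply the incoming rotation and scaling values to the element.
  photo.style.webkitTransform =
      'rotate(' + event.rotation + 'deg) scale(' + event.scale + ')';
}, false);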
In some implementations, a touch list can be received that includes touch event data identifying one or more touches on the web page 100. The touch event data may include a touch identifier and at least one set of touch location coordinates, and the touch list may also include touch event data referencing the touch event target associated with each touch. In some embodiments, the set of touch location coordinates may include client coordinates, page coordinates, and screen coordinates. In some implementations, the touch event data can identify one or more changed touches.
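Inside a handler, reading such a touch list might look like the following sketch (the property names follow the touch interfaces described here; the element variable and logging are illustrative):
var el2 = document.getElementById('element2'); // hypothetical element ID
el2.addEventListener('touchmove', function (event) {
  // event.changedTouches lists only the touches that changed in this event.
  for (var i = 0; i < event.changedTouches.length; i++) {
    var t = event.changedTouches[i];
    console.log('touch', t.identifier,
                'client:', t.clientX, t.clientY,
                'page:', t.pageX, t.pageY,
                'screen:', t.screenX, t.screenY);
  }
}, false);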
In some implementations, a GestureEvent may be sent to the web page 100 before the TouchEvent. For example, if a user places fingers on touch point 110 and touch point 112 and then uses the fingers to make a clockwise or counterclockwise rotation gesture on the touch-sensitive display, the touch event model detects the multiple touch events and combines the touch events into one gesture event. The gesture event may then be sent to the web page 100, followed by the touch events that are combined to form the gesture event. In this way, the developer may access the gesture events and each individual touch event in the gesture events, thereby providing greater flexibility to the developer in developing the web application.
In some embodiments, touch events are received in the following order: a touch start event, one or more touch move events, and a touch end or touch cancel event. Using the example of FIG. 1A, when the user contacts touch point 110, a first touch event handler associated with element 102 detects a first touch start event. When the user contacts touch point 112, a second touch event handler associated with element 104 detects a second touch start event. As the user rotates their fingers without lifting them, the first and second touch event handlers detect touch move events, which the touch event model may interpret as a rotation gesture event. When the user finishes rotating and lifts their fingers off the web page 100, the first and second touch event handlers detect touch end events. All or some of these touch events may be made available to developers through a touch event Application Programming Interface (API). The touch API may be made available to developers as a Software Development Kit (SDK) or as part of an application (e.g., as part of a browser toolkit). The touch event API may rely on other services, frameworks, and operating systems to perform its various functions. As described with reference to FIG. 2, these services, frameworks, and operating systems may be part of a software or processing stack, where the touch events are associated with attributes that can be inserted into a document to define event actions in an application.
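The following sketch (illustrative only; not the patent's internal gesture-recognition code) shows how two tracked touch points moving between touchstart and touchend could be interpreted as a rotation:
var startAngle = null;

function angleBetween(touches) {
  // Angle, in degrees, of the line connecting the first two touches.
  var dx = touches[1].pageX - touches[0].pageX;
  var dy = touches[1].pageY - touches[0].pageY;
  return Math.atan2(dy, dx) * 180 / Math.PI;
}

document.addEventListener('touchstart', function (event) {
  if (event.touches.length === 2) { startAngle = angleBetween(event.touches); }
}, false);

document.addEventListener('touchmove', function (event) {
  if (startAngle !== null && event.touches.length === 2) {
    // A relative increment in degrees, as described above.
    var rotation = angleBetween(event.touches) - startAngle;
    console.log('rotation so far:', rotation, 'degrees');
  }
}, false);

document.addEventListener('touchend', function (event) {
  if (event.touches.length < 2) { startAngle = null; } // gesture has ended
}, false);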
Exemplary IDL
An exemplary touch event model will now be described in an Interface Description Language (IDL). The functionality and data structures of the IDL may be accessed through an API by a web designer or application developer. Access to touch events and/or gesture events may be associated with attributes that can be inserted into a markup language document (e.g., HTML, XML) to define event actions in an application. For example, the attributes may be inserted into one or more HTML tags of an HTML document to generate a web page that is displayed on the touch-sensitive display 402. The event actions may include running an embedded script (e.g., JavaScript).
The following is an example of an HTML code fragment that handles touch events using the exemplary IDL above. For example, the following HTML adds the touch event listeners touchstart and gesturestart to an element:
this.element.addEventListener('touchstart', function(e) { return self.onTouchStart(e) }, false);
this.element.addEventListener('gesturestart', function(e) { return self.onGestureStart(e) }, false);
The HTML code corresponding to the above IDL might be structured as in the following minimal sketch, in which the constructor name Toucher, the element ID, and the handler bodies are illustrative assumptions:
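<!DOCTYPE html>
<html>
<head>
<script>
// "Toucher" and the element ID "target" are hypothetical names used for
// illustration; the handler bodies are likewise illustrative.
function Toucher(id) {
  this.element = document.getElementById(id);
  var self = this;
  this.element.addEventListener('touchstart', function (e) { return self.onTouchStart(e); }, false);
  this.element.addEventListener('gesturestart', function (e) { return self.onGestureStart(e); }, false);
}
Toucher.prototype.onTouchStart = function (e) {
  e.preventDefault(); // take over touch handling for this element
  console.log('touchstart with', e.touches.length, 'touch(es)');
};
Toucher.prototype.onGestureStart = function (e) {
  console.log('gesture starting');
};
window.onload = function () { new Toucher('target'); };
</script>
</head>
<body>
<div id="target">Touch me</div>
</body>
</html>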
Exemplary Processing Stack for a Multi-touch Device
FIG. 2 is a diagram of the processing stack of an exemplary multi-touch capable device. The touch event model described above may be implemented in one or more layers of the processing stack and may use various resources in the stack. The hardware layer 200 may include various hardware interface elements, such as a touch-sensitive or touch-enabled device or a touch-sensitive display. The touch-sensitive device may include a display and a panel that senses multiple touches simultaneously. The hardware layer 200 may also include an accelerometer for detecting the orientation (e.g., portrait, landscape) of the touch-sensitive display or device. A signal indicating the orientation can thus be used by the touch event model to scale the web page for optimal display.
One or more drivers in the driver layer 202 may communicate with the hardware 200. For example, the drivers may receive and process touch input signals generated by a touch-sensitive display or device in the hardware layer 200. A core operating system (OS) 204 may communicate with the one or more drivers. The core OS 204 may process raw input data received from the drivers. In some embodiments, the drivers may be considered part of the core OS 204.
A set of OS application programming interfaces (APIs) 206 may communicate with the core OS 204. These APIs may be the set of APIs normally included with an operating system (e.g., Linux or UNIX APIs). A set of core foundation APIs 208 may use the OS APIs 206, and a set of foundation APIs 210 may use the core foundation APIs 208.
The Web page Software Development Kit (SDK) 210 may include a set of APIs designed for use by applications running on the device. The touch event API may, for example, be included in the Web page SDK 210. The APIs of the Web page SDK 210 can utilize the foundation APIs 208. The Web page SDK 210 may include, for example, WebKit (a Web toolkit). The Web page SDK 210 may be provided as an API, or may be made available through an application, such as a Web browser.
An application 214 running on the device can utilize the APIs of the Web page SDK 210 to create web pages. The APIs of the Web page SDK 210 may, in turn, communicate with lower-level elements, ultimately communicating with the touch-sensitive display or device and various other user interface hardware. While each layer may utilize the layer below it, that is not always required. For example, in some embodiments, the application 214 may occasionally communicate directly with the OS APIs 206.
Exemplary touch event processing
FIG. 3 is a flow diagram of a process 300 for providing access to touch and/or gesture events through an API. The process 300 may begin by obtaining one or more touch input signals (302). The touch input signals may be obtained from a touch-sensitive display or device. Using the touch event model, touch and/or gesture events may be determined from the touch input signals (304). The touch events may be associated with regions of a web page displayed on the touch-sensitive display or device. For example, the touch-sensitive display may be the display of a mobile phone, and the touch-sensitive device may be the touch-sensitive pad of a notebook computer. Access to the touch events and/or gesture events may be provided through a programming interface (306). For example, the HTML fragment shown above may be inserted into an HTML document by a web developer to provide the developer with access to touch and/or gesture events. The touch events and/or gesture events may be further processed by code in the HTML document to initiate event actions (306).
Overview of Mobile devices
FIG. 4 is a block diagram of an exemplary multi-touch capable device 400. In some implementations, the multi-touch capable device 400 includes a touch-sensitive display 402. The touch-sensitive display 402 may implement liquid crystal display (LCD) technology, light emitting polymer display (LPD) technology, or some other display technology. The touch-sensitive display 402 may be sensitive to haptic and/or tactile contact with a user.
In some implementations, the touch-sensitive display 402 can be a multi-touch-sensitive display. For example, the touch-sensitive display 402 can process multiple simultaneous touch points, including processing data related to the pressure, angle, and/or position of each touch point. Such processing facilitates gestures and interactions using multiple fingers, chording, and other interactions. Other touch-sensitive display technologies may also be used, such as displays on which contact is made using a stylus or other pointing device. Some examples of multi-touch-sensitive display technology are described in U.S. Pat. Nos. 6,323,846, 6,570,557, and 6,677,932, and in U.S. Patent Publication 2002/0015024A1, each of which is incorporated herein by reference in its entirety. In some implementations, the multi-touch capable device 400 can display one or more graphical user interfaces on the touch-sensitive display 402 to provide the user with access to various system objects and to convey information to the user.
Exemplary functionality of a device with Multi-touch capability
In some implementations, the multi-touch capable device 400 can implement the functionality of multiple devices, such as a telephone device, an email device, a network data communication device, a Wi-Fi base station device, and a media processing device. In some implementations, the multi-touch capable device 400 can include a web browser 404 for displaying web pages (e.g., the web page 100). The touch-sensitive display 402 can receive touch input signals generated on the web page 100, and the touch model described above can be used to determine touch and/or gesture events from the touch input signals. In some implementations, the multi-touch capable device 400 can implement network distribution functionality. In some implementations, the touch-sensitive display 402 can be locked down when the multi-touch capable device 400 is close to the user's ear. This lockdown would cause a touch cancel event, as described with reference to FIG. 1B.
In some implementations, accelerometer 472 may be used to detect movement of multi-touch capable device 400, as indicated by directional arrow 474. Accordingly, the display objects and/or media may be presented according to a detected orientation, e.g., portrait or landscape. In some implementations, the multi-touch capable device 400 may include circuitry and sensors to support position determination capabilities, such as those provided by a Global Positioning System (GPS) or other positioning system (e.g., systems using Wi-Fi access points, television signals, cellular grids, Uniform Resource Locators (URLs)). In some embodiments, a positioning system (e.g., a GPS receiver) may be integrated into the multi-touch capable device 400 or provided as a stand-alone device that can be coupled to the multi-touch capable device 400 through an interface to provide access to location-based services. Such a multi-touch capable device 400 may also include one or more wireless communication subsystems.
In some embodiments, a port device such as a Universal Serial Bus (USB) port or docking port, or some other wired port connection may be included. The port device, for example, may be used to establish a wired connection with other computing devices, such as other multi-touch capable devices 400, network access devices, personal computers, printers, or other processing devices capable of receiving and/or transmitting data. In some embodiments, the port device allows the multi-touch capable device 400 to synchronize with the host device using one or more protocols, such as TCP/IP, HTTP, UDP, and any other known protocols.
Network operating environment
FIG. 5 is a block diagram of an exemplary network operating environment 500 for the multi-touch capable device 400 of FIG. 4. The multi-touch capable device 400 of FIG. 4 can communicate over one or more wired and/or wireless networks 510. For example, a wireless network 512, e.g., a cellular network, can communicate with a wide area network (WAN) 514, such as the Internet, by use of a gateway 516. Likewise, an access point 518, such as an 802.11g wireless access point, can provide communication access to the wide area network 514. In some embodiments, both voice and data communications can be established over the wireless network 512 and the access point 518. For example, the multi-touch capable device 400a can place and receive phone calls (e.g., using VoIP protocols), send and receive email messages (e.g., using POP3 protocols), and retrieve electronic documents and/or streams, such as web pages, photographs, and videos, over the wireless network 512, the gateway 516, and the wide area network 514 (e.g., using TCP/IP or UDP protocols). Likewise, the multi-touch capable device 400b can place and receive phone calls, send and receive email messages, and retrieve electronic documents over the access point 518 and the wide area network 514. In some implementations, the multi-touch capable device 400 can be physically connected to the access point 518 using one or more cables, and the access point 518 can be a personal computer. In this configuration, the multi-touch capable device 400 can be referred to as a "tethered" device.
The multi-touch capable devices 400a and 400b can also establish communications by other means. For example, the multi-touch capable device 400a can communicate with other wireless devices, e.g., other multi-touch capable devices 400, cell phones, etc., over the wireless network 512. Likewise, the multi-touch capable devices 400a and 400b can establish peer-to-peer communications 520, e.g., a personal area network, by use of one or more communication subsystems, such as the Bluetooth™ communication device 488 shown in FIG. 4. Other communication protocols and topologies can also be implemented.
The multi-touch capable device 400 can, for example, communicate with a network resource 530 over the one or more wired and/or wireless networks 510. For example, the network resource can be a web server for delivering web pages that can be touched via the touch model, as described with reference to FIGS. 1-2.
Other services can also be provided, including a software update service that automatically determines whether software updates exist for the software on the multi-touch capable device 400 and then downloads the software updates to the multi-touch capable device 400, where the updates can be manually or automatically unpacked and/or installed.
Exemplary Mobile device architecture
FIG. 6 is a block diagram 600 of an example implementation of the multi-touch capable device 400 of FIG. 4. The multi-touch capable device 400 can include a memory interface 602, one or more data processors, image processors and/or central processing units 604, and a peripheral interface 606. The memory interface 602, the one or more processors 604, and/or the peripherals interface 606 can be discrete components or can be integrated in one or more integrated circuits. In a multi-touch capable device 400, the various elements may be coupled by one or more communication buses or signal lines.
Sensors, devices, and subsystems can be coupled to peripherals interface 606 to facilitate multiple functions. For example, motion sensor 610, light sensor 612, and proximity sensor 614 may be coupled to peripherals interface 606 to facilitate the orientation, lighting, and proximity functions described with respect to FIG. 4. Other sensors 616 may also be coupled to the peripheral interface 606, such as a positioning system (e.g., a GPS receiver), a temperature sensor, a biometric sensor, or other sensing device, to facilitate related functions.
Camera subsystem 620 and optical sensor 622, which may be, for example, a charge-coupled device (CCD) or a complementary metal-oxide semiconductor (CMOS) optical sensor, may be used to facilitate implementation of camera functions such as recording photographs and video clips.
Communication functions can be facilitated through one or more wireless communication subsystems 624, which can include radio frequency receivers and transmitters and/or optical (e.g., infrared) receivers and transmitters. The specific design and implementation of the communication subsystem 624 can depend on the communication network(s) over which the multi-touch capable device 400 is intended to operate. For example, the multi-touch capable device 400 may include communication subsystems 624 designed to operate over a GSM network, a GPRS network, an EDGE network, a Wi-Fi or WiMax network, and a Bluetooth™ network. In particular, the wireless communication subsystems 624 may include hosting protocols such that the device 400 may be configured as a base station for other wireless devices.
An audio subsystem 626 may be coupled to a speaker 628 and a microphone 630 to facilitate voice-enabled functions, such as voice recognition, voice replication, digital recording, and telephony functions.
The I/O subsystem 640 can include a touch screen controller 642 and/or one or more other input controllers 644. The touch screen controller 642 can be coupled to a touch screen 646. The touch screen 646 and touch screen controller 642 can, for example, detect contact and movement, or a break thereof, using any of multiple touch sensitivity technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with the touch screen 646.
One or more other input controllers 644 may be coupled to other input/control devices 648, such as one or more buttons, rocker switches, thumbwheels, infrared ports, USB ports, and/or pointing devices such as styluses. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the speaker 628 and/or the microphone 630.
In one embodiment, pressing a button for a first duration may disengage a lock of the touch screen 646, and pressing the button for a second duration that is longer than the first duration may turn power to the multi-touch capable device 400 on or off. The user may be able to customize the functionality of one or more of the buttons. The touch screen 646 can, for example, also be used to implement virtual or soft buttons and/or a numeric keypad or keyboard.
In some implementations, the multi-touch capable device 400 can present recorded audio and/or video files, such as MP3, AAC, and MPEG files. In some embodiments, the multi-touch capable device 400 can include the functionality of an MP3 player, such as an iPod™. The multi-touch capable device 400 may therefore include a 32-pin connector that is compatible with the iPod. Other input/output and control devices can also be used.
The memory interface 602 may be coupled with a memory 650. The memory 650 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g., NAND, NOR). The memory 650 may store an operating system 652, such as Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks. The operating system 652 may include instructions for handling basic system services and for performing hardware dependent tasks.
Memory 650 may also store communication instructions 654 to facilitate communications with one or more additional devices, one or more computers, and/or one or more servers. The memory 650 may include graphical user interface instructions 656 for facilitating graphical user interface processing; sensor processing instructions 658 to facilitate sensor-related processing and functions; phone instructions 660 to facilitate phone-related processes and functions; electronic messaging instructions 662 for facilitating electronic messaging-related processes and functions; web browsing instructions 664 to facilitate web browsing-related processes and functions; media processing instructions 666 to facilitate media processing-related processes and functions; GPS/navigation instructions 668 for facilitating GPS and navigation related processes and functions; camera instructions 670 for facilitating camera-related processes and functions; and/or other messaging instructions 672 to facilitate processes and functions as described with reference to FIGS. 1-5.
Each of the instructions and applications identified above may correspond to a set of instructions for performing one or more functions described above. These instructions need not be implemented as separate software programs, procedures or modules. Memory 650 may include additional instructions or fewer instructions. Further, various functions of the multi-touch capable device 400 can be implemented in hardware and/or software, including in one or more signal processing and/or application specific integrated circuits.
The features described may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. These features can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; additionally, method steps may be performed by a programmable processor executing a program of instructions to perform functions of the described embodiments by performing operations on input data and generating output.
Advantageously, the described features can be implemented in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages (e.g., Objective-C, Java), and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Processors suitable for the execution of a program of instructions include, by way of illustration, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores in any kind of computer. Generally, a processor will receive instructions and data from a read-only memory, a random access memory, or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Typically, a computer also includes, or is operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and an optical disc. Suitable storage devices tangibly embodying computer program instructions and data include any form of non-volatile memory, examples of which include: semiconductor memory devices such as EPROM, EEPROM, and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and memory may be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
These features may be implemented in a computer system that includes a back-end element, such as a data server, or that includes a middleware element, such as an application server or an Internet server, or that includes a front-end element, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The system elements may be connected by any form of digital data communications media, such as a communications network. Examples of communication networks include LANs, WANs, and the computers and networks forming the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Various embodiments have been described herein. It will be understood that various modifications are possible. For example, further embodiments may be constructed by combining, deleting, modifying or supplementing elements of one or more embodiments. As another example, the logic flows depicted in the figures do not necessarily require the particular order or sequence shown to achieve desirable results. In addition, other steps may be provided, steps may be eliminated, from the described flows, and other elements may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.

Claims (14)

1. A method for use at an electronic device with a touch-sensitive display, the method comprising:
receiving a hypertext markup language (HTML) document, the HTML document including an embedded script; and
rendering and displaying the HTML document and executing the embedded script at the electronic device, including:
accessing a touch event, the touch event comprising a plurality of touch lists; and
after detecting one or more touches on the touch-sensitive display:
identifying one or more changed touches using touch data included in a touch list of the plurality of touch lists; and
further executing the embedded script according to the touch data in the touch list.
2. The method of claim 1, wherein the touch event is for a first respective display area of the HTML document, the plurality of touch lists includes a first touch list and a target touch list, the first touch list lists all touches on the touch-sensitive display, and the target touch list lists only touches in the first respective display area of the HTML document.
3. The method of claim 1, wherein the touch event comprises a first touch event for a first respective display area of the HTML document, and
The method further comprises the following steps:
accessing a second touch event for a second corresponding display area of the HTML document, the second touch event comprising a second plurality of touch lists; and
after detecting the one or more touches:
updating the second touch event with touch data in two or more touch lists of the second plurality of touch lists; and
further executing the embedded script according to the touch data in at least one of the second plurality of touch lists,
wherein the second plurality of touch lists includes a second touch list and a second target touch list, the second touch list listing all touches on the touch-sensitive display, and the second target touch list listing only touches in the second corresponding display area of the HTML document.
4. The method according to any one of claims 1-3, comprising:
accessing the touch event as a touch event corresponding to a respective element of the HTML document according to instructions in the embedded script corresponding to the respective element, the touch event including the plurality of touch lists; and
after detecting one or more touches, updating the touch event with touch data in two or more of the touch lists.
5. The method of any of claims 1-3, wherein the plurality of touch lists in the touch event includes a changed touches list for identifying one or more changed touches, the method further comprising:
after detecting the change in at least one of the one or more touches:
updating the changed touches list in the touch event with touch data identifying at least one changed touch; and
further executing the embedded script according to the updated changed touches list.
6. The method of any of claims 1-3, wherein the plurality of touch lists in the touch event includes a target touch list for identifying one or more touches to a target, the method further comprising:
after detecting a change to at least one of the one or more touches of the target:
updating the target touch list in the touch event with touch data identifying at least one touch to the target; and
further executing the embedded script according to the updated target touch list.
7. The method of any of claims 1-3, further comprising: updating two or more touch lists of the plurality of touch lists after detecting a change in at least one of the one or more touches.
8. An apparatus for use at an electronic device with a touch-sensitive display, the apparatus comprising:
means for receiving a hypertext markup language (HTML) document, the HTML document including an embedded script; and
means for rendering and displaying the HTML document and executing the embedded script at the electronic device, comprising:
means for accessing a touch event, the touch event comprising a plurality of touch lists; and
means, enabled after detecting one or more touches on the touch-sensitive display, comprising:
means for identifying one or more changed touches using touch data included in a touch list of the plurality of touch lists; and
means for further executing the embedded script according to the touch data in the touch list.
9. The device of claim 8, wherein the touch event is for a first respective display area of the HTML document, the plurality of touch lists includes a first touch list and a target touch list, the first touch list lists all touches on the touch-sensitive display, and the target touch list lists only touches in the first respective display area of the HTML document.
10. The apparatus of claim 8, wherein the touch event comprises a first touch event for a first respective display area of the HTML document, and
The apparatus further comprises:
means for accessing a second touch event for a second respective display area of the HTML document, the second touch event comprising a second plurality of touch lists; and
means, enabled after detecting the one or more touches, comprising:
means for updating the second touch event with touch data in two or more touch lists of the second plurality of touch lists; and
means for executing the embedded script further according to the touch data in at least one of the second plurality of touch lists,
wherein the second plurality of touch lists includes a second touch list and a second target touch list, the second touch list listing all touches on the touch-sensitive display, and the second target touch list listing only touches in the second corresponding display area of the HTML document.
11. The apparatus according to any one of claims 8-10, comprising:
means, enabled according to instructions in the embedded script that correspond to a respective element of the HTML document, for accessing the touch event as a touch event corresponding to the respective element, the touch event comprising the plurality of touch lists; and
means, enabled after detecting one or more touches, for updating the touch event with touch data in two or more of the touch lists.
12. The apparatus of any of claims 8-10, wherein the plurality of touch lists in the touch event includes a changed touches list for identifying one or more changed touches, and the apparatus comprises:
means, enabled after detecting a change in at least one of the one or more touches, comprising:
means for updating the changed touches list in the touch event with touch data identifying at least one changed touch; and
means for further executing the embedded script according to the updated changed touches list.
13. The apparatus of any of claims 8-10, wherein the plurality of touch lists in the touch event comprises a target touch list for identifying one or more touches to a target, and the apparatus comprises:
means, enabled after detecting a change to at least one of the one or more touches of the target, comprising:
means for updating the target touch list in the touch event with touch data identifying at least one touch to the target; and
means for further executing the embedded script according to the updated target touch list.
14. The apparatus according to any one of claims 8-10, comprising:
means, enabled after detecting a change in at least one of the one or more touches, for updating two or more touch lists of the plurality of touch lists.
HK14111484.4A 2008-03-04 2014-11-13 Touch event model programming interface HK1197943B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/042,299 2008-03-04

Publications (2)

Publication Number Publication Date
HK1197943A (en) 2015-02-27
HK1197943B (en) 2019-01-25
