
HK1189289A - Virtual surface compaction - Google Patents

Virtual surface compaction

Info

Publication number: HK1189289A
Application number: HK14102287.2A
Authority: HK (Hong Kong)
Prior art keywords: composition system, computing device, composition, update, active
Other languages: Chinese (zh)
Inventors: Reiner Fink, Leonardo E. Blanco, Cenk Ergan, Joshua Warren Priestley, Silvana Patricia Moncayo, Blake D. Pelton
Original assignee: Microsoft Technology Licensing, LLC
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1189289A

Description

Virtual surface compaction
Background
The variety of computing device configurations is continuously increasing. From a conventional desktop personal computer to a mobile phone, game console, set-top box, tablet computer, etc., the functionality that can be derived from each of these configurations can vary greatly.
Thus, conventional display technology developed for one configuration may not necessarily be well suited for another configuration. For example, previous display technologies for devices with large amounts of memory resources may not be suitable for less resourced devices.
Disclosure of Invention
Virtual surface techniques are described herein. These techniques include initialization and batch processing in support of updates, the use of update and lookaside lists, the use of gutters, blending and BLT operations, surface optimization techniques such as push-down and enumeration, clustering, the use of meshes, and occlusion management techniques.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Drawings
The detailed description is described with reference to the accompanying drawings. In the drawings, one or more digits on the far left side of a reference number identify the drawing in which the reference number first appears. The use of the same reference numbers in different instances in the description and the figures may indicate similar or identical items.
FIG. 1 is an illustration of an environment in an example implementation that is operable to perform the virtual surface techniques described herein.
FIG. 2 depicts an example implementation of resizing a virtual surface.
FIG. 3 depicts an example implementation showing interaction between an application and a logical surface of a virtual surface.
FIG. 4 depicts an example implementation showing the composition system of FIG. 1 in greater detail.
FIG. 5 illustrates an example implementation of operation of a composition system to initialize a virtual surface.
FIG. 6 depicts an example implementation showing a surface prepared for updating by a composition system.
FIG. 7 depicts an example implementation of operation of a composition system using the lookaside list of FIG. 6.
FIG. 8 depicts an example implementation showing operation of a composition system using gutters.
FIG. 9 depicts an example implementation showing active area management implemented by a composition system.
FIG. 10 depicts an example implementation showing operation of a composition system to combine surfaces using a push-down technique.
FIG. 11 depicts an example implementation showing operation of a composition system to combine active areas into a new surface.
FIG. 12 depicts an example implementation showing operation of a composition system using a mesh.
FIG. 13 depicts an example implementation showing operation of a composition system in relation to occlusion.
FIG. 14 is a flow diagram depicting a procedure in an example implementation in which a size is allocated for a surface in which data is to be rendered.
FIG. 15 is a flow diagram depicting a procedure in an example implementation in which a composition system tracks an active area.
FIG. 16 is a flow diagram depicting a procedure in an example implementation in which a lookaside list is used to manage surfaces.
FIG. 17 is a flow diagram depicting a procedure in an example implementation in which a surface is resized based on occlusion.
FIG. 18 is a flow diagram depicting a procedure in an example implementation that includes a compaction technique that pushes an active area down from one surface to another surface.
FIG. 19 is a flow diagram depicting a procedure in an example implementation that includes a compaction technique that combines active areas into a new surface.
FIG. 20 is a flow diagram depicting a procedure in an example implementation in which a composition system invokes a driver to render a surface using a mesh.
FIG. 21 illustrates an example system including various components of an example device that can be implemented as any type of computing device as described with reference to FIGS. 1-20 to implement embodiments of the techniques described herein.
Detailed Description
Overview
Virtual surfaces can be used to assign and manage surfaces for rendering visual material (visuals). For example, virtual surfaces may be used to overcome hardware limitations, such as managing rendering for a web page that is larger than the memory allocated by the hardware for visual rendering, examples of which are managing large web pages, immersive applications, and so forth.
Virtual surface composition and update techniques are described herein. In one or more implementations, techniques are described for managing surfaces for rendering. This includes techniques to support initialization and batch processing of updates, described further in conjunction with FIGS. 4 and 5; the use of update and lookaside lists, described in conjunction with FIGS. 6 and 7; the use of gutters, described in conjunction with FIG. 8; blending and BLT operations, described in conjunction with FIG. 9; surface optimization techniques such as the push-down described in conjunction with FIG. 10 and the enumeration and clumping described in conjunction with FIG. 11; the mesh use described in conjunction with FIG. 12; and the occlusion management technique described in conjunction with FIG. 13.
In the discussion that follows, an example environment is first described that is operable to perform the virtual surface techniques described herein. Example procedures are then described, which may operate in the example environment as well as in other environments. Accordingly, performance of the example procedures is not limited to the example environment, and the example environment is not limited to performance of the example procedures.
Example Environment
FIG. 1 illustrates an operating environment in accordance with one or more embodiments generally at 100. The environment 100 includes a computing device 102 having a processing system 104, where the processing system 104 may include one or more processors, an example of computer-readable storage media illustrated as memory 106, an operating system 108, and one or more applications 110. Computing device 102 may be implemented as any suitable computing device, such as, but not limited to, a desktop computer, a portable computer, a handheld computer such as a Personal Digital Assistant (PDA), a mobile phone, a tablet computer, and so forth. Different examples of computing device 102 are shown and described below in FIG. 21.
The computing device 102 also includes an operating system 108 that is illustrated as running on the processing system 104 and may be stored in the memory 106. The computing device 102 also includes an application 110 that is illustrated as being stored in the memory 106 and that may also be run on the processing system 104. The operating system 108 represents functionality of the computing device 102 that may abstract underlying hardware and software resources for use by the applications 110. For example, the operating system 108 may abstract how data is displayed on the display device 112, such that the application 110 does not have to "know" how such display is implemented. A variety of other examples are also contemplated, such as abstracting resources of the processing system 104 and memory 106 of the computing device 102, network resources, and so forth.
Computing device 102 is also illustrated as including a composition system 114. Although shown as part of the operating system 108, the composition system 114 may be implemented in a number of ways, such as a stand-alone module, as a separate application, as part of the computing device 102 own hardware (e.g., an SOC or ASIC), and so forth. The composition system 114 may use a variety of techniques to render visual material, such as by way of one or more Application Programming Interfaces (APIs) 116 to expose functionality available to the application 110 for rendering visual material.
For example, one such technique may be based on an object named a swap chain, where the object may utilize an array of buffers representing bitmaps. By way of example, one of the buffers may be used at any one time for rendering data on the display device 112, and thus may be referred to as an "on-screen buffer" or "front buffer". The other buffers may then be made available to the application 110 to perform rasterization off screen, and thus may be referred to as "off-screen buffers" or "back buffers".
The application 110 may change the content displayed on the display device 112 in a variety of ways. In a first such technique, the application 110 may redraw one of the back buffers and "flip" the content, e.g., using a pointer to make one of the off-screen buffers a screen buffer, and vice versa.
In a second such technique, different size buffers may be used. For example, the composition system 114 may use the first buffer as a screen buffer. The composition system 114 may also use a second buffer that is smaller than the first buffer as an off-screen buffer. Thus, the update may be rasterized to the second buffer when content is to be updated. The update may then be copied to a screen buffer, for example using BLT. In this way, resources of the computing device 102 may be conserved.
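The blt-style update in the second technique above can be modeled with plain 2D arrays. The sketch below is illustrative only: buffer names, the `blt` helper, and the pixel values are assumptions for the example, not the actual composition system API.

```python
# Sketch of the blt-style update described above: a small off-screen
# buffer is rasterized, then copied (BLT) into the larger on-screen
# buffer at an update offset. Buffers are modeled as 2D lists of
# pixel values; names are illustrative, not a real API.

def blt(src, dst, dst_x, dst_y):
    """Copy every pixel of src into dst at offset (dst_x, dst_y)."""
    for row in range(len(src)):
        for col in range(len(src[0])):
            dst[dst_y + row][dst_x + col] = src[row][col]

# 8x8 on-screen buffer, initially blank.
screen = [[0] * 8 for _ in range(8)]

# Rasterize the update into a smaller 2x3 off-screen buffer ...
update = [[7, 7, 7], [7, 7, 7]]

# ... then copy it into the on-screen buffer, conserving memory
# versus re-rendering a full-size back buffer.
blt(update, screen, dst_x=4, dst_y=1)
```

Because only the small second buffer is rasterized, the memory savings described above follow directly from its size relative to the front buffer.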
The composition system 114 may also be configured to support virtual surface techniques. These techniques may help developers of the application 110 reduce the resources of the computing device 102 used for rendering visuals. This may include using a virtual surface 118, which enables the application 110 to split a visual's surface into tiles and then render those tiles ahead of time. Other implementations that do not divide the surface into tiles (e.g., surfaces sized by the application 110) are also contemplated, as described further below.
The virtual surface 118 may be configured as a collection of one or more logical surfaces 120. A logical surface 120 represents an individual surface as seen by the application 110 and may be associated with one or more visuals. For example, the logical surfaces 120 may be configured as fixed-size tiles arranged in a fixed grid, although it will be readily apparent that numerous other instances are also contemplated in which fixed-size tiles are not used. For example, the size of a tile may be dictated by the application that wishes to render the visual; in that case the size is set by the application itself, and such an allocation is also referred to as a "chunk" in the following discussion.
The virtual surface 118 may be used to represent a larger area than the area represented by the texture. For example, the application 110 may specify the virtual texture size at creation time. The size establishes the boundary of the virtual surface 118. The surface may be associated with one or more visuals. In one or more embodiments, the virtual surface is not supported by the actual allocation when the surface is initially initialized. In other words, the virtual surface 118 may not "hold bits" at initialization, but it may hold bits at some later point in time, such as when an allocation is performed.
In the following discussion, visual material may refer to a basic composition element. For example, the visual material may contain a bitmap and associated composition metadata for processing by the composition system 114. The bitmap of the visual may be associated with a swap chain (e.g., for dynamic content such as video) or an atlas surface (e.g., for semi-dynamic content). Both presentation models can be supported in a single visual data tree supported by composition system 114.
For semi-dynamic content, the atlas may serve as an update model for the bitmap of the visual and may refer to an aggregate layer that may contain multiple layers to be rendered; however, a single layer is also contemplated. Visuals and their property operations (e.g., offset, transform, effect, etc.) and methods for updating an atlas-based bitmap of a visual (BeginDraw, SuspendDraw, ResumeDraw, EndDraw) are exposed via the application programming interface 116, whereas the packing, compaction, and management of atlas layer sizes, tile sizes, and bitmap updates may be hidden from the application 110.
A swap chain refers to a series of buffers that can be "flipped" to the screen in succession, for example by changing pointers. Accordingly, flip mode is a mode that makes an off-screen buffer the on-screen buffer by using the swap chain technique, e.g., by swapping pointers between the off-screen and on-screen buffers. Blt mode, in contrast, refers to a technique in which the runtime of the composition system 114 issues a "blt" (e.g., a bit block image transfer) from the off-screen buffer to the on-screen buffer, where the blt may be used to update the on-screen buffer.
As previously described, in one or more embodiments, the virtual surface 118 is not supported by actual allocation at the time it is initially initialized. In other words, it does not "own any bits". Once the application 110 begins to update the surface, the composition system 114 may perform tile (i.e., composite surface object) assignment. The application 110 may update the virtual surface 118 via a variety of operations, such as begin draw, suspend draw, resume draw, and end draw API calls for the respective operations. In one or more embodiments, the mapping may be determined by an internal algorithm of the composition system 114 and not seen by the application 110.
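The lazy allocation described above (a virtual surface that "owns no bits" until the application draws into it) can be sketched as follows. The class, the `begin_draw` method, and the 64-pixel tile size are illustrative assumptions for this example, not the actual API or mapping algorithm, which the text notes is internal to the composition system.

```python
# Minimal sketch of lazy tile allocation: the virtual surface holds
# no backing memory until the application draws into it, at which
# point only the tiles the update rectangle touches are allocated.
TILE = 64  # assumed fixed tile size in pixels

class VirtualSurface:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles = {}  # (tx, ty) -> backing allocation, created on demand

    def begin_draw(self, x, y, w, h):
        """Allocate backing tiles covering the update rectangle."""
        for ty in range(y // TILE, (y + h - 1) // TILE + 1):
            for tx in range(x // TILE, (x + w - 1) // TILE + 1):
                self.tiles.setdefault((tx, ty), bytearray(TILE * TILE))

surface = VirtualSurface(1024, 1024)   # no allocation at initialization
assert len(surface.tiles) == 0
surface.begin_draw(0, 0, 100, 100)     # touches a 2x2 block of 64px tiles
assert len(surface.tiles) == 4
```

A 1024x1024 virtual surface thus costs nothing until the first update, and a 100x100 update allocates only the four tiles it overlaps.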
Further, the composition system 114 may expose functionality via the API 116 to enable the application 110 to resize and to trim the virtual surface 118. For example, the resize operation may be used to change the boundary of the virtual surface 118, meaning that new updates and/or allocations fall within the newly sized boundary. The application 110 may also use this method to inform the composition system 114 that a region of the virtual surface 118 is no longer in use (e.g., no longer valid) and thus is recoverable. If the resize shrinks a region, the application 110, through the management of the composition system 114, is no longer able to update the region outside the new boundary.
FIG. 2 depicts an exemplary implementation 200 of resizing a virtual surface. The first and second stages 202, 204, respectively, are used in the illustrated example to show the adjustment of a 3x3 virtual surface to 2x 2. In the second stage 204, the cross-hatched areas represent tiles that are discarded as part of the resizing operation. As previously described, the composition system 114 may then reclaim the memory 106 used to store the tiles. After resizing, the application 110 will no longer be able to update the discarded regions (i.e., the cross-hatched regions) if the virtual surface is not resized again first.
Further, in one or more implementations, composition system 114 may initiate the resize operation in response to receiving an indication of the operation. For example, upon receiving the indication, composition system 114 may implement the resize without waiting for the application to call "commit". As an example, an application may call "Resize(0, 0)", "Resize(INT_MAX, INT_MAX)", and then "Commit()". In this example, the application 110 has caused the content to be discarded by the first resize, so even though that resize occurred before "Commit()", the second resize does not restore the content. In this case, the display device 112 displays no content, since there is no content available for display.
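The resize semantics above, and the 3x3-to-2x2 example of FIG. 2, can be sketched as follows. The class and tile bookkeeping are illustrative assumptions; the point is that shrinking the boundary immediately makes the discarded tiles reclaimable and later updates outside the boundary invalid.

```python
# Sketch of the resize behavior of FIG. 2: shrinking a 3x3-tile
# virtual surface to 2x2 discards the tiles outside the new boundary
# so their memory can be reclaimed, and subsequent updates outside
# the new boundary are rejected. Names are illustrative.
TILE = 64

class VirtualSurface:
    def __init__(self, tiles_w, tiles_h):
        self.bounds = (tiles_w, tiles_h)
        self.tiles = {(x, y): bytearray(TILE * TILE)
                      for x in range(tiles_w) for y in range(tiles_h)}

    def resize(self, tiles_w, tiles_h):
        self.bounds = (tiles_w, tiles_h)
        # Reclaim tiles that now fall outside the boundary.
        self.tiles = {k: v for k, v in self.tiles.items()
                      if k[0] < tiles_w and k[1] < tiles_h}

    def can_update(self, tx, ty):
        return tx < self.bounds[0] and ty < self.bounds[1]

s = VirtualSurface(3, 3)
assert len(s.tiles) == 9
s.resize(2, 2)                 # the cross-hatched tiles are discarded
assert len(s.tiles) == 4
assert not s.can_update(2, 0)  # outside the new boundary
```

As in the text, the discarded region stays unusable unless the surface is resized again to include it.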
The trimming operation may be used by the application 110 to describe to the composition system 114 the region of the virtual atlas that is still required. Thus, the trimming operation may be performed without resizing the boundary of the virtual surface 118. It does, however, inform the composition system 114 which logical surfaces are currently to be allocated, examples of which are described in connection with the following figures.
FIG. 3 depicts an example implementation 300 showing interaction between an application and a logical surface of a virtual surface. This example is also illustrated with first and second stages 302, 304. In this example, the viewport (viewport) 306 of the application is displayed in both the first and second stages 302, 304. Accordingly, in the first stage 302, the first six tiles of the virtual surface (containing 15 tiles) within the viewport 306 are initially rendered by the application, wherein the tiles are shown with cross-hatching.
Upon scrolling the page represented by the virtual surface, the application may now cause the last six tiles to be rendered, as shown in the second stage 304. Accordingly, the application 110 may invoke "trim" to indicate that the region defined by the last six tiles is currently being used, and thus that the remaining content is not. The composition system 114 may then choose to recycle the logical surfaces that initially represented the first six tiles.
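The viewport example above can be sketched as a simple filter over allocated tiles. The `trim` helper and the tile-grid layout are illustrative assumptions; the real composition system decides internally which logical surfaces to recycle.

```python
# Sketch of the trim call in the viewport example above: the
# application names the region still in use (without resizing the
# surface), and the composition system may reclaim logical surfaces
# outside it. Tile coordinates and names are illustrative.
def trim(tiles, keep_region):
    """Drop allocated tiles outside keep_region = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = keep_region
    return {k: v for k, v in tiles.items()
            if x0 <= k[0] <= x1 and y0 <= k[1] <= y1}

# A 5x3 virtual surface of 15 tiles, all allocated after rendering.
tiles = {(x, y): object() for x in range(5) for y in range(3)}

# After the scroll, only the last six tiles (a 2x3 block at the right
# edge) are in the viewport; trim lets the rest be recycled.
tiles = trim(tiles, keep_region=(3, 0, 4, 2))
assert len(tiles) == 6
```

Unlike resize, the surface boundary is unchanged: the trimmed tiles may be re-allocated later if the application draws into them again.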
The composition system 114 may also create and delete logical (i.e., physical) and virtual surfaces, as well as update individual surfaces, via the exposed API 116 of FIG. 1. The composition system 114 can constrain the application 110 to the region being updated, to avoid extraneous visuals caused by rendering outside of the updatable region.
Initialization and batch processing
FIG. 4 depicts an example implementation 400 showing the composition system 114 of FIG. 1 in greater detail. In today's computing world, users tend to find themselves viewing and browsing through large amounts of rich content that, at any one time, is not fully displayed by the display device as a whole. Examples of such content include complex and dynamic web pages; modern applications with large lists of live items, groups of photos, music, or other live content; or views of large documents.
User interfaces, such as those based on touch and camera-based operations, allow a user to scroll, pan, and quickly zoom numerous user interface displays on tablets, phones, large televisions/projectors, and so on. In most cases, pre-rendering the entire content and keeping it up-to-date as it lives and changes can be prohibitively expensive and may not even be supported by the device hardware. Instead, the portion of the content entering the viewport can be intelligently rendered and cached, such as by speculatively rendering content ahead of time before user operations bring it into the viewport, and dropping it from the cache when it leaves the viewport, thereby reducing the resources used as described above.
The composition system 114 may perform the composition and rendering separately in order to provide the desired response to the user. This is illustrated by the composition system 114 including a composition engine 402, a controller 404, and a renderer 406. In one or more embodiments, these components of the composition system 114 may be run asynchronously. In this way, the controller 404 and composition engine 402, which are responsive to user input, may pan/zoom the pre-rendered content while the renderer 406 continues to render.
As previously described, the composition system 114 may use one or more virtual surfaces 118. The use of the virtual surface 118 allows for the caching and compositing of content that has already been rendered. The process of the renderer 406 updating and trimming the areas on the virtual surface 118 may be performed based on speculative rendering strategies, while the controller 404 and composition engine 402 are used to transform the virtual surface 118. The transformation may be performed based on user input to generate a user interface update based on a region of the virtual surface 118 having rendered content and that is in the viewport. The composition engine 402 may be configured to compose multiple virtual surfaces 118 and/or visuals at once.
In one or more implementations, the composition system 114 may be configured to use the logical surface 120 as fixed or mixed-size tiles to be used as a foreground buffer for composition. When the renderer 406 wishes to update a portion of the virtual surface 118, the renderer 406 may render either a separate updated surface or a tile surface directly. If a separate update surface is used, the content is copied from the update surface to the foreground buffer tile at the end of rendering. The tile may then be released when the renderer 406 trims off the valid content from the tile.
However, this implementation may result in structural tearing, because changed content is composited on screen together with stale content. Furthermore, seams between the tiles or chunks of a region updated on a virtual surface may result from gutter and sampling issues (e.g., bilinear sampling) or T-junctions, and avoiding these seams may cost excessive CPU and GPU processing for gutters, multiple overlapping updates, and complex active regions. Further, excessive memory usage may be encountered as dynamic content changes or content is manipulated by the user. With fixed/mixed-size per-tile surfaces, larger tiles may waste memory in their unused portions, smaller tiles may waste CPU/GPU in rendering/processing updates and composing the tiles at composition time, and a separate update buffer incurs CPU/GPU copy costs from the update buffer to the foreground buffer. Thus, a tradeoff may be performed among a variety of considerations in implementing the composition system 114.
These considerations may include the following set of principles for user quality of experience and performance when manipulating rich and/or dynamic content that does not fit into the viewport. The first such principle is called visual response. This means that the virtual surface 118 may be configured to feel like a real surface under the user's "fingertips" when manipulated by the user. This may be supported by configuring the composition system 114 to respond to and track operations without appreciable lag. Separation of the renderer 406 from the controller 404 and the composition engine 402 may be used to support this principle in a robust manner.
A second such principle involves visual coherence. In this example, the content on the display device 112 does not show artifacts (artifacts) that interfere with the user's immersion or confidence while manipulating the surface and updating the dynamic content (e.g., animation) inside it. For example, content may be displayed without seams, visual tears or corruption, certain portions of the user interface do not lag behind other portions to which they are attached, and so forth.
A third principle involves visual completeness. If the user interface is visually complete, the user rarely sees a filler/placeholder pattern (e.g., a checkerboard) covering portions of the display device 112, and when one is shown, it is limited to a relatively short duration. Furthermore, surface content updates occur without significant lag, although this may not be guaranteed, e.g., for unlimited rich content across zoom levels on low-power devices. For example, the more optimally and efficiently the renderer 406 updates the virtual surface 118 and the composition engine 402 composes the surface, the more bandwidth the renderer 406 has to render speculatively further in advance, achieving additional visual completeness.
The fourth principle involves live surfaces. Under this principle, animations, videos, and other dynamic content play and run continuously during operation, without stuttering. The renderer 406 may realize this if it achieves visual completeness and has bandwidth remaining to implement a live surface. This may be supported by efficiently updating and composing the virtual surface 118.
The composition system 114 may be configured to balance these principles. In this way, a comprehensive solution can be achieved that supports not only visual correctness and coherence but also responsiveness in managing and composing virtual surface updates, whereby the renderer 406 has sufficient bandwidth to ensure visual completeness as well as live surfaces.
FIG. 5 illustrates an example implementation 500 of operation of the composition system 114 to initialize the virtual surface 118. This implementation is illustrated through the use of first and second stages 502, 504. In the first stage 502, the application 110 requests a surface size at which to render a user interface, which may be associated with one or more visuals. As previously described, the virtual surface 118, when first initialized (e.g., created), is not backed by an actual allocation and therefore does not "own bits" at initialization.
The application 110 may then specify visual material to render to the virtual surface 118. Accordingly, the composition engine 402 may compose these visuals for rendering by the renderer 406 to the virtual surface 118, such as the illustrated car. This process may be performed by using tiles or "chunks" where the size of the allocation is specified by the application.
In the second stage 504, the renderer 406 may receive an instruction to update an area of the virtual surface 118, such as a rectangular area on the surface. The interface between renderer 406 and composition engine 402 enables renderer 406 to implement multiple updates 506 on numerous virtual surfaces 118 (e.g., the updates may include trim instructions, changing visuals, creating or removing visuals, etc.), as well as transform updates on the visuals that may use these surfaces as content. Examples of updates 506 include a visual configured as a cursor and a visual configured as a user-selectable button.
In one embodiment, a "commit" operation may be invoked, such that renderer 406 may render multiple updates 506, for example, as a batch. In this way, the composition system 114 may avoid rendering incomplete updates. This allows the renderer 406 to have consistent and consistent visual data displayed by the display device 112 according to the visual consistency principles.
In addition, the controller 404 that processes the user input may directly update the transformation (e.g., for panning or zooming) on the visual material on the composition engine 402 based on the user operation without going through the renderer 406. By way of example, this aspect may provide a visual response to handle animations or other state changes of dynamic content and/or rasterize complex content on thin devices with limited processing resources, even if the renderer 406 is occupied for a relatively long period of time.
Embodiments of the virtual surface 118 may include providing the renderer 406 with a surface and an offset into which the renderer 406 can render. The composition engine 402 may then "flip" the surface as the composition engine 402 picks up and processes the entire batch of updates that have been committed by the renderer 406. If the renderer 406 uses a separate update surface to render the updates, this flipping process can eliminate the copy operation that would otherwise be performed.
In addition, the flipping process also allows the composition engine 402 to ensure that each update 506 that the renderer 406 generates in a single batch (e.g., via a commit operation) arrives at the display device 112 as a whole. Thus, the composition system 114 may avoid local update processing.
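The batch-and-commit behavior above can be sketched with two lists standing in for the renderer's pending work and the content visible to the composition engine. The class and method names are illustrative assumptions; the point is the atomicity: nothing from a batch is composed until the whole batch is committed.

```python
# Sketch of the batching described above: updates accumulate in a
# pending list and become visible to the composition engine only as a
# whole when the renderer commits, so a partially drawn batch never
# reaches the display. Names are illustrative.
class Batcher:
    def __init__(self):
        self.pending = []   # updates rendered but not yet committed
        self.visible = []   # updates the composition engine may compose

    def update(self, item):
        self.pending.append(item)

    def commit(self):
        # "Flip": the whole batch becomes visible atomically.
        self.visible.extend(self.pending)
        self.pending.clear()

b = Batcher()
b.update("cursor")
b.update("button")
assert b.visible == []              # nothing shown mid-batch
b.commit()
assert b.visible == ["cursor", "button"]
```

This mirrors the example of updates 506 (the cursor and the button) arriving at the display device as a whole rather than one at a time.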
Update and lookaside lists
FIG. 6 depicts an example implementation 600 showing a surface prepared for updating by the composition system 114. The composition system 114 may use a number of different techniques to prepare the surface for updating. In the first case, composition system 114 may receive a request from an application to allocate a region to perform an update, where the region is shown as first rectangle 602 in the illustrated example.
In response to the request, composition system 114 may allocate a larger area than the requested area, which is shown as second rectangle 604 that includes requested first rectangle 602. Thus, if a slightly different size update is subsequently received, the process will allow reuse of the previously allocated surface.
For example, the composition system 114 may maintain a lookaside list 606 of surfaces 608 previously allocated by the composition system 114. This may be used by the composition system 114 to "bin" the memory 106 so as to reuse the surfaces 608 and "chunks" of the surfaces 608.
By way of example, surfaces 608 that are no longer in use may be retained in the memory 106 of the computing device 102. Thus, once the composition system 114 receives a request for a surface to update, it may first check the lookaside list 606 to determine whether a previously allocated surface 608 corresponding to the request is available in the memory 106. If so, the composition system 114 may use that surface, increasing the overall efficiency of the system by not allocating a new surface. Further, by allocating surfaces larger than requested (e.g., with more pixels) as previously described, the likelihood that these surfaces 608 will match subsequent updates may be increased.
For example, if updates of slightly different sizes are received over a period of time, e.g., the next update covers an area a few pixels wider or taller, this process allows greater reuse of the previously allocated surfaces 608. Thus, instead of allocating a new surface, the composition system 114 may locate a suitable surface using the lookaside list 606 of previously provided surfaces. It should be noted that trimming and other partial updates to a surface are also possible.
This can be tracked per region based on committed batches. If an update fits into an available portion of an existing surface 608 that also holds other valid content, the surface can be used again. This may also reduce cost on the composition side by avoiding rendering from multiple different surfaces, as each such switch incurs setup costs. The size of the lookaside list 606 (e.g., the number of surfaces 608 maintained in the list and in the memory of the computing device 102) may be set based on historical peak usage or a variety of other factors.
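The over-allocation and reuse described above can be sketched as size binning over a free list. The 32-pixel bin size, the `acquire`/`release` helpers, and the ceiling-division rounding are illustrative assumptions; the patent does not specify the binning policy.

```python
# Sketch of lookaside-list reuse: allocation requests are rounded up
# to a size bin, and freed surfaces are kept per bin so that a
# slightly different-sized later update can reuse one instead of
# forcing a fresh allocation. Names and bin size are illustrative.
BIN = 32  # round requested dimensions up to multiples of 32 pixels

lookaside = {}   # (bin_w, bin_h) -> list of free surfaces
allocations = 0  # count of fresh allocations, to show reuse

def acquire(w, h):
    global allocations
    key = (-(-w // BIN) * BIN, -(-h // BIN) * BIN)  # ceil to bin
    free = lookaside.get(key, [])
    if free:
        return free.pop()          # reuse a previously allocated surface
    allocations += 1
    return {"size": key}           # allocate a new, slightly larger surface

def release(surface):
    lookaside.setdefault(surface["size"], []).append(surface)

s = acquire(100, 50)    # allocates a 128x64 surface (larger than asked)
release(s)
t = acquire(110, 60)    # a few pixels larger: same bin, so reused
assert allocations == 1
assert t is s
```

Because the first surface was allocated larger than requested, the second, slightly larger request lands in the same bin and is served from the list, as in the text's "few more pixels wide or high" example.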
FIG. 7 depicts an example implementation 700 of the operation of the composition system 114 using the lookaside list 606 of FIG. 6. This implementation is shown using first, second, and third stages 702, 704, 706. In the first stage 702, a surface 708 is allocated for rendering by the renderer 406. Control of the surface 708 may then be given to the renderer 406 to perform the rendering.
During this rendering, another surface 710 may be allocated to perform an update in the second stage 704. In this example, the other surface 710 is contained within the same display area as the surface 708 rendered by the renderer 406. Thus, surface 710 may be allocated and populated (e.g., rendered) while surface 708 is being rendered. Surface 710 may then be passed to the renderer 406 for rendering, e.g., in response to a submit command as previously described.
In the third stage 706, another update to the user interface may be received. In this example, the composition system 114 determines, using the lookaside list 606 of FIG. 6, that the update corresponds to a previously allocated surface, such as surface 708 from the first stage 702. Accordingly, the composition system 114 may use the already-allocated surface 708 to contain the update 712. In this way, the surface 708 may be reused without allocating a new surface, thereby conserving resources of the computing device 102. A number of other examples are also contemplated.
Gutters
FIG. 8 depicts an example implementation 800 showing the operation of the composition system 114 using gutters. One problem in maintaining visual correctness relates to missing gutters. For example, the virtual surface may be positioned or scaled at a sub-pixel offset due to scrolling and so forth. Accordingly, the value of a pixel to be displayed by the display device 112 is determined from neighboring pixels, e.g., using bilinear sampling.
However, neighboring pixels located at an edge 804 of the update 802 may have values based on erroneous information. For example, if neighboring pixels that lie outside of update 802 contain "garbage" (e.g., from other updates), the rasterizer may sample from these pixels and thereby produce pixels with bad values that appear as seams when displayed by the display device 112.
One method for dealing with this problem is to copy the rows or columns of pixels that lie along an edge of another tile clump/surface 805 into the neighboring pixels of the newly allocated surface for the update 802. However, the cost of these additional copies has proven prohibitively high for the processing resources of the computing device, such as both the CPU and GPU resources of the computing device 102.
Accordingly, in one or more embodiments, the edges of the update 802 are aligned with the surface edges. A clamping operation is then used when sampling "neighboring" pixels that fall outside the surface, which causes the rasterizer to use the values of the pixels at the edge of the surface. This produces a reasonable trade-off between cost and visual correctness: even though the result may not be entirely correct visually, it can appear sufficiently correct to the user. In one or more embodiments, the gutter itself is not updated.
In some cases, an update edge may not align with a surface edge, e.g., because the allocated surface is larger than the update. In this case, the rows/columns of pixels at the update edge on the same surface may be copied to the neighboring pixels in order to achieve an effect similar to the clamping behavior.
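The clamping behavior described above can be illustrated with a small sketch; the function names and the use of plain Python lists of pixel values as surfaces are assumptions for illustration only. When a neighbor of a sub-pixel sample position falls outside the surface, the coordinate is clamped so the rasterizer reuses the edge pixel instead of reading garbage.

```python
import math

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def sample(surface, x, y):
    """surface is a 2-D list of pixel values; x, y may fall outside it."""
    h, w = len(surface), len(surface[0])
    # Clamp out-of-bounds coordinates to the nearest edge pixel.
    return surface[clamp(y, 0, h - 1)][clamp(x, 0, w - 1)]

def bilinear(surface, fx, fy):
    """Bilinearly sample the surface at a sub-pixel position (fx, fy)."""
    x0, y0 = math.floor(fx), math.floor(fy)
    tx, ty = fx - x0, fy - y0
    # The four neighbors, clamped at the surface edges.
    p00 = sample(surface, x0, y0)
    p10 = sample(surface, x0 + 1, y0)
    p01 = sample(surface, x0, y0 + 1)
    p11 = sample(surface, x0 + 1, y0 + 1)
    top = p00 * (1 - tx) + p10 * tx
    bot = p01 * (1 - tx) + p11 * tx
    return top * (1 - ty) + bot * ty
```

Sampling at the right edge of a 2x2 surface simply repeats the edge pixel value rather than producing a seam from stale memory.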
Also, during trims and updates, in one or more embodiments the gutter is not updated with potentially new pixels that may be rendered, because the gutter contains pixels that were previously valid and are displayed along with the currently valid pixels. This supports a trade-off between accuracy and performance that typically results in minimal visual artifacts noticeable to the user.
Blend and BLT
FIG. 9 depicts an example implementation 900 showing management of active areas by the composition system 114. As previously described, the virtual surface 118 may include portions that are valid and invalid for an update. For the illustrated example of the virtual surface 118, the update may include the cursor within the virtual surface 118 but not the car, for example. The cursor may thus be used to define an active area of the virtual surface 118, in contrast to the other areas of the virtual surface 118. By tracking these regions of the virtual surface 118 as well as other surfaces, the composition system 114 can utilize a variety of optimizations.
For example, a technique is described herein for dividing the area to be rendered from a surface into two parts: a part that is blended and a part that is BLT'd. This technique can be used to address situations where updates are small and the active area created on a virtual surface is relatively complex, e.g., resulting in a complex mesh with numerous small source surfaces.
A surface is "blended" if it is "pre-multiplied" or transparent (as opposed to "opaque" or set to ignore the alpha value). If the renderer has not provided content, a larger rectangular shape can be blended in this manner using cleared and/or fully transparent pixels. In some cases, this process is more efficient than using a complex mesh that outlines each path/edge of a complex shape along with rasterization.
This method can also be used for gutters of opaque surfaces when the active area is very complex. For example, the inner portion may be BLT'd, but the pixels around the edge are blended, with the adjacent pixels cleared. Thus, accurate values are available when the rasterizer samples from these pixels. In one or more embodiments, this technique is used for the edges of the virtual surface 118, rather than the interior edges between the tile clumps and surfaces that make up the virtual surface.
Bits may be copied and locally cleared to ensure that an allocated clump surface is aligned with the tile size and that content from the previous surface that owned a tile is moved to the new surface. In one or more embodiments, this process is not performed for the portions the renderer 406 is to update, such as the update rectangle at the middle position shown in FIG. 7. If the surface is opaque, pixels at the edges may be made opaque by "blending" after the update, i.e., setting full opacity in the alpha channel of those pixels.
Each of these tasks of copying, clearing, and opacification may be performed using "regions" of non-overlapping rectangular stripes. Regions may be intersected, unioned, or subtracted, and the non-overlapping rectangular stripes that compose a region may be enumerated. This allows different rectangles and regions to be efficiently merged into a single region, and an optimal set of resulting rectangles to be extracted. For example, Win32 HRGN is a GDI construct that provides these facilities. Rather than determining what is to be done with each tile individually, these operations are used to identify merged, optimized groups of rectangles on which operations such as clearing or copying are to be performed. This process can achieve significant efficiency on both the CPU and the GPU performing these tasks, and also allows the tile/alignment size to be reduced to relatively small values, such as 32x32 or 16x16, thereby reducing the waste previously described.
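As a rough illustration of region arithmetic over non-overlapping rectangles (not the Win32 HRGN API itself), a subtraction that keeps a region in non-overlapping band form might look like the following. The rectangle layout (x0, y0, x1, y1) with exclusive x1/y1 is an assumption of this sketch.

```python
def intersect(a, b):
    """Intersection of two rects, or None if they do not overlap."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    return (x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

def subtract(a, b):
    """Return rect a as up to four non-overlapping rects with b removed."""
    if intersect(a, b) is None:
        return [a]
    out = []
    if b[1] > a[1]:   # band above b
        out.append((a[0], a[1], a[2], b[1]))
    if b[3] < a[3]:   # band below b
        out.append((a[0], b[3], a[2], a[3]))
    y0, y1 = max(a[1], b[1]), min(a[3], b[3])
    if b[0] > a[0]:   # band to the left of b
        out.append((a[0], y0, b[0], y1))
    if b[2] < a[2]:   # band to the right of b
        out.append((b[2], y0, a[2], y1))
    return out

def region_subtract(region, rect):
    """A region is a list of non-overlapping rects; subtracting a rect
    keeps the region in that normal form."""
    out = []
    for r in region:
        out.extend(subtract(r, rect))
    return out
```

Subtracting a centered rect from a larger one yields four non-overlapping stripes whose areas sum to the remaining area, which is what allows the merged copy/clear operations described above to be enumerated efficiently.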
A trim request from the renderer 406 may be handled differently based on the complexity of the active area. In the typical case, the active areas of the tile clumps/surfaces may be updated in accordance with the trim request. However, if the active area is very complex and the blend/BLT technique is being used, additional operations may be performed. For example, since some portions of the active area are now located at the edges of the area, those portions may be blended to make them opaque. Another way to deal with this is to create new clumps for the tiles from which the active portion was removed. However, a tile may continue to retain some valid portions. For these tiles, the remaining valid portions may be copied from the existing surface and made opaque, and the trimmed-away portions may be cleared. These new clumps may be committed when the renderer 406 commits the entire batch of updates, e.g., due to a commit operation. This operation may be optimized with rectangular stripe regions, but other examples are also contemplated.
When the renderer 406 submits a set of updates, the resulting changes (e.g., the generated tile clumps/surfaces and their collections of valid regions) may be passed to the composition engine 402. These updates may be passed with corresponding tokens that may be used by the composition engine 402 to ensure that any outstanding CPU/GPU rasterization work on these surfaces has completed. At this point, additional techniques may be used to further increase efficiency, examples of which are described in the following sections.
Push down
FIG. 10 depicts an example implementation 1000 showing the operation of the composition system 114 using a push-down technique to combine surfaces. In this example, the composition system 114 has performed a surface allocation 1002 to display a visual, shown in the figure as a box with diagonal markings. Another surface allocation 1004 is then made to perform an update, shown as a white box adjoining the diagonally marked box.
Because the composition system 114 tracks the active areas of surfaces, allocations may be combined to improve resource utilization. For example, rendering from multiple surfaces may be more resource intensive than rendering from a single surface.
In the illustrated example, the active portion of surface allocation 1004 is "pushed down" into surface allocation 1002. This is shown with a dashed box indicating that the active area from surface allocation 1004 is now contained in surface allocation 1002. After the push down, the surface allocation 1004 containing the update may be released, thereby freeing up a portion of the memory 106 of the computing device 102. Thus, the technique may be used to combine surfaces without creating a new surface allocation, by reusing the allocation of one of the surfaces.
For example, in some cases the composition system 114 may receive large updates that overlap updates in the current or previous batches. This may result in the allocation of multiple surfaces containing relatively small active areas. Thus, the composition system 114 may allocate large surfaces, yet a relatively small active area may prevent the release of such a surface.
However, by "pushing down" the active area from a first surface (e.g., a newer, smaller surface) to a second surface (e.g., an older, larger surface), the active area can be removed from the first surface. Doing so allows the first surface to be freed, thereby releasing memory without involving additional surface allocations, as well as reducing the number of surface allocations managed by the composition system 114. In this manner, the renderer 406 may be tasked with rendering fewer surfaces, thereby increasing the efficiency of the composition system 114. Other techniques that do make new surface allocations are also contemplated, examples of which are described in the following sections.
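A hypothetical sketch of the push-down step follows. The Surface class, its fields, and the condition for pushing down are illustrative assumptions, and copying pixel bits is stood in for by moving rectangle records between surfaces.

```python
class Surface:
    def __init__(self, rect):
        self.rect = rect    # (x0, y0, x1, y1) in virtual-surface coordinates
        self.valid = []     # list of valid (active) rects on this surface

def contains(outer, inner):
    """True if rect `outer` fully contains rect `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def push_down(newer, older, surfaces):
    """Move newer's valid rects into older when they fit inside older's
    allocation, then release the newer surface."""
    if all(contains(older.rect, r) for r in newer.valid):
        older.valid.extend(newer.valid)   # stands in for copying the pixels
        newer.valid = []
        surfaces.remove(newer)            # newer's allocation is released
        return True
    return False
```

After a successful push down, the composition system is left with one fewer surface to manage and render from, without any new allocation.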
Enumeration and clumping
FIG. 11 depicts an example implementation 1100 showing the operation of the composition system 114 merging active areas into a new surface. As previously described, the composition system 114 may be configured to track the active areas of surface allocations, examples of which are shown at 1102(1), 1102(2), and 1102(n) with corresponding active areas. The size of an active area relative to the surface containing it may decrease over time, e.g., due to updates from other surfaces and so forth. Accordingly, the composition system 114 may be configured to combine the active areas from the surface allocations 1102(1)-1102(n) into one or more new surface allocations 1104.
For example, the composition system 114 may be configured to optimize surface allocation and composition by reducing the number of surfaces that are set up and rendered as sources when composing the display on the display device 112. This process may be performed by enumerating an optimal set of rectangles over the active area of the entire virtual surface. A clump may then be created for each such rectangle. If this process would result in a large number of smaller rectangles, the blend/BLT technique described above may be used instead. In this way, larger rectangles can be obtained that are composited correctly by the composition engine 402 and whose excess pixel areas are cleared.
For example, when the composition engine 402 receives a batch of updates, the engine may first determine the "dirty" portions of the virtual surfaces and of the visuals in the display tree to be updated. This may include explicitly computing dirty regions during the update and trim processes and passing them to the compositor; for example, even though the underlying surface or clump may change (e.g., through a push down or re-clumping), an active region with the same content need not generate a new dirty region. The rectangles describing the active area may be passed explicitly in each update/trim operation. In one or more embodiments, dirty regions may be reduced to a smaller number of larger rectangles, thereby avoiding significant overhead in setting up and running multiple smaller rendering operations. One technique for performing this process is to allow a maximum number of dirty rectangles. When a new dirty rectangle is encountered, it may be added to the list or merged (e.g., unioned) with the rectangle that produces the smallest overall increase in area.
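The dirty-rectangle coalescing just described, i.e., a maximum rectangle count with the pair whose union grows total area the least being merged when the budget is exceeded, can be sketched as follows; the rectangle layout and the default budget of four are assumptions of this sketch.

```python
def area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def union(a, b):
    """Bounding box of two rects."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def add_dirty(rects, new, max_rects=4):
    """Add a dirty rect, merging pairs while over the budget."""
    rects = rects + [new]
    while len(rects) > max_rects:
        # Find the pair whose merged bounding box grows total area least.
        best = None
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                u = union(rects[i], rects[j])
                growth = area(u) - area(rects[i]) - area(rects[j])
                if best is None or growth < best[0]:
                    best = (growth, i, j, u)
        _, i, j, u = best
        rects = [r for k, r in enumerate(rects) if k not in (i, j)] + [u]
    return rects
```

In the example below, adding a fifth rect near two vertically stacked unit rects merges the closest pair rather than inflating a distant one, keeping the total dirty area small.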
Meshes
FIG. 12 depicts an example implementation 1200 showing the operation of the composition system 114 using a mesh. A mesh (e.g., a list of points) may cover multiple visuals for which a single draw call to a GPU driver may then be performed. In this way, the number of draw calls to the driver may be reduced, thereby avoiding the overhead involved with each call.
The composition engine 402 has multiple options for composing the clumps/surfaces of the virtual surface 118. For example, since the composition engine 402 knows the valid regions of each clump, the composition engine 402 may start by skipping those clumps that do not overlap the dirty regions to be updated. If the visuals contained in the virtual surface 118 are pixel-aligned and the transform involves only translation, the gutter technique described above need not be used. This allows a simple BLT/blend for each rectangle in the clump.
Instead of performing these operations one at a time, the composition engine 402 may create a triangular mesh from the set of rectangles and cause the surface to be rendered using the mesh. For example, the composition system 114 may examine a set of rectangles 1202 having valid regions. A triangular mesh 1204 may then be created for this set of rectangles by dividing each rectangle into two triangles. However, these rectangles may create T-junctions, which may cause the triangular mesh 1204 to be rasterized with seams, e.g., due to floating point or rounding errors. Accordingly, the composition system 114 may instead process the set of non-overlapping rectangles to form a triangular mesh 1206 that does not contain T-junctions.
If the rectangles of a clump do not change, the resulting mesh can be cached across composition frames and used again. If there is a non-pixel-aligned transform that contains only translation, the composition engine 402 can still generate a mesh for each clump by itself and render each clump. However, if there is a more complex transform, the composition engine 402 may process the set of rectangles to avoid T-junctions, thereby ensuring seamless, correct rasterization.
To this end, each clump may register its respective set of rectangles with a mesh generator object managed by the composition system 114. As each coordinate is examined, the mesh generator of the composition system 114 may add one or more additional vertices on the registered edges: each registered edge has any existing vertex that falls within its range added to it. The result is a set of rectangles for each clump with additional vertices, which can then be decomposed into a set of non-overlapping triangles using these vertices. Thus, as shown by triangular mesh 1206, in the case of non-simple transforms, clumps can be rendered with the resulting meshes without T-junctions.
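One simplified way to realize a T-junction-free mesh is sketched below, under the assumption that all rectangles are axis-aligned and non-overlapping: subdivide every rectangle at every x/y coordinate used by any rectangle in the set, so that shared edges always meet at shared vertices, then split each resulting cell into two triangles. This is coarser than the per-edge vertex registration described above, but it yields the same freedom from T-junctions.

```python
def mesh_without_t_junctions(rects):
    """rects: list of (x0, y0, x1, y1); returns a list of triangles,
    each a tuple of three (x, y) vertices, with no T-junctions."""
    # Collect every x and y coordinate used by any rectangle.
    xs = sorted({v for r in rects for v in (r[0], r[2])})
    ys = sorted({v for r in rects for v in (r[1], r[3])})
    tris = []
    for (x0, y0, x1, y1) in rects:
        # Cut this rectangle at every global coordinate falling within it,
        # so its boundary vertices coincide with its neighbors' vertices.
        cx = [x for x in xs if x0 <= x <= x1]
        cy = [y for y in ys if y0 <= y <= y1]
        for i in range(len(cx) - 1):
            for j in range(len(cy) - 1):
                a, b = (cx[i], cy[j]), (cx[i + 1], cy[j])
                c, d = (cx[i], cy[j + 1]), (cx[i + 1], cy[j + 1])
                tris.append((a, b, c))   # split each grid cell into
                tris.append((b, d, c))   # two triangles
    return tris
```

For two abutting rectangles of different heights, the taller one is cut at the shorter one's top edge, so the shared edge is rasterized identically on both sides and no seam appears.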
Occlusion
FIG. 13 depicts an example implementation 1300 showing the operation of the composition system 114 in relation to occlusion. Even though each clump may have instructions to blend some portions of its surface and to BLT other portions, for opaque virtual surfaces the composition system 114 knows the valid and opaque regions on each clump.
For occlusion, these regions may be accumulated across the entire virtual surface and made available to the composition engine 402 to perform occlusion detection. In one or more implementations, the composition engine 402 may enumerate the registered occlusion rectangles, identifying portions that are occluded by opaque visuals that are closer, in z order, to the user viewing the display device 112.
However, breaking rectangles into complex shapes during an occlusion pass, such as ensuring that the non-overlapping rectangular stripes making up a region completely cover every rectangle occluded by the region, can be expensive. Accordingly, the composition system 114 may instead use a rectangle containment and intersection technique.
An example of this technique is shown in the illustrated implementation 1300 of FIG. 13, in first and second stages 1302, 1304. In the first stage 1302, first and second rectangles 1306, 1308 are to be composited by the composition engine 402. However, the composition engine 402 may determine that a portion 1310 of the first rectangle 1306 is occluded by the second rectangle 1308.
Accordingly, if the occluding rectangle covers an entire edge, the composition engine 402 may be configured to shrink the rectangle being examined, so that the result remains a single, reduced rectangle. An example of this process is shown in the second stage 1304, where the first rectangle 1306 is reduced so as not to contain the portion 1310 that is occluded by the second rectangle 1308. Thus, an edge of the second rectangle 1308 may be used to define a new edge of the first rectangle 1306, thereby conserving resources of the computing device 102. A variety of other examples are likewise contemplated.
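The edge-based reduction can be sketched as follows; the rectangle layout (x0, y0, x1, y1) and function name are assumptions. The rectangle is shrunk only when the occluder spans one of its edges entirely, so the result always remains a single rectangle.

```python
def shrink_by_occluder(rect, occ):
    """Shrink rect by occ only if occ fully covers one of rect's edges;
    otherwise return rect unchanged so it stays a simple rectangle."""
    x0, y0, x1, y1 = rect
    ox0, oy0, ox1, oy1 = occ
    covers_x = ox0 <= x0 and ox1 >= x1   # occluder spans rect's full width
    covers_y = oy0 <= y0 and oy1 >= y1   # occluder spans rect's full height
    if covers_x and oy0 <= y0 < oy1 < y1:
        return (x0, oy1, x1, y1)         # top edge occluded
    if covers_x and y0 < oy0 < y1 <= oy1:
        return (x0, y0, x1, oy0)         # bottom edge occluded
    if covers_y and ox0 <= x0 < ox1 < x1:
        return (ox1, y0, x1, y1)         # left edge occluded
    if covers_y and x0 < ox0 < x1 <= ox1:
        return (x0, y0, ox0, y1)         # right edge occluded
    return rect                          # partial overlap: leave it whole
```

An occluder covering the bottom half of a rectangle trims the rectangle to its visible top half, while an occluder touching only the middle of an edge leaves the rectangle intact, avoiding a decomposition into complex shapes.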
Exemplary procedure
The following discussion describes techniques that may be implemented using the aforementioned systems and devices. Aspects of each of the procedures may be implemented in hardware, firmware, software, or a combination thereof. Each procedure is shown as a set of blocks that specify operations performed by one or more devices; however, the procedures are not necessarily limited to the order shown for performing the operations of the blocks. In portions of the following discussion, reference is made to the environment 100 of FIG. 1 and the systems and example implementations of FIGS. 2-13.
FIG. 14 depicts a procedure 1400 in an example implementation in which a size is allocated for a surface in which data is to be rendered. The composition system receives a request to allocate a surface in which one or more visuals are to be rendered, the request specifying a size of the surface (block 1402). As an example, the request may originate from an application that is about to begin rendering bits. In one or more embodiments, the surface may have been initialized but not yet allocated when the request is received, and thus the surface does not "own bits" at that point.
In response to receiving the request, the surface is allocated by the composition system to have a size that is larger than the requested size for rendering the one or more visuals (block 1404). As previously described, the composition system 114 may be configured to retain previously allocated surfaces to promote reuse of allocations that are no longer valid. By making the surface larger than the surface the application requested, the composition system 114 may increase the likelihood of later reusing the surface.
FIG. 15 depicts a procedure 1500 in an example implementation in which a composition system tracks an active area. A surface containing visual material that is available for display by a display device is managed by a composition system (block 1502). For example, the surface may be configured as a virtual surface as previously described.
The active area inside the surface to be displayed by the display device is tracked (block 1504). For example, the surface may initially be configured to update a portion of the display. However, other surfaces may further update that portion of the display over time. Accordingly, certain portions of the surface may remain valid for display while other portions become invalid. The composition system 114 may be configured to track this validity, which may be used to support a variety of different functions, such as occlusion management, surface sizing, surface compaction, and so forth, as further described elsewhere in this discussion.
FIG. 16 depicts a procedure 1600 in an example implementation in which a lookaside list is used to manage surfaces. The composition system receives a request to allocate a surface in which to render one or more visuals (block 1602). As before, the application 110 may make the request as a call through one or more APIs 116 of the composition system 114.
The composition system checks the lookaside list to determine if a surface exists in the memory of the computing device that is available for allocation, wherein the surface corresponds to the received request and does not contain valid visual material for display by a display device of the computing device (block 1604). For example, the lookaside list may reference surfaces that are allocated in memory but no longer have valid portions for reasons such as subsequently received updates.
In response to the check determining that such a surface is available, the determined surface is made available for rendering the one or more visuals (block 1606). For example, as previously described, the determined surface may have been allocated at a larger size than originally requested, and may therefore correspond to the subsequent request. A variety of other examples are also contemplated.
FIG. 17 depicts a procedure 1700 in an example implementation in which a surface is resized based on occlusion. It is determined that a portion of a surface is occluded by another surface to be displayed by the display device (block 1702). For example, the composition engine 402 may determine a z order for displaying the surfaces, and may determine that at least a portion of the other surface is to be rendered over the portion of the surface.
The portion is removed from the surface (block 1704). This may be performed in a number of ways, e.g., by using an edge of the other surface to define at least one new edge of the surface being reduced.
The surface with the portion removed is rendered along with the other surface (block 1706). In this way, rendering of the removed portion may be avoided, thereby conserving resources of the computing device 102.
FIG. 18 depicts a procedure 1800 in an example implementation describing a compaction technique that includes pushing down an active area from one surface to another. Active areas of a plurality of surfaces available to a composition system for rendering one or more visuals are tracked (block 1802). For example, the composition system 114 may determine which portions of the surfaces will and will not be displayed by the display device.
The composition system then determines that a first active area of a first surface can be included in the allocation of a second surface (block 1804). For example, the first surface may be configured as an update. A subsequent update may then be performed that invalidates the portions of the first surface other than the first active area.
The first active area is then pushed down to be included as part of the second surface (block 1806). This may include copying the bits of the active area to the second surface. After copying, the first surface may be released, conserving the resources used to maintain separate surfaces and improving the efficiency of rendering operations through the use of fewer surfaces. Thus, no new surface is allocated in this example, thereby conserving resources of the computing device 102 in performing and maintaining allocations. Other examples are also contemplated, examples of which are described below.
FIG. 19 depicts a procedure 1900 in an example implementation that describes a compaction technique that includes combining active areas into a new surface. Active areas of surfaces available to the composition system for rendering one or more visuals are tracked (block 1902). As before, the composition system 114 may determine which portions of the multiple surfaces will and will not be displayed by the display device.
An allocation is then calculated for a new surface that is able to contain the active areas from the multiple surfaces (block 1904). As an example, the new surface may be configured as a rectangle having a boundary capable of containing the multiple active areas. The new surface may then be allocated to contain the active areas from the multiple surfaces (block 1906), and these active areas may then be copied to the new surface, enabling the composition system 114 to free the originating surfaces. A variety of other examples of surface compaction by the composition system 114 are also contemplated.
FIG. 20 depicts a procedure 2000 in an example implementation in which the composition system 114 invokes a driver to render a surface using a mesh. The mesh is formed from a set of rectangles so as not to contain T-junctions (block 2002). As previously described, the mesh may be formed as a set of triangles constructed to avoid T-junctions, and thereby avoid the complications (e.g., seams) encountered in rendering such junctions. The driver is invoked to render the surface using the mesh (block 2004). This may be, for example, a single call to a driver of graphics functionality (e.g., a GPU) covering a plurality of rectangles having active areas to be updated in the user interface. Thus, the mesh helps avoid a separate call for each rectangle used to form the triangles of the mesh, as described in the corresponding section above.
Exemplary System and device
FIG. 21 shows an exemplary system generally at 2100, wherein the system includes an exemplary computing device 2102, the computing device 2102 representing one or more computing systems and/or devices that can implement various techniques described herein. By way of example, the computing device 2102 may be a server of a service provider, a device associated with a client (e.g., a client device), a system on a chip, and/or any other suitable computing device or computing system. The computing device 2102 is illustrated as including the composition system 114 of FIG. 1.
The illustrated example computing device 2102 includes a processing system 2104, one or more computer-readable media 2106, and one or more I/O interfaces 2108 communicatively coupled to each other. Although not shown, the computing device 2102 may also include a system bus or other data and command transfer system that couples the various components to one another. A system bus can include any one or combination of different bus structures, such as a memory bus or memory controller, a peripheral bus, a universal serial bus, and/or a processor or local bus that utilizes any of a variety of bus architectures. Various other examples are also contemplated, such as control and data lines.
The processing system 2104 represents functionality that uses hardware to perform one or more operations. Accordingly, the processing system 2104 is illustrated as including hardware components 2110 that may be configured as processors, functional blocks, and so forth. This may include hardware implementations as application specific integrated circuits or other logic devices formed from one or more semiconductors. Hardware component 2110 is not limited by the materials from which it is formed or the processing mechanisms employed therein. For example, a processor may include one or more semiconductors and/or one or more transistors (e.g., electronic Integrated Circuits (ICs)). In this context, processor-executable instructions may be electronically-executable instructions.
The computer-readable media 2106 are illustrated as including memory/storage 2112. The memory/storage 2112 represents memory/storage capacity associated with one or more computer-readable media. The memory/storage component 2112 may include volatile media (such as Random Access Memory (RAM)) and/or nonvolatile media (such as Read Only Memory (ROM), flash memory, optical disks, magnetic disks, and so forth). The memory/storage component 2112 may include fixed media (e.g., RAM, ROM, a fixed hard drive, etc.) as well as removable media (e.g., flash memory, a removable hard drive, an optical disk, and so forth). The computer-readable media 2106 may be configured in a variety of other ways as further described below.
One or more input/output interfaces 2108 represents functionality that enables a user to enter commands and information into computing device 2102 and also enables information to be presented to the user and/or other components or devices using different input/output devices. Examples of input devices include a keyboard, a cursor control device (e.g., a mouse), a microphone, a scanner, touch functionality (e.g., capacitive or other sensors configured to detect physical contact), a camera (e.g., a camera that can use visible wavelengths or non-visible wavelengths such as infrared frequencies to discern movement such as gestures that do not involve touch), and so forth. Examples of output devices include a display device (e.g., a monitor or projector), speakers, a printer, a network card, a haptic response device, and so forth. Thus, the computing device 2102 may be configured in a variety of ways, as further described below, to support user interaction.
The various techniques may be described herein in the general context of software, hardware components, or program modules. Generally, such modules include routines, programs, objects, elements, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The terms "module," "functionality," and "component" as used herein generally represent software, firmware, hardware, or a combination thereof. The features of the techniques described herein are platform-independent, meaning that the techniques may be implemented on a variety of commercial computing platforms having a variety of processors.
Implementations of the modules and techniques described may be stored on or transmitted across some form of computer readable media. Computer-readable media can include a variety of media that can be accessed by the computing device 2102. By way of example, and not limitation, computer-readable media may comprise "computer-readable storage media" and "computer-readable signal media".
"computer-readable storage medium" may refer to media and/or devices that enable permanent and/or non-transitory storage of information, as opposed to mere signal transmission, carrier waves, or the signal itself. Accordingly, computer-readable storage media refer to non-signal bearing media. The computer-readable storage media include hardware, such as volatile and nonvolatile, removable and non-removable media and/or storage devices, implemented in a method or technology suitable for storage of information such as computer-readable instructions, data structures, program modules, logic components/circuits, or other data. Examples of computer readable storage media may include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical storage, hard disks, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other storage devices, tangible media, or articles of manufacture suitable for storing the desired information and capable of being accessed by a computer.
"Computer-readable signal medium" may refer to a signal-bearing medium configured to transmit instructions to the hardware of the computing device 2102, e.g., via a network. "Signal media" typically embody computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, data signals, or other transport mechanism. Signal media also include any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired media such as a wired network or direct-wired connection, as well as wireless media such as acoustic, RF, infrared and other wireless media.
As previously described, hardware component 2110 and computer-readable medium 2106 represent modules, programmable device logic, and/or fixed device logic implemented in hardware that may be used in some embodiments to implement at least some aspects of the techniques described herein, such as to execute one or more instructions. The hardware may include components of integrated circuits or systems on a chip, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), Complex Programmable Logic Devices (CPLDs), and other implementations in silicon or on other hardware. In this context, hardware may operate as a processing device to perform program tasks defined by instructions and/or logic contained in the hardware, as well as hardware to store instructions for execution (e.g., the computer-readable storage media described previously).
Combinations of the foregoing may also be used to implement different techniques described herein. Accordingly, software, hardware, or executable modules may be implemented as one or more instructions and/or logic embodied on some form of computer-readable storage medium and/or by one or more hardware components 2110. The computing device 2102 may be configured to implement particular instructions and/or functions corresponding to software and/or hardware modules. Accordingly, embodiments of modules that may be executed by the computing device 2102 as software may be implemented at least in part in hardware, for example, using computer-readable storage media and/or hardware components 2110 of the processing system 2104. These instructions and/or functions may be executed/operated by one or more articles of manufacture (e.g., one or more computing devices 2102 and/or processing systems 2104) to implement the techniques, modules, and examples described herein.
As further shown in fig. 21, the exemplary system 2100 enables a ubiquitous environment for a seamless user experience when running applications on a Personal Computer (PC), television device, and/or mobile device. Services and applications run substantially similarly in all three environments, providing a common user experience when transitioning from one device to the next while using an application, playing a video game, watching a video, and so on.
In the example system 2100, multiple devices are connected to one another through a central computing device. The central computing device may be local to the plurality of devices or remote from the plurality of devices. In one embodiment, the central computing device may be a cloud of one or more server computers connected to the plurality of devices through a network, the internet, or other data communication link.
In one embodiment, this interconnection architecture enables functionality to be delivered across multiple devices to provide a common and seamless experience to users of the multiple devices. Each of the multiple devices may have different physical requirements and capabilities, and the central computing device uses a platform to enable delivery of an experience to a device that is both tailored to that device and common to all of the devices. In one embodiment, a class of target devices is created and experiences are tailored to the generic class of devices. A class of devices may be defined by physical features, types of usage, or other common characteristics of the devices.
In different embodiments, the computing device 2102 may assume a number of different configurations, for example, for use with a computer 2114, a mobile 2116, and a television 2118. Each of these configurations includes devices that may generally have different configurations and capabilities, whereby the computing device 2102 may be configured according to one or more different device classifications. For example, the computing device 2102 may be implemented as a computer 2114 in a device class that includes personal computers, desktop computers, multi-screen computers, laptop computers, netbooks, and so forth.
The computing device 2102 may also be implemented as a mobile 2116 in a device class that includes mobile devices, such as mobile phones, portable music players, portable gaming devices, tablet computers, multi-screen computers, and so forth. The computing device 2102 may also be implemented as a television 2118 in a device class that includes devices having, or generally connected to, larger screens in casual viewing environments. These devices include televisions, set-top boxes, game consoles, and the like.
These various configurations of the computing device 2102 may support the techniques described herein, which are not limited to the specific examples set forth in this document. The functionality may also be implemented in whole or in part using a distributed system, such as on the "cloud" 2120 via a platform 2122 as described below.
Cloud 2120 includes and/or is representative of platform 2122 of resources 2124. The platform 2122 abstracts underlying functionality of hardware (e.g., servers) and software resources of the cloud 2120. The resources 2124 can include applications and/or data that can be used when running computer processes on a server remote from the computing device 2102. The resources 2124 may also include services provided via the internet and/or through a subscriber network, such as a cellular or Wi-Fi network.
The platform 2122 may abstract resources and functionality to connect the computing device 2102 to other computing devices. The platform 2122 may also serve to abstract resource extensions to provide a corresponding level of extension for encountered requests directed to resources 2124 implemented via the platform 2122. Accordingly, in an interconnected device embodiment, implementations of functionality described herein may be distributed throughout the system 2100. For example, the functionality may be implemented in part on the computing device 2102 and may be implemented via the platform 2122 that abstracts the functionality of the cloud 2120.
Conclusion
Although the invention has been described in language specific to structural features and/or methodological acts, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts are disclosed as example forms of implementing the claimed invention.

Claims (11)

1. A method implemented by a computing device, the method comprising:
tracking active areas of a plurality of surfaces (1802) available to a composition system for rendering one or more visuals;
determining, by the composition system, that a first said active area of a first said surface may be contained within an allocation of a second said surface (1804); and
a first said active area is pushed down to be included as part of a second said surface (1806).
2. The method of claim 1, wherein the plurality of surfaces comprises virtual surfaces.
3. The method of claim 1, wherein the active area is active to be displayed by a display device of the computing device as part of a user interface and the inactive area is not to be displayed as part of the user interface.
4. The method of claim 1, wherein the push down is performed by copying a first of the active areas from a memory allocation associated with a first of the surfaces to a memory allocation associated with a second of the surfaces.
5. The method of claim 1, wherein a first said surface is assigned as an update after a second said surface is assigned.
6. The method of claim 1, further comprising removing an indication that a first of said active areas of a first of said surfaces is active.
7. The method of claim 6, further comprising releasing the first said surface.
8. The method of claim 7, wherein the releasing is performed in response to determining that the first said surface no longer includes active area after the push down.
9. A system implemented by a computing device, the system comprising one or more modules configured to perform operations comprising:
tracking (1902) active areas of a plurality of surfaces available to a composition system for rendering one or more visuals;
calculating an allocation for a new surface available to contain active areas from the plurality of surfaces (1904); and
the new surface is allocated so as to include active areas from the plurality of surfaces (1906).
10. One or more computer-readable storage media containing instructions stored thereon that, in response to execution by a computing device (102), cause the computing device to implement a composition system to perform operations comprising forming a mesh from a set of rectangles that does not contain a T-junction and invoking a driver to render a surface using the mesh.
11. One or more computer-readable storage media having computer-executable instructions embodied thereon that, when executed, cause a computing device to perform the method of any of claims 1-8.
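The push-down operation recited in claims 1-8 can be sketched in code. The following Python sketch is illustrative only; the class and function names (Surface, free_slot, push_down) and the first-fit placement strategy are assumptions introduced for exposition and are not recited in the patent.

```python
class Surface:
    """A fixed-size allocation holding zero or more active regions.

    Regions are (x, y, w, h) rectangles; 'pixels' stands in for the
    backing memory of each region.
    """
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.regions = {}  # (x, y, w, h) -> pixels

    def free_slot(self, w, h):
        """Naive first-fit scan for a w-by-h hole in this allocation."""
        occupied = list(self.regions)
        for y in range(self.height - h + 1):
            for x in range(self.width - w + 1):
                if not any(x < rx + rw and rx < x + w and
                           y < ry + rh and ry < y + h
                           for rx, ry, rw, rh in occupied):
                    return x, y
        return None

def push_down(src, dst):
    """Move src's active regions into dst if they fit; report whether
    src is left empty and can therefore be released."""
    for (x, y, w, h), pixels in list(src.regions.items()):
        slot = dst.free_slot(w, h)
        if slot is None:
            return False  # dst's allocation cannot contain this active area
        # copy from src's memory allocation to dst's (claim 4)
        dst.regions[(slot[0], slot[1], w, h)] = pixels
        # remove the indication that the area is active on src (claim 6)
        del src.regions[(x, y, w, h)]
    return not src.regions  # an empty src may be released (claims 7-8)
```

In this sketch, a True return corresponds to the condition of claim 8: after the push down, the first surface no longer includes an active area and can be freed.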
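Claim 9 computes an allocation for a single new surface that can contain the active areas gathered from a plurality of surfaces. One way such an allocation could be computed is with a simple shelf packer, sketched below in Python; the function name, the shelf-packing strategy, and the max_width parameter are assumptions for illustration, not limitations recited in the claim.

```python
def compute_allocation(active_rects, max_width=2048):
    """Shelf-pack (w, h) active rectangles left to right, wrapping onto a
    new shelf when a row fills up. Returns the placement of each rectangle
    in the new surface and the (width, height) the new surface needs."""
    placements, x, y, shelf_h, width = [], 0, 0, 0, 0
    for w, h in active_rects:
        if x + w > max_width:          # current shelf is full; start a new one
            y, x, shelf_h = y + shelf_h, 0, 0
        placements.append((x, y, w, h))
        x += w                          # advance along the shelf
        shelf_h = max(shelf_h, h)       # shelf is as tall as its tallest rect
        width = max(width, x)
    return placements, (width, y + shelf_h)
```

The returned size would drive the allocation of the new surface, and the placements would drive the copies of each active area into it, per the final step of claim 9.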
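Claim 10 forms a mesh from a set of rectangles such that the mesh contains no T-junctions, i.e., no vertex of one triangle lies in the interior of an edge of a neighboring triangle. One simple way to guarantee this property, sketched below, is to slice the plane along every distinct rectangle coordinate so that adjacent cells always share whole edges; this grid-slicing technique and all names in the sketch are illustrative assumptions, not necessarily the method claimed.

```python
def mesh_without_t_junctions(rects):
    """rects: iterable of (x0, y0, x1, y1) axis-aligned rectangles.
    Returns a list of triangles, each a tuple of three (x, y) vertices,
    covering the rectangles with no T-junctions."""
    # Slice along every distinct x and y so shared edges meet at shared vertices.
    xs = sorted({v for x0, _, x1, _ in rects for v in (x0, x1)})
    ys = sorted({v for _, y0, _, y1 in rects for v in (y0, y1)})
    tris = []
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx, cy = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
            # emit the cell only where some rectangle covers its center
            if any(x0 <= cx <= x1 and y0 <= cy <= y1
                   for x0, y0, x1, y1 in rects):
                a, b = (xs[i], ys[j]), (xs[i + 1], ys[j])
                c, d = (xs[i + 1], ys[j + 1]), (xs[i], ys[j + 1])
                tris.extend([(a, b, c), (a, c, d)])
    return tris  # vertex/index data that could be handed to a driver
```

Because every triangle vertex lies on a global grid line, any two triangles that touch share complete edges, which is what rules out T-junctions and the cracking artifacts they cause when a driver rasterizes the surface.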
HK14102287.2A 2012-05-31 2014-03-06 Virtual surface compaction HK1189289A (en)

Applications Claiming Priority (1)

Application Number US13/485,815, Priority Date 2012-05-31

Publications (1)

Publication Number HK1189289A, Publication Date 2014-05-30
