US6847375B2 - Rendering process - Google Patents
Rendering process
- Publication number
- US6847375B2 (Application US10/193,880; US19388002A)
- Authority
- US
- United States
- Prior art keywords
- polygon
- pixel
- rendering
- specified pixel
- semi-transparent
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/503—Blending, e.g. for anti-aliasing
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
Definitions
- the present invention relates to a rendering process for displaying three-dimensional images on a two-dimensional screen, such as on a television monitor, a device used therefor, a recording medium having recorded thereon a rendering process program and such rendering process program.
- the three-dimensional polygon data are subjected to various geometric processes such as coordinate conversion, clipping and lighting, and the resultant data are further subjected to perspective projection conversion.
- the rendering processing device pastes textures having various colors and patterns onto polygons to thereby give desired colors and patterns to the objects.
- because the three-dimensional polygons are expressed with a limited number of pixels on the two-dimensional screen, an image rendered on the two-dimensional screen will exhibit various defects generally referred to as aliasing.
- the edge portion of an image traversing the two-dimensional screen obliquely will show a step-like unsmoothness (so-called jaggedness) that traces the pixel profiles along the edge of the image.
- a conventional rendering processing device employs so-called anti-aliasing, which is a process for removing or preventing aliasing such as jaggedness.
- a typical anti-aliasing process employed by the conventional rendering processing device for removing jaggedness is as follows.
- the rendering processing device first determines pixel coverage, and then sets an α value corresponding to such pixel coverage. The device then mixes a pixel color to be used as a background and a pixel color to be used as a foreground according to such α value, thereby making the jaggedness unrecognizable. This technique is adopted by an extremely large number of rendering processing devices since a single processing pass yields effective anti-aliasing.
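- as a minimal sketch (in C) of the conventional blend just described, for one color channel, the coverage value is used directly as the mixing weight; the function and variable names below are illustrative assumptions, not taken from the patent:

```c
/* Conventional edge anti-aliasing: the pixel coverage itself serves as the
 * alpha value when mixing the foreground color over the background color. */
static float coverage_blend(float fg_color, float bg_color, float coverage)
{
    return coverage * fg_color + (1.0f - coverage) * bg_color;
}
```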
- the pixel coverage herein refers to a value for expressing a fraction of the area occupied by a polygon within one pixel, and is given by a real number ranging from 0 to 1. For example, any pixel not containing an edge portion will have a value of “1” for the pixel coverage.
- the α value refers to a degree of transparency (semi-transparency) used in the rendering process of the individual pixels, and is given by a real number ranging from 0 to 1.
- a pixel having a value of “1” for the α value is an opaque pixel.
- if the foreground pixels are opaque, the color of such foreground pixels will never be mixed with the color of the background pixels.
- the foregoing anti-aliasing technique is, however, not applicable to polygons which are semi-transparent from the beginning. That is, the foregoing anti-aliasing technique gives a pixel coverage value of “1” for all pixels other than those located in the edge portion, so that the α values for such pixels are inevitably set to “1”. The foregoing anti-aliasing technique therefore undesirably changes polygons which should intrinsically be semi-transparent into opaque ones.
- the present invention was proposed to address the foregoing problems, and an object thereof resides in providing a rendering process capable of subjecting semi-transparent polygons to anti-aliasing, a device used therefor, a recording medium having recorded thereon a rendering process program and such rendering process program.
- a value for expressing a fraction of the area of a specified pixel occupied by the polygon and a value for expressing a degree of transparency corresponding to the specified pixel are multiplied with each other to obtain a multiplied product.
- the multiplied product is reset as a new degree of transparency for the specified pixel.
- a preset color for the specified pixel is mixed with a color of another pixel adjacent the specified pixel and not in the polygon based on the multiplied product.
- a value obtained by multiplying a source α by the pixel coverage is reset as a new α value, based on which α-blending is carried out. This allows semi-transparent polygons to be processed by anti-aliasing without being changed into opaque polygons.
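- a minimal sketch of this modification, again in C for one color channel, is given below; src_alpha stands for the polygon's preset (source) α, and all names are illustrative assumptions rather than the patent's own interfaces:

```c
/* Anti-aliasing of a semi-transparent polygon: the source alpha is first
 * scaled by the pixel coverage, and the scaled value is then used as the
 * alpha in an ordinary alpha-blend. */
static float blend_semitransparent(float src_color, float dst_color,
                                   float src_alpha, float coverage)
{
    float new_alpha = coverage * src_alpha;   /* Cov * source alpha */
    return new_alpha * src_color + (1.0f - new_alpha) * dst_color;
}
```

- note that for an opaque polygon (source α of 1) this expression reduces to the conventional coverage blend sketched above, so fully covered interior pixels of a semi-transparent polygon keep their intended transparency while edge pixels are still smoothed.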
- FIG. 1 is a drawing of a semi-transparent polygon rendered as being laid across an opaque polygon and a background;
- FIG. 2 is an enlarged view of an edge portion of the semi-transparent polygon rendered on the opaque polygon;
- FIG. 3 is a block diagram showing an exemplary specific constitution of a device for implementing a rendering process including anti-aliasing;
- FIG. 4 is a block diagram showing the schematic constitution of a computer implementing the rendering process.
- FIG. 5 is a flow chart of a rendering process executed by a computer.
- the source α values of the individual pixels composing the semi-transparent polygon PGb are denoted as αb
- the color of the individual pixels composing the opaque polygon PGa is denoted as Ca
- the color of the individual pixels composing the semi-transparent polygon PGb is denoted as Cb
- the color of the individual pixels composing the background BGc is denoted as Cc.
- the pixel color Ca of the opaque polygon PGa and the pixel color Cc of the background BGc are used as destination colors, and such destination colors (Ca, Cc) are α-blended with the pixel color Cb of the semi-transparent polygon PGb.
- α-blending refers to a rendering technique by which pixels composing different polygons are rendered on the same two-dimensional coordinates by mixing the individual colors thereof according to the α values.
- CTa denotes the pixel color in the overlapped area of the semi-transparent polygon PGb and the opaque polygon PGa obtained after α-blending (formula (1));
- CTc denotes the pixel color in the overlapped area of the semi-transparent polygon PGb and the background BGc obtained after α-blending (formula (2)).
- the rendering processing device generates information on the pixel coverage taking the source α value αb of the semi-transparent polygon PGb into consideration, and uses such pixel coverage as an α value in the α-blending, to thereby complete anti-aliasing for the semi-transparent polygon PGb.
- a procedure by which the rendering processing device of the present invention generates such information on the pixel coverage taking the source α value αb of the semi-transparent polygon PGb into consideration is as follows.
- CTAa = Cov*CTa + (1−Cov)*Ca (3)
- CTAc = Cov*CTc + (1−Cov)*Cc (4), where the pixel coverage value Cov equals “1” when the pixel is completely (100%) covered with the semi-transparent polygon PGb, and equals “0” when the pixel is not covered at all.
- FIG. 2 is an enlarged view of the boundary portion of the semi-transparent polygon PGb and the opaque polygon PGa shown in FIG. 1, and is used below to illustrate how formula (5) is applied in the α-blending.
- the reference symbol Eb in FIG. 2 denotes an edge boundary of the semi-transparent polygon PGb.
- Reference symbols p1 to p6, p11 to p16 and p21 to p25 respectively represent the pixels.
- Pixel coverage value Cov is now defined as 0.2 for the pixels p1 to p6, 0.8 for the pixels p11 to p16, and 1 for the pixels p21 to p25.
- the source α value αb (preset α value) of the semi-transparent polygon PGb is now defined as 0.5. Note that in FIG. 2 the pixel color of the semi-transparent polygon PGb is expressed as Cb and the pixel color of the opaque polygon PGa is expressed as Ca.
- the pixel color CTAa of the pixels p1 to p6 after anti-aliasing based on formula (5) is expressed by formula (7) below:
- CTAa = (0.2*0.5)*Cb + (1−(0.2*0.5))*Ca = 0.1*Cb + 0.9*Ca (7)
- Formula (7) indicates that the pixel colors Cb and Ca are mixed in a ratio of 0.1 to 0.9.
- Formula (8), CTAa = (0.8*0.5)*Cb + (1−(0.8*0.5))*Ca = 0.4*Cb + 0.6*Ca, similarly gives the pixel color of the pixels p11 to p16, mixing Cb and Ca in a ratio of 0.4 to 0.6.
- Formula (9), CTAa = (1*0.5)*Cb + (1−(1*0.5))*Ca = 0.5*Cb + 0.5*Ca, gives the pixel color of the pixels p21 to p25, mixing Cb and Ca in a ratio of 0.5 to 0.5.
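- the mixing ratios above can be checked with a few lines of C; the short program below simply evaluates Cov*αb for the three coverage values used in FIG. 2 (variable names are illustrative):

```c
#include <stdio.h>

int main(void)
{
    const float alpha_b = 0.5f;                  /* preset alpha of PGb          */
    const float cov[3]  = { 0.2f, 0.8f, 1.0f };  /* p1-p6, p11-p16, p21-p25      */

    for (int i = 0; i < 3; i++) {
        float w = cov[i] * alpha_b;              /* weight of Cb in the blend    */
        /* CTAa = w*Cb + (1 - w)*Ca, matching formulas (7) to (9) */
        printf("Cov = %.1f  ->  CTAa = %.1f*Cb + %.1f*Ca\n", cov[i], w, 1.0f - w);
    }
    return 0;
}
```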
- the pixels p1 to p6 and p11 to p16, which include the edge boundary Eb shown in FIG. 2, are thus anti-aliased according to the pixel coverage.
- the pixels p21 to p25, which do not include the edge boundary Eb, are never turned opaque by the anti-aliasing, so that the opaque polygon PGa can still be seen through the semi-transparent polygon PGb.
- the rendering processing device thus multiplies the source α value of the semi-transparent polygon by the pixel coverage, and uses the multiplied product as a new α value for α-blending, thereby enabling anti-aliasing of such semi-transparent polygon in a single processing pass.
- FIG. 3 shows an exemplary specific constitution of the rendering processing device responsible for the foregoing anti-aliasing.
- the constitution shown in FIG. 3 is one example by which the rendering process of the present embodiment is carried out by hardware such as a digital signal processor (DSP) or a graphic processor (GP).
- the individual components shown in FIG. 3 correspond to the individual internal processing units of such DSP or GP.
- a memory 51 stores graphic information such as polygons (apex information or apex-linked information such as coordinate values for apexes, RGB apex color values, map coordinate values and vector values).
- the graphic information herein is previously captured by being read out from various recording media such as a CD-ROM, DVD-ROM or semiconductor memory, or by being downloaded through communication or transmission media based on line or radio communication.
- a CPU 58 controls operations of the individual units based on a control program.
- a geometry calculation unit 50 retrieves stored graphic information from the memory 51 , and then subjects the retrieved graphic information to so-called affine transformation, projection conversion onto a screen coordinate, and light source processing for the apexes.
- the graphic information after the projection conversion (polygon data) is sent to a rendering unit 52 .
- the rendering unit 52 is responsible for calculation for displaying polygons on the screen, and converts polygon data sent from the geometry calculation unit 50 into pixels.
- the rendering unit 52 can roughly be divided into a polygon setup/rasterizing unit 61 (hereinafter, simply abbreviated as PSR unit 61 ), a pixel pipeline unit 63 and a frame buffer 64 .
- the rendering unit 52 is provided with a texture buffer 54 and a Z buffer 55 .
- the texture buffer 54 stores texel colors of textures, that is, R, G, B values and α values (A) for defining pixel colors for polygons.
- the Z buffer 55 stores Z values which express the depth-wise distance of an image from a viewpoint.
- the texture and Z values herein are previously captured by being read out from various recording media such as a CD-ROM, DVD-ROM or semiconductor memory, or by being downloaded through communication or transmission media based on line or radio communication.
- the PSR unit 61 is provided with a constitution for enabling linear interpolation which is known as so-called DDA (digital differential analysis).
- the PSR unit 61 is responsible for retrieving and buffering polygon data sent from the geometry calculation unit 50 , pixel generation through rasterizing, and calculation of texel coordinate values. Pixel data and texel coordinate values are sent to the pixel pipeline unit 63 .
- the PSR unit 61 is also provided with a pixel coverage parameter generation unit 62 (hereinafter, simply abbreviated as PCP unit 62 ) for finding the pixel coverage value Cov expressing the ratio of area occupied by a polygon within one pixel.
- the pixel pipeline unit 63 determines the individual pixel colors based on the texel coordinate values received from the PSR unit 61 and reference to texel colors obtained from the texture buffer 54 , and then executes texture mapping taking the Z values stored in the Z buffer 55 into consideration.
- the pixel pipeline unit 63 is also provided with a multiplication unit 71 and an α-blending unit 72.
- the multiplication unit 71 multiplies the pixel coverage value Cov obtained from the PCP unit 62 by the α value of each pixel obtained from the texture buffer 54.
- the α-blending unit 72 performs α-blending for every pixel.
- the pixel pipeline unit 63 performs the calculations expressed by formulae (5) and (6).
- for anti-aliasing of a semi-transparent polygon, the multiplication unit 71 is activated, and the α value αb of the semi-transparent polygon PGb is multiplied by the pixel coverage value Cov.
- the α-blending unit 72 then performs α-blending of each pixel using the obtained multiplied product as a new α value.
- the multiplication unit 71 of the pixel pipeline unit 63 is inactivated for anti-aliasing of opaque polygons. More specifically, the multiplication unit 71 herein does not perform multiplication. Instead, the α-blending unit 72 sets α values corresponding to the pixel coverage values for such opaque polygons, and performs α-blending depending on the newly set α values.
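- a sketch of this switchable blend stage is given below; the structure, flag and function names are assumptions made for illustration, not the actual interfaces of the device:

```c
/* One pixel passing through the blend stage of the pixel pipeline.
 * When the multiplication step is active (semi-transparent polygon), the new
 * alpha is Cov * alpha; when it is inactive (opaque polygon), the coverage
 * value itself is used as the alpha for blending. Single color channel. */
typedef struct {
    float coverage;    /* Cov from the coverage generation step          */
    float src_alpha;   /* alpha value obtained for the polygon's pixel   */
    float src_color;   /* one channel of the polygon's pixel color       */
    float dst_color;   /* one channel of the destination color           */
} PixelIn;

static float blend_stage(PixelIn p, int multiplication_active)
{
    float alpha = multiplication_active ? p.coverage * p.src_alpha
                                        : p.coverage;
    return alpha * p.src_color + (1.0f - alpha) * p.dst_color;
}
```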
- aliasing will not always be conspicuous, even when the polygons are not subjected to anti-aliasing, if such polygons have an extremely high transparency. It is therefore also allowable in the rendering processing device of the present embodiment to disable the multiplication step by inactivating the multiplication unit 71 of the pixel pipeline unit 63 for semi-transparent polygons which, by virtue of their extremely high transparency, do not show distinct aliasing and thus have little need for anti-aliasing.
- the α-blending unit 72 in this case performs α-blending depending on the α values preset to the semi-transparent polygons.
- the rendering processing device of the present embodiment can also regulate activation/inactivation of the multiplication step by the multiplication unit 71 depending on the details of the image to be rendered. More specifically, the rendering processing device will inactivate the multiplication step for an image of a less important scene, where aliasing, even if it occurs to some degree, exerts only a limited visual influence.
- the α-blending unit 72 in this case performs α-blending depending on the α values preset to the polygons in the image of such less important scene.
- the rendering processing device can activate the multiplication unit 71 when an opaque polygon is subjected to anti-aliasing, or even when a semi-transparent polygon less affected by aliasing or a polygon in a less important scene is to be handled. It is to be noted, however, that it is advantageous for the rendering processing device to inactivate the multiplication unit 71 for opaque polygons, semi-transparent polygons less affected by aliasing or polygons in a less important scene in terms of relieving the device from the calculation load.
- the rendering processing device is designed to freely select activation or inactivation of anti-aliasing depending on the details of the image to be rendered, the transparency of the polygon and so forth, so that the device can execute a proper rendering process as required. Since various rendering processes depending on need are executable in the present embodiment, a larger degree of freedom will be ensured in developmental efforts for application software for image rendering, which allows the software supplier to freely produce his or her desired software.
- Set values for setting activation or inactivation of the multiplication unit 71 are provided on a register 53 .
- Which set values are to be output from the register 53 is controlled, for example, by the CPU 58 based on the control program. More specifically, when anti-aliasing is to be performed for a semi-transparent polygon, the CPU 58 controls the register 53 so as to output a set value to activate the multiplication unit 71 . On the other hand, when anti-aliasing is to be performed for an opaque polygon, or when anti-aliasing is less necessary, the CPU 58 controls the register 53 so as to output a set value to render the multiplication unit 71 inactive.
- the CPU 58 can also determine whether a source α value of a polygon expresses a high transparency or low transparency, and switch the activation/inactivation of the multiplication unit 71 in a real-time manner based on such determination.
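- as a rough illustration of such control, the decision could be written as follows; the threshold and all names are hypothetical assumptions, not values taken from the patent:

```c
/* Choose the set value written to the register: nonzero activates the
 * multiplication unit (semi-transparent polygon that should be anti-aliased),
 * zero leaves it inactive. The threshold is a hypothetical example. */
#define HIGHLY_TRANSPARENT_ALPHA 0.1f

static int multiplication_set_value(float src_alpha, int polygon_is_opaque,
                                    int scene_is_important)
{
    if (polygon_is_opaque)                      /* coverage alone serves as alpha */
        return 0;
    if (!scene_is_important)                    /* some aliasing is tolerable     */
        return 0;
    if (src_alpha < HIGHLY_TRANSPARENT_ALPHA)   /* aliasing hardly visible        */
        return 0;
    return 1;                                   /* semi-transparent: activate     */
}
```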
- the individual pixel data output from the pixel pipeline unit 63 are sent to the frame buffer 64 .
- the frame buffer 64 is provided with a memory space corresponding to the display (screen) 57 , such as a television monitor, in which color values of the individual pixels will be written. Screen data by frames are thus formed in such memory space, and are read out upon request by a display control unit 56 .
- the display control unit 56 generates horizontal synchronizing signals and vertical synchronizing signals of the television monitor device, and also serially retrieves pixel data from the frame buffer 64 in a line-feed manner in synchronization with the display timing on the monitor.
- the serially-retrieved, line-fed color values compose a two-dimensional image which will be displayed on the display 57 .
- the rendering process of the present embodiment is not only achievable through such hardware constitution shown in FIG. 3 , but can, of course, also be implemented on a software basis (application programs for a computer).
- FIGS. 4 and 5 show the constitution and operation of a computer on which the rendering process is implemented.
- FIG. 4 shows an exemplary constitution of the principal portion of the computer.
- FIG. 5 shows a process flow according to which a CPU 123 of the computer shown in FIG. 4 executes the rendering process program of the present invention.
- a storage unit 128 typically comprises a hard disk and a drive therefor.
- Such storage unit 128 has stored therein an operating system program, a computer program 129 including the rendering process program of the present embodiment read out from various recording media, such as a CD-ROM or DVD-ROM, or downloaded through a communication line, and a variety of data 130, such as graphic information for polygon rendering, and RGBA values and Z values for textures.
- a communication unit 121 refers to a communication device responsible for data communication with external devices, which may be a modem for establishing connection to an analog public telephone line, a cable modem for establishing connection to a cable television network, a terminal adaptor for establishing connection to an ISDN (integrated services digital network), or a modem for establishing connection to an ADSL (asymmetric digital subscriber line).
- a communication interface (I/F) unit 122 refers to an interface device responsible for protocol transfer to enable send/receive of data between the communication unit 121 and an internal bus.
- An input unit 133 refers to an input device, such as a keyboard, mouse or touch pad, and a user interface (I/F) unit 132 refers to an interface device for supplying signals from such input unit 133 to the internal components.
- a drive unit 135 refers to a drive device capable of reading out various data or programs from a recording medium including a disk medium 151 , such as a CD-ROM, DVD-ROM or floppy (trade mark) disk, or from a card-type or other type of semiconductor memory.
- a drive interface (I/F) unit 134 refers to an interface device for supplying signals from the drive unit 135 to the internal components.
- a display unit 137 refers to a display device, such as a CRT (cathode ray tube) or liquid crystal display, and a display drive unit 136 is a device for driving such display unit 137 .
- the CPU 123 controls the entire operation of the personal computer based on the operating system program stored in the storage unit 128 or the computer program 129 of the present embodiment.
- a ROM 124 typically comprises a rewritable non-volatile memory, such as a flash memory, and stores a BIOS (basic input/output system) and various default values of the personal computer.
- a RAM 125 will have loaded therein application programs and various data read out from a hard disk of the storage unit 128 , and is used as a work RAM of the CPU 123 .
- the CPU 123 can accomplish the image processing described in the above embodiment by executing the rendering process program of the embodiment, which is one of the application programs read out from the storage unit 128 and loaded into the RAM 125 .
- in step S1 shown in FIG. 5, the CPU 123 retrieves from the storage unit 128 the graphic information for polygon rendering, and the RGBA values and Z values for textures, stored therein as data 130, and causes the RAM 125 to hold them.
- the CPU 123 then retrieves the graphic information held by the RAM 125, and subjects it to geometry calculation and perspective conversion, such as affine conversion, projection conversion onto screen coordinates, and light source processing for the apexes.
- in step S3, the CPU 123 performs rasterizing using the polygon data obtained by the geometry calculation, and then, in step S4, determines whether anti-aliasing is or is not necessary. If it is determined that anti-aliasing is necessary, the processing of the CPU 123 advances to step S5, and if not, to step S8.
- in step S5, the CPU 123 generates a pixel coverage value.
- in step S6, the CPU 123 determines whether the polygon is semi-transparent or opaque. If the polygon is found in step S6 to be semi-transparent, the pixel coverage value is multiplied by the α value in step S7. The CPU 123 then, in step S8, performs α-blending using such multiplication product as a new α value. This successfully finishes anti-aliasing for the semi-transparent polygon in which aliasing tends to be conspicuous.
- if the polygon is found in step S6 to be opaque, the CPU 123 performs α-blending in step S8 using the pixel coverage value obtained in step S5 as an α value, thereby subjecting such opaque polygon to anti-aliasing.
- the semi-transparent polygon already determined in step S4 as not being in need of anti-aliasing, or a polygon in a less important scene, is subjected to α-blending in step S8 using the α values preset to such polygons.
- the CPU 123 then, in step S9, generates a screen image from the pixel data, and sends information on such screen image to the display drive unit 136 in step S10. An image will thus appear on the display unit 137.
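- the branch structure of steps S4 to S8 can be summarized in a short sketch, shown below for a single color channel; all names are illustrative:

```c
/* Per-pixel decision flow corresponding to steps S4 to S8 of FIG. 5. */
static float render_pixel(float src_color, float dst_color, float src_alpha,
                          float coverage, int needs_antialiasing,
                          int is_semitransparent)
{
    float alpha;

    if (!needs_antialiasing)        /* S4: no anti-aliasing, use the preset alpha */
        alpha = src_alpha;
    else if (is_semitransparent)    /* S5-S7: new alpha = coverage * source alpha */
        alpha = coverage * src_alpha;
    else                            /* opaque polygon: coverage is the alpha      */
        alpha = coverage;

    return alpha * src_color + (1.0f - alpha) * dst_color;   /* S8: alpha-blend   */
}
```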
- the rendering processing device of the present embodiment implements anti-aliasing for a semi-transparent polygon which is in need of such processing by multiplying a source α value of such semi-transparent polygon by a pixel coverage value, and by then performing α-blending using the multiplied product as a new α value.
- the rendering processing device of the present embodiment performs α-blending also for an opaque polygon by using a general pixel coverage value as an α value.
- the rendering processing device can also control processing so as to perform only general α-blending without effecting anti-aliasing for semi-transparent polygons in which aliasing is inconspicuous, which successfully relieves the device from processing loads.
- the rendering process in the present embodiment can select activation/inactivation of anti-aliasing depending on the details of an image to be rendered or the transparency of the polygon, so that the device can execute the proper rendering process as required.
- Another advantage of the rendering process of the present embodiment is that a larger degree of freedom will be ensured in developmental efforts for application software.
- the rendering process of the present embodiment is applicable not only to a specialized video game machine or personal computer, but also to various information processing devices including portable phone terminals. While color images were exemplified by the present embodiment, the present invention is also applicable to monochrome images.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Image Generation (AREA)
Abstract
Description
CTa=αb*Cb+(1−αb)*Ca (1)
CTc=αb*Cb+(1−αb)*Cc (2)
CTAa=Cov*CTa+(1−Cov)*Ca (3)
CTAc=Cov*CTc+(1−Cov)*Cc (4)
where pixel coverage value Cov equals “1” when the pixel is completely (by 100%) covered with the semi-transparent polygon PGb, and equals “0” when the pixel is not covered at all.
CTAa=(Cov*αb)*Cb+(1−(Cov*αb))*Ca (5)
CTAc=(Cov*αb)*Cb+(1−(Cov*αb))*Cc (6)
where the term (Cov*αb)*Cb in formula (5) corresponds with the term αb*Cb in formula (1), and the term (Cov*αb)*Cb in formula (6) corresponds with the term αb*Cb in formula (2). Similarly, the term (1−(Cov*αb))*Ca in formula (5) corresponds with the term (1−αb)*Ca in formula (1), and the term (1−(Cov*αb))*Cc in formula (6) corresponds with the term (1−αb)*Cc in formula (2). That is, these formulae (5) and (6) express processing equivalent to the general α-blending except that they use, as a new α value, a source α value of the semi-transparent polygon PGb multiplied by the pixel coverage Cov.
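Formulas (5) and (6) can also be obtained by substituting formula (1) into formula (3) and formula (2) into formula (4); written out for formula (5):

```latex
\begin{aligned}
CTA_a &= Cov \cdot CT_a + (1 - Cov)\,C_a \\
      &= Cov\left(\alpha_b C_b + (1-\alpha_b)C_a\right) + (1 - Cov)\,C_a \\
      &= (Cov\,\alpha_b)\,C_b + \left(Cov(1-\alpha_b) + 1 - Cov\right)C_a \\
      &= (Cov\,\alpha_b)\,C_b + \left(1 - Cov\,\alpha_b\right)C_a .
\end{aligned}
```

The case of formula (6) is identical with Cc in place of Ca.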
CTAa=(0.2*0.5)Cb+(1−(0.2*0.5))*Ca=0.1*Cb+0.9*Ca (7)
Formula (7) indicates that the pixel colors Cb and Ca are mixed at a ratio of (0.1*Cb+0.9*Ca).
CTAa=(0.8*0.5)Cb+(1−(0.8*0.5))*Ca=0.4*Cb+0.6*Ca (8)
Formula (8) indicates that the pixel colors Cb and Ca are mixed at a ratio of (0.4*Cb+0.6*Ca).
CTAa=(1*0.5)Cb+(1−(1*0.5))*Ca=0.5*Cb+0.5*Ca (9)
Formula (9) indicates that the pixel colors Cb and Ca are mixed at a ratio of (0.5*Cb+0.5*Ca).
Claims (7)
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001213767 | 2001-07-13 | ||
JP2001-213767 | 2001-07-13 | ||
JP2002-028026 | 2002-02-05 | ||
JP2002028026A JP2003091737A (en) | 2001-07-13 | 2002-02-05 | Plotting processor, recording medium with plotting processing program recorded thereon, plotting processing program and plotting processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20030020712A1 US20030020712A1 (en) | 2003-01-30 |
US6847375B2 true US6847375B2 (en) | 2005-01-25 |
Family
ID=26618701
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/193,880 Expired - Lifetime US6847375B2 (en) | 2001-07-13 | 2002-07-12 | Rendering process |
Country Status (5)
Country | Link |
---|---|
US (1) | US6847375B2 (en) |
EP (1) | EP1408454A1 (en) |
JP (1) | JP2003091737A (en) |
KR (1) | KR20040011525A (en) |
WO (1) | WO2003009236A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050068321A1 (en) * | 2003-09-25 | 2005-03-31 | Jiao Yang (Jeff) | Anti-aliasing line pixel coverage calculation using programmable shader |
US20050134739A1 (en) * | 2003-12-22 | 2005-06-23 | Bian Qixiong J. | Controlling the overlay of multiple video signals |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4229319B2 (en) * | 2003-05-12 | 2009-02-25 | 株式会社バンダイナムコゲームス | Image generation system, program, and information storage medium |
US9105113B1 (en) * | 2004-12-15 | 2015-08-11 | Nvidia Corporation | Method and system for efficiently rendering circles |
US20070133019A1 (en) * | 2005-12-13 | 2007-06-14 | Microsoft Corporation | Transparency printing |
JP4693660B2 (en) | 2006-03-10 | 2011-06-01 | 株式会社東芝 | Drawing apparatus, drawing method, and drawing program |
JP2008164882A (en) * | 2006-12-28 | 2008-07-17 | Nec Electronics Corp | Image processor and processing method |
US8405679B2 (en) | 2008-09-09 | 2013-03-26 | Citrix Systems, Inc. | Methods and systems for per pixel alpha-blending of a parent window and a portion of a background image |
WO2010046792A1 (en) * | 2008-10-21 | 2010-04-29 | Nxp B.V. | Method of edge anti-aliasing a graphics geometry and a vectorgraphics processor for executing the same |
CN106682424A (en) * | 2016-12-28 | 2017-05-17 | 上海联影医疗科技有限公司 | Medical image adjusting method and medical image adjusting system |
CN109568961B (en) * | 2018-12-04 | 2022-06-21 | 网易(杭州)网络有限公司 | Occlusion rate calculation method and device, storage medium and electronic device |
US20230196627A1 (en) * | 2021-12-16 | 2023-06-22 | Meta Platforms Technologies, Llc | Anti-aliasing by encoding primitive edge representations |
- 2002
- 2002-02-05 JP JP2002028026A patent/JP2003091737A/en active Pending
- 2002-04-18 EP EP02720487A patent/EP1408454A1/en not_active Withdrawn
- 2002-04-18 KR KR10-2003-7015886A patent/KR20040011525A/en not_active Withdrawn
- 2002-04-18 WO PCT/JP2002/003856 patent/WO2003009236A1/en not_active Application Discontinuation
- 2002-07-12 US US10/193,880 patent/US6847375B2/en not_active Expired - Lifetime
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08235380A (en) | 1995-02-23 | 1996-09-13 | Nec Corp | Method and device for displaying polyhedron |
JPH09282473A (en) | 1996-04-16 | 1997-10-31 | Fujitsu Ltd | Anti-aliasing method and circuit, and image processing apparatus |
US6271850B1 (en) * | 1997-10-28 | 2001-08-07 | Matsushita Electric Industrial Co., Ltd. | Image generation apparatus, image generation method, image generation program recording medium, image composition apparatus, image composition method, and image composition program recording medium |
JP2000285256A (en) | 1999-03-31 | 2000-10-13 | Sharp Corp | 3D image processing device |
US6429877B1 (en) * | 1999-07-30 | 2002-08-06 | Hewlett-Packard Company | System and method for reducing the effects of aliasing in a computer graphics system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050068321A1 (en) * | 2003-09-25 | 2005-03-31 | Jiao Yang (Jeff) | Anti-aliasing line pixel coverage calculation using programmable shader |
US7164430B2 (en) * | 2003-09-25 | 2007-01-16 | Via Technologies, Inc. | Anti-aliasing line pixel coverage calculation using programmable shader |
US20050134739A1 (en) * | 2003-12-22 | 2005-06-23 | Bian Qixiong J. | Controlling the overlay of multiple video signals |
US7486337B2 (en) * | 2003-12-22 | 2009-02-03 | Intel Corporation | Controlling the overlay of multiple video signals |
Also Published As
Publication number | Publication date |
---|---|
US20030020712A1 (en) | 2003-01-30 |
EP1408454A1 (en) | 2004-04-14 |
JP2003091737A (en) | 2003-03-28 |
WO2003009236A1 (en) | 2003-01-30 |
KR20040011525A (en) | 2004-02-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6876360B2 (en) | Image generation method and device used thereof | |
US6847375B2 (en) | Rendering process | |
JP2000149053A (en) | Image processing apparatus including blending processing and method thereof | |
US7167596B2 (en) | Image processing method for generating three-dimensional images on a two-dimensional screen | |
JP2005122361A (en) | Image processor, its processing method, computer program, and recording medium | |
US6812931B2 (en) | Rendering process | |
JP3715222B2 (en) | Drawing method, drawing device, drawing processing program, recording medium recording drawing processing program, and drawing processing program execution device | |
US6903746B2 (en) | Rendering processing method | |
JPH04343185A (en) | Apparatus and method for generating graphic image | |
JP2003150146A (en) | Image processor and image processing program | |
JP3696584B2 (en) | Drawing processing method and apparatus, semiconductor device, drawing processing program, and recording medium | |
EP0486195A2 (en) | Computer graphics system | |
JP2003066943A (en) | Image processor and program | |
JPH10247241A (en) | Convolution scan line rendering | |
US20030071825A1 (en) | Image rendering method | |
KR20030082445A (en) | Facilitating interaction between video renderers and graphics device drivers | |
JP3652586B2 (en) | Image drawing system | |
JP3688765B2 (en) | Drawing method and graphics apparatus | |
JP2001109902A (en) | Texture blending method and image display device using the same | |
JP2005077750A (en) | Display device and character display control method | |
JPH08212355A (en) | Figure processing method | |
JP2006162819A (en) | Image processing apparatus, image processing method, image processing program and recording medium | |
JP2004158032A (en) | Drawing processing program to be executed by computer, recording medium with the program recorded thereon, program execution device, drawing device and method | |
JPH08110954A (en) | Image processing method, image processing device and circuit |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:WADA, SHINYA;REEL/FRAME:013374/0981 Effective date: 20020912 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FEPP | Fee payment procedure |
Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: SONY NETWORK ENTERTAINMENT PLATFORM INC., JAPAN Free format text: CHANGE OF NAME;ASSIGNOR:SONY COMPUTER ENTERTAINMENT INC.;REEL/FRAME:027445/0549 Effective date: 20100401 |
|
AS | Assignment |
Owner name: SONY COMPUTER ENTERTAINMENT INC., JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SONY NETWORK ENTERTAINMENT PLATFORM INC.;REEL/FRAME:027449/0303 Effective date: 20100401 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
FPAY | Fee payment |
Year of fee payment: 12 |