
WO1995008161A1 - Power domain algorithms for fractal image decompression - Google Patents


Info

Publication number
WO1995008161A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
node
pixel
tree
level
Prior art date
Application number
PCT/GB1994/002000
Other languages
French (fr)
Inventor
Abbas Edalat
Original Assignee
Imperial College Of Science, Technology & Medicine
Priority date
Filing date
Publication date
Application filed by Imperial College Of Science, Technology & Medicine
Priority to AU76199/94A
Publication of WO1995008161A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 9/00 Image coding
    • G06T 9/001 Model-based coding, e.g. wire frame

Definitions

  • the present invention relates to image decompression, and particularly to the decompression of images stored in Iterated Function System (IFS) format, either with or without probabilities.
  • IFS Iterated Function System
  • Brief details of IFS format, and how it is currently used as a convenient format in which to store compressed images, are given in section 1 below.
  • the two known algorithms which are commonly used for decompressing an image stored in IFS format, the random iteration algorithm and the greyscale photocopy algorithm, are both relatively slow.
  • both operate by iterative processes, and it is normally impossible to decide in advance how many iterations will be required to produce a reasonable decompressed image at a particular resolution.
  • the number of iterations to be used is either decided arbitrarily in advance, or (when the decompression is being carried out interactively) the user simply watches the image building up on a computer screen and stops the process when the image appears subjectively acceptable. Neither of these is of course ideal.
  • because the number of iterations is indeterminate, it is not possible to provide an analytical complexity analysis to determine how long it will take each of the prior art algorithms to decompress a given image to a particular resolution.
  • a method of decompressing a compressed image stored in IFS format having N maps f1 ... fN, the image to be decompressed to a resolution corresponding to a pixel size q, comprising:
  • nodes f1X0 ... fNX0 to produce a second-level series of nodes f1f1X0 ... fNfNX0
  • nodes are defined as leaf nodes
  • clauses (c) to (g) are separate steps which will necessarily be carried out sequentially.
  • each pixel of the decompressed image can be calculated immediately that the first leaf node corresponding to that pixel has been found. That is the case with the Plotkin power domain algorithm, in which the IFS generates a black and white image.
  • an individual pixel will be set to white (if the ground is black) or black (if the ground is white) immediately that a leaf node is located which corresponds to that particular pixel. That particular pixel may then be output or stored (step (g) in Claim 1) immediately, without waiting for the entire tree to be built up.
  • the tree will not necessarily be built up by applying clauses (c) to (e) strictly sequentially.
  • the tree is searched branch by branch, rather than level by level, with each branch being followed until it results in a leaf node.
  • the algorithm may then back-up the tree to the next closest branching point, and follow the branch on from there until it reaches the next leaf node.
  • the tree may be sequentially searched by this method until all leaf nodes have been found.
  • the tree is searched level by level; in other words the first level of nodes is built up from the root point, and each node on that level is checked to see whether it is a leaf node.
  • the algorithm then proceeds sequentially through all of the nodes on the first level, constructing branches from each of those nodes to the nodes on the second level. As the second level nodes are being built up, the algorithm systematically checks whether each newly created node is a leaf node.
  • the algorithm which constructs the tree does not form any additional nodes onto those nodes which have been identified as leaf nodes.
  • the tree may be conveniently searched by a recursive algorithm, for example an algorithm having a recursively-defined procedure which takes as input a given node and produces as output all those next-level nodes which hang from the input node.
  • the decompressed image may be in black and white, with each pixel in the decompressed image being white if one or more of the leaf nodes corresponds to that pixel, and black otherwise; or vice versa.
  • Such an image would normally be produced from a compressed image stored in IFS format without probabilities.
  • a black and white image may be obtained from an IFS image with probabilities, simply by ignoring the probabilities. That will represent the support of the coloured or greyscale image.
  • An image stored in IFS format with probabilities may be decoded as a greyscale image or as a coloured image (with suitable mapping).
  • each node in the tree has a node weight associated with it, each node weight being the product of the probabilities corresponding to all of the maps that make up the said node.
  • the initial or root node corresponds to the unit square, and that is given weight 1. All of the nodes at any particular level have a cumulative weight of 1.
  • the colour or density of each pixel in the decompressed image may be a function of the sum of the node weights of all the leaf nodes corresponding to the said pixel.
  • the maximum size of the parallelogram corresponding to each individual node may be determined by calculating, for example, the maximum length that the side of the parallelogram can take.
  • the maximum size is determined according to the contractivity of the various transformations making up the parallelogram.
  • the maximum size of the parallelogram may be taken as the product of the contractivities of the transformations which make up the corresponding node.
  • the repeated mappings may be applied to a single point on or within the parallelogram. Since each parallelogram must eventually shrink to the size of a single pixel, it can be guaranteed that applying the transformations repeatedly to a single point within the parallelogram will ensure that the position of that point eventually converges to the same pixel.
  • a convenient choice for the root point is the centre of the area A, within which the image is to be reconstructed. Because of the nature of the transformations, each transformation applied to the central point X0 will map that point to the exact centre of the parallelogram which results from applying that particular mapping to A.
  • the area A is taken to be the unit square, by suitable scaling if necessary (ie the area A may be rescaled to the unit square between (a) and (b) of Claim 1).
  • the individual pixel size may then be taken as 1/r, where r is the screen resolution at which the decompressed image is to be plotted. Alternatively, one could of course equivalently take the pixel size to be one unit square, in which case the area A would have side r.
  • each of the maps f 1 ...f N are applied to the same point X 0 .
  • fractal image compression In fractal image compression, one seeks to encode and decode a colour or a greyscale digitised image on the computer screen.
  • a colour map a colour or a greyscale image can be presented by assigning a number to each pixel indicating its brightness or intensity. We can think of this number as the weight of the pixel. Any section of the image will then have a total weight equal to the sum of the weights of its pixels. It is technically convenient to rescale the weights of pixels so that the screen as a whole, consisting of say r x r pixels (where r is the resolution of the screen), has unit weight in total. In this way, we say that the image is represented by a normalised measure on the screen, which we can regard as the unit square.
  • any normalised measure on the screen i.e any distribution of weights on pixels which add up to one corresponds to an image.
  • the encoding of an image is achieved by doing a self-tiling of the image using a family of weighted affine transformations of the plane. We will describe this in more detail in section 1.3.
  • An affine transformation of the plane is a combination of a rotation, a rescaling, a shear and a translation in the plane.
  • Any affine transformation f : R 2 ⁇ R 2 of the plane has the form
  • the matrix is the linear part of the transformation; it is the combination of a rotation, a rescaling and a shear.
  • the vector is the translation part of the transformation.
  • An affine transformation is contracting if there is a number s with 0 ≤ s < 1 such that the distance between any two points (x1, y1) and (x2, y2) on the plane is contracted by at least a factor s, i.e.
  • an IFS with probabilities determines a unique normalised measure, i.e. a unique image on the plane, called the invariant measure of the IFS. Therefore, we can say that the IFS encodes this measure or image.
  • the existence and uniqueness of the invariant measure is shown in [Bar88, page 336].
  • the IFS with probabilities with the code given in the table above determines the image of a fern shown in Figure 1. The picture on the left is the actual greyscale image, whereas the picture on the right gives the support of the image in white.
  • the greyscale photo copy algorithm is equipped with an operator M, called the Markov operator, which given any image ⁇ as input produces a new image M( ⁇ ) as output.
  • μ is a digitised image, i.e. for each pixel zkl (1 ≤ k, l ≤ r) we have μ(zkl) ≥ 0, with the pixel weights summing to one.
  • f i -1 ( z kl ) is the set of all pixels which are mapped to z kl by f i .
  • the algorithm starts with an initial digitised image ⁇ and repeatedly applies M until a convergence to an image takes place. This limiting image is then the required image: It is the digitised approximation to ⁇ *. In most practical embodiments, the initial image is taken to be a uniformly grey image.
  • the random iteration algorithm generates the image corresponding to an IFS with probabilities
  • the random iteration algorithm To use the random iteration algorithm one has to fix a number n of iterations in advance, which for each IFS with probabilities is found by trial and error for any given resolution of the screen. Furthermore, the selection of the integer i from 1, 2, . . . , N with probability pi is quite time consuming as it takes about log2 N computational steps. Last but not least, the algorithm, as it is random, may fail, during the n iterations specified, to visit some of the pixels which have non-zero μ* value, i.e. which in fact lie on the support of the original image.
  • Each term pi μ∘fi-1 in the above sum represents a copy of the original image transformed by the map fi and attenuated in brightness by the factor pi. Therefore the above equation represents a self-tiling, or a collage, of μ. One proceeds by first assuming all probabilities are zero except p1. Then f1 and p1 are chosen such that p1 μ∘f1-1 approximates a part of the image μ. Next p2 is allowed to be non-zero, and f2 and p2 are chosen so that p1 μ∘f1-1 + p2 μ∘f2-1 approximates a larger part of the image.
  • Figure 2 shows a fern (to be considered as a black and white image) on the left, a collage of four tiles with the following IFS code in the middle, and the decompressed image on the right.
  • Figure 1 shows an image created from an IFS with probabilities, along with the corresponding support:
  • Figure 2 shows an image to be encoded, the tiling for an IFS code, and the corresponding decompressed image
  • Figure 3 illustrates in schematic form a tree which is produced by the algorithm of the present invention
  • Figure 4 represents a branch of the tree of Figure 3.
  • the probabilistic power domain algorithm generates the invariant measure of an IFS with probabilities, i.e. it decodes an image. It is therefore an alternative to the greyscale photo copy algorithm and the random iteration algorithm described in the last section.
  • the root of the tree is the unit square X on depth zero.
  • each of the parallelograms in the tree is assigned a weight.
  • the unit square X is given weight one. This weight is distributed among the parallelograms on the next level by giving each f i X weight p i .
  • the weight pi is then redistributed among the children of fiX by giving fifjX weight pipj. Note that
  • Each branch of the tree eventually shrinks to a pixel.
  • for n = [-log r/log s] + 1, where [a] is the greatest integer less than or equal to a, all parallelograms on level n have already shrunk to pixels, i.e.
  • ⁇ * is computed.
  • the value ⁇ * ⁇ z) for a pixel z is given simply by the sum of the weights of all the leaves which are represented by z.
  • the algorithm therefore traverses the finite tree in some specified way in order to find the leaves and their weights. Then, for each pixel the weights of the leaves represented by that pixel are summed up to give the total weight of each pixel, which determines the normalised measure μ*.
  • Figure 5 illustrates more graphically what is happening at each branch of the tree.
  • the individual mappings (three in this case) are applied to create the three large parallelograms f 1 X, f 2 X and f 3 X.
  • Each of the three mappings is then applied again to each of the three parallelograms.
  • the repeated operation is shown only for the mapping f 1 .
  • Applying the mapping f 1 to the three parallelograms f 1 X, f 2 X and f 3 X produces the three smaller parallelograms f 1 f 1 X, f 1 f 2 X and f 1 f 3 X.
  • the procedure is repeated, down every branch, until every parallelogram has shrunk to the size of a single pixel.
  • the r x r array of real numbers μ*(zkl) gives the weights assigned to the pixels zkl for 1 ≤ k, l ≤ r. It will be appreciated that the algorithm does not make use of the positions of the parallelograms at all: merely the positions of their central points, and the bounds on the length of their sides.
  • the probabilistic power domain algorithm, by efficiently traversing the finite tree, uses the least number of computations to make the best possible digitised approximation to the invariant measure of an IFS with probabilities.
  • [a] is the greatest integer less than or equal to a.
  • a simple calculation shows that at each stage of computation 10 arithmetic operations have to be performed. It follows that the total number of computations performed by the algorithm before it stops is between 10(N + N2 + . . . + Nh) and 10(N + N2 + . . . + Nn), i.e. between 10N(Nh - 1)/(N - 1) and 10N(Nn - 1)/(N - 1).
  • the probabilistic power domain algorithm Since one cannot have a complexity analysis for the greyscale photo copy algorithm and the random iteration algorithm (as in both cases one has to specify the number of iterations by trial and error), we cannot make a direct analytical comparison between the probabilistic power domain algorithm and the other two. However, on all inputs we have tried, the probabilistic power domain algorithm is, often up to several times, faster than the random iteration algorithm in producing a good quality image.
  • An IFS {f1, f2, . . . , fN} generates a black and white image which is in fact the support of the invariant measure of (f1, f2, . . . , fN; p1, p2, . . . , pN) for any set of probabilities pi assigned to the maps fi.
  • This black and white image can be obtained using the photo copy algorithm [BH93].
  • the Plotkin power domain algorithm decodes an IFS to give a black and white image. The same tree given in section 2 is used. This time however there are no weights assigned to the nodes. The algorithm simply determines the leaves of the tree and plots the corresponding pixels on the screen. Again the algorithm terminates without needing to specify a number of iterations.
  • a pixel forms part of the image if that pixel corresponds to at least one of the leaf nodes in the tree. Accordingly, the pixel can be plotted immediately that a leaf node is reached.
  • the pseudo-code for the Plotkin power domain algorithm is similar to that set out at section 2.2.1 above, except that all reference to node weights is omitted.
  • the pixel may be plotted at the line currently devoted to calculating the total pixel weight, so that the final plotting step (step 3) may be omitted.
  • Both the probabilistic power domain algorithm and the Plotkin power domain algorithm share a number of advantages: they terminate in finite time on any digitised screen without the necessity of fixing the number of iterations in advance; there is a simple complexity analysis (given in section 2.3 above); and the algorithms produce good quality images normally several times faster than in the prior art.
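The Plotkin power domain algorithm described in the points above can be sketched as follows. This is our own minimal illustration under the stated assumptions (the helper names are not the patent's pseudocode): the same tree is traversed, without weights, and a pixel is plotted as soon as any leaf node lands on it.

```python
import math

def plotkin_decode(maps, r):
    """Decode an IFS (no probabilities) to an r x r black/white image.

    Each map is a tuple (a, b, c, d, g, h) acting on the unit square.
    Returns a boolean grid: True where a leaf node falls.
    """
    q = 1.0 / r                                  # pixel size
    img = [[False] * r for _ in range(r)]

    def contractivity(a, b, c, d):
        # least s with |f(p1) - f(p2)| <= s|p1 - p2|
        al, be, ga = (a*a + c*c) / 2, (b*b + d*d) / 2, a*b + c*d
        return math.sqrt(al + be + math.sqrt((al - be)**2 + ga*ga))

    s = [contractivity(a, b, c, d) for (a, b, c, d, g, h) in maps]

    def grow(x, y, size):
        # 'size' bounds the side of the parallelogram containing (x, y)
        if size <= q:                            # leaf node: plot at once
            img[min(int(y * r), r - 1)][min(int(x * r), r - 1)] = True
            return
        for (a, b, c, d, g, h), si in zip(maps, s):
            grow(a*x + b*y + g, c*x + d*y + h, size * si)

    grow(0.5, 0.5, 1.0)                          # root point: centre of X
    return img
```

As noted above, no iteration count is fixed in advance: each branch terminates by itself once the product of contractivities along it drops below the pixel size.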

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A method of decompressing a greyscale or colour image stored in Iterated Function Systems (IFS) format with probabilities comprises applying each transformation in turn to the unit square to produce a series of parallelograms, each of which is contained within the unit square. Each transformation is then applied to each of the parallelograms, to produce a second level of parallelograms, all of which will be smaller than and contained within the parent parallelograms. The process is repeated until the parallelograms shrink to the size of a single pixel. The colour or intensity of each pixel of the decompressed image can then be calculated as a function of the number of parallelograms which have shrunk onto that particular pixel, weighted according to a parallelogram weighting factor. Black and white images can be accommodated by ignoring the probabilities.

Description

POWER DOMAIN ALGORITHMS FOR FRACTAL IMAGE
DECOMPRESSION The present invention relates to image decompression, and particularly to the decompression of images stored in Iterated Function System (IFS) format, either with or without probabilities.
Brief details of IFS format, and how it is currently used as a convenient format in which to store compressed images are given in section 1 below. As will be described in that section, the two known algorithms which are commonly used for decompressing an image stored in IFS format, the random iteration algorithm and the greyscale photocopy algorithm, are both relatively slow. In addition, both operate by iterative processes, and it is normally impossible to decide in advance how many iterations will be required to produce a reasonable decompressed image at a particular resolution. The number of iterations to be used is either decided arbitrarily in advance, or (when the decompression is being carried out interactively) the user simply watches the image building up on a computer screen and stops the process when the image appears subjectively acceptable. Neither of these is of course ideal. Finally, because the number of iterations is indeterminate, it is not possible to provide an analytical complexity analysis to determine how long it will take each of the prior art algorithms to decompress a given image to a particular resolution.
It is an object of the present invention at least to alleviate the above disadvantages of the prior art algorithms.
According to the present invention there is provided a method of decompressing a compressed image stored in IFS format having N maps f1 ... fN, the image to be decompressed to a resolution corresponding to a pixel size q, the method comprising:
(a) selecting a square area A such that each of the N maps f1... fN acts to transform A into itself;
(b) selecting a root point or points x0 within the area A;
(c) (i) applying each of the maps f1... fN to the or each point x0 to
produce a first-level series of nodes f1x0... fNx0 representing the ends of branches of an emerging tree;
(ii) determining for each first-level node the maximum size of
the parallelogram within A in which the node must fall, and defining the node as a leaf node if the maximum size is q or less;
(d) (i) applying each of the maps f1... fN to each of the first level
nodes f1X0 ... fNX0 to produce a second-level series of nodes f1f1X0 ...
fNfNX0 representing the ends of branches of the emerging tree;
(ii) determining for each second-level node the maximum size of the parallelogram within A in which the node must fall, and defining the node as a leaf node if the maximum size is q or less;
(e) repeating (c), incrementing the level each time, until all of the
nodes are defined as leaf nodes;
(f) calculating each pixel of the decompressed image as a function of
those leaf nodes which correspond to the said pixel; and
(g) outputting or storing the decompressed image.
It will of course be appreciated that there are various ways in which a tree of this type can be built up and/or searched for leaf nodes, and it is not to be understood in the main claim that clauses (c) to (g) are separate steps which will necessarily be carried out sequentially. For example, although it is possible that clause (f) may be carried out wholly after the tree has been constructed, it is also possible that, at least in some embodiments, each pixel of the decompressed image can be calculated immediately that the first leaf node corresponding to that pixel has been found. That is the case with the Plotkin power domain algorithm, in which the IFS generates a black and white image. In that embodiment, an individual pixel will be set to white (if the ground is black) or black (if the ground is white) immediately that a leaf node is located which corresponds to that particular pixel. That particular pixel may then be output or stored (step (g) in Claim 1) immediately, without waiting for the entire tree to be built up.
In a similar manner, the tree will not necessarily be built up by applying clauses (c) to (e) strictly sequentially. In one preferred algorithm, the tree is searched branch by branch, rather than level by level, with each branch being followed until it results in a leaf node. The algorithm may then back-up the tree to the next closest branching point, and follow the branch on from there until it reaches the next leaf node. The tree may be sequentially searched by this method until all leaf nodes have been found.
In an alternative embodiment, the tree is searched level by level; in other words the first level of nodes is built up from the root point, and each node on that level is checked to see whether it is a leaf node. The algorithm then proceeds sequentially through all of the nodes on the first level, constructing branches from each of those nodes to the nodes on the second level. As the second level nodes are being built up, the algorithm systematically checks whether each newly created node is a leaf node.
Preferably, the algorithm which constructs the tree does not form any additional nodes onto those nodes which have been identified as leaf nodes.
The tree may be conveniently searched by a recursive algorithm, for example an algorithm having a recursively-defined procedure which takes as input a given node and produces as output all those next-level nodes which hang from the input node.
The decompressed image may be in black and white, with each pixel in the decompressed being white if one or more of the leaf nodes corresponds to that pixel, and black otherwise; or vice versa. Such an image would normally be produced from a compressed image stored in IFS format without probabilities. However, a black and white image may be obtained from an IFS image with probabilities, simply by ignoring the probabilities. That will represent the support of the coloured or greyscale image.
An image stored in IFS format with probabilities may be decoded as a greyscale image or as a coloured image (with suitable mapping). In such a case, each node in the tree has a node weight associated with it, each node weight being the product of the probabilities corresponding to all of the maps that make up the said node. Preferably, the initial or root node corresponds to the unit square, and that is given weight 1. All of the nodes at any particular level have a cumulative weight of 1. With such an arrangement, the colour or density of each pixel in the decompressed image may be a function of the sum of the node weights of all the leaf nodes corresponding to the said pixel.
The maximum size of the parallelogram corresponding to each individual node may be determined by calculating, for example, the maximum length that the side of the parallelogram can take. Preferably, the maximum size is determined according to the contractivity of the various transformations making up the parallelogram. For example, the maximum size of the parallelogram may be taken as the product of the contractivities of the transformations which make up the corresponding node.
In using the method of the present invention, it is not necessary to determine how all of the co-ordinates of the parallelograms transform each time a mapping is applied. Instead, the repeated mappings may be applied to a single point on or within the parallelogram. Since each parallelogram must eventually shrink to the size of a single pixel, it can be guaranteed that applying the transformations repeatedly to a single point within the parallelogram will ensure that the position of that point eventually converges to the same pixel.
A convenient choice for the root point is the centre of the area A, within which the image is to be reconstructed. Because of the nature of the transformations, each transformation applied to the central point X0 will map that point to the exact centre of the parallelogram which results from applying that particular mapping to A. Conveniently, the area A is taken to be the unit square, by suitable scaling if necessary (ie the area A may be rescaled to the unit square between (a) and (b) of Claim 1). The individual pixel size may then be taken as 1/r, where r is the screen resolution at which the decompressed image is to be plotted. Alternatively, one could of course equivalently take the pixel size to be one unit square, in which case the area A would have side r.
Conveniently, at the first stage in creating the tree, each of the maps f1...fN are applied to the same point X0. Alternatively (although not in the preferred embodiment) one could start with several different root points, all lying within the initial area A, with the first mapping being applied to the first point, the second to the second point and so on. Since all points on the initial area A must converge to the same pixel, for a given leaf node, choosing several root points will not prevent the algorithm from converging.
It is not necessary for the tree or any part of it to be built up sequentially. Given a suitable parallel algorithm, and parallel processing hardware, some parts of the tree may be built up at the same time as other parts, thus increasing the decompression speed.
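As a concrete illustration, the method of steps (a) to (g) can be sketched as follows. This is a minimal sketch, not the patent's own pseudocode: the helper names and the Python realisation are ours. The tree is traversed branch by branch, the leaf test uses the product of contractivities as the bound on parallelogram size, and each leaf deposits its node weight onto the pixel containing the transformed root point.

```python
import math

def contractivity(a, b, c, d):
    # least s with |f(p1) - f(p2)| <= s|p1 - p2|, i.e. the largest
    # singular value of the linear part of the affine map
    al, be, ga = (a*a + c*c) / 2, (b*b + d*d) / 2, a*b + c*d
    return math.sqrt(al + be + math.sqrt((al - be)**2 + ga*ga))

def decompress(maps, r):
    """Decode an IFS with probabilities to an r x r array of pixel weights.

    Each map is a tuple (a, b, c, d, g, h, p) acting on the unit square.
    """
    q = 1.0 / r                                  # pixel size
    img = [[0.0] * r for _ in range(r)]
    s = [contractivity(a, b, c, d) for (a, b, c, d, g, h, p) in maps]

    def grow(x, y, size, weight):
        # (x, y) is the image of the root point under the branch so far;
        # 'size' bounds the side of the corresponding parallelogram
        if size <= q:                            # leaf node: deposit weight
            img[min(int(y * r), r - 1)][min(int(x * r), r - 1)] += weight
            return
        for (a, b, c, d, g, h, p), si in zip(maps, s):
            grow(a*x + b*y + g, c*x + d*y + h, size * si, weight * p)

    grow(0.5, 0.5, 1.0, 1.0)                     # root: centre of A, weight 1
    return img
```

No iteration count is chosen: the recursion terminates because each branch's size bound shrinks by a factor si < 1 per level, and the resulting pixel weights sum to one, as required of a normalised measure.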
1 Fractal Image Compression Using Measures
Following the original work of Barnsley and his colleagues in the mid 1980's which was presented in [BD85, BJRS88, BS88, Bar88], fractal image compression has in the course of the past few years become an important area of research and development. It has been competing successfully with its main rival, JPEG, an image compression technique based on the discrete cosine transform. Compression ratios of several hundred can be achieved by the fractal method, as compared with compression ratios of 10 to 20 by JPEG.
There are different fractal image compression techniques. One, which concerns us here directly, uses measures and iterated function systems with probabilities. It has been described in [BH93, page 113]. In this section, we will briefly review this method. In the next section we will present a new algorithm for decompression.
1.1 Images as Measures
In fractal image compression, one seeks to encode and decode a colour or a greyscale digitised image on the computer screen. Using a colour map, a colour or a greyscale image can be presented by assigning a number to each pixel indicating its brightness or intensity. We can think of this number as the weight of the pixel. Any section of the image will then have a total weight equal to the sum of the weights of its pixels. It is technically convenient to rescale the weights of pixels so that the screen as a whole, consisting of say r x r pixels (where r is the resolution of the screen), has unit weight in total. In this way, we say that the image is represented by a normalised measure on the screen, which we can regard as the unit square. Conversely any normalised measure on the screen, i.e. any distribution of weights on pixels which add up to one, corresponds to an image. We can therefore identify (colour or greyscale) images on the screen and normalised measures on the unit square. Given an image, the set of all pixels which have non-zero weights is called the support of the image.
The encoding of an image is achieved by doing a self-tiling of the image using a family of weighted affine transformations of the plane. We will describe this in more detail in section 1.3.
1.2 Iterated Function Systems with Probabilities
An affine transformation of the plane is a combination of a rotation, a rescaling, a shear and a translation in the plane. Any affine transformation f : R2 → R2 of the plane has the form

f(x, y) = (ax + by + g, cx + dy + h)

where (x, y) ∈ R2 is any point on the plane. It is convenient to express this in matrix notation as

f(x, y) = [ a  b ] [ x ]  +  [ g ]
          [ c  d ] [ y ]     [ h ]

The matrix

[ a  b ]
[ c  d ]

is the linear part of the transformation; it is the combination of a rotation, a rescaling and a shear. The vector

[ g ]
[ h ]

is the translation part of the transformation.
An affine transformation is contracting if there is a number s with 0 ≤ s < 1 such that the distance between any two points (x1, y1) and (x2, y2) on the plane is contracted by at least a factor s, i.e.

|f(x1, y1) - f(x2, y2)| ≤ s |(x1, y1) - (x2, y2)|

For a contracting transformation f as above, the least number s with this property is called the contractivity of the affine transformation and is given by

s = sqrt( α + β + sqrt( (α - β)2 + γ2 ) )

where α = (a2 + c2)/2, β = (b2 + d2)/2, and γ = ab + cd.
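The closed form for the contractivity is the largest singular value of the linear part of the map. A quick numerical check of the formula (a sketch; the variable names are ours):

```python
import math

def contractivity(a, b, c, d):
    # s = sqrt(alpha + beta + sqrt((alpha - beta)^2 + gamma^2)),
    # with alpha = (a^2 + c^2)/2, beta = (b^2 + d^2)/2, gamma = a*b + c*d
    alpha, beta = (a*a + c*c) / 2, (b*b + d*d) / 2
    gamma = a*b + c*d
    return math.sqrt(alpha + beta + math.sqrt((alpha - beta)**2 + gamma**2))

# a pure rescaling by 0.76 (cf. the first map of the fern code below)
# contracts distances by exactly 0.76
assert abs(contractivity(0.76, 0.0, 0.0, 0.76) - 0.76) < 1e-9

# a pure rotation preserves distances, so s = 1 and it is not contracting
assert abs(contractivity(0.6, -0.8, 0.8, 0.6) - 1.0) < 1e-9
```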
An Iterated Function System (IFS) on the plane is given by a finite collection of contracting affine transformations fi ( i = 1, 2, . . . , N) of the plane. An IFS with probabilities is an IFS such that each fi is assigned a probability pi with 0 < pi < 1 and p1 + p2 + . . . + PN = 1. Given an IFS with probabilities
( f1, f2, .. . , fN; P1 , p2, . . .. pN)
we can store the coefficients of the maps /,• and the probabilities in a table, which gives the code of the IFS with probabiUties. For example the following table is the code of an IFS with probabilities with four maps (N = 4).
f     a      b      c      d      g      h      p
1 0.76 0.00 0.00 0.76 0.12 0.25 0.72
2 0.00 0.00 0.00 0.26 0.51 -0.00 0.01
3 0.20 0.26 -0.23 0.22 0.41 0.24 0.13
4 -0.15 -0.28 -0.26 0.24 0.59 0.24 0.24
An IFS with probabilities determines a unique normalised measure, i.e. a unique image on the plane, called the invariant measure of the IFS. Therefore, we can say that the IFS encodes this measure or image. The existence and uniqueness of the invariant measure is shown in [Bar88, page 336]. For example the IFS with probabilities with the code given in the table above determines the image of a fern shown in Figure 1. The picture on the left is the actual greyscale image, whereas the picture on the right gives the support of the image in white.
We will now describe the two prior art algorithms which have been used up to now to generate, i.e. to decode, the image corresponding to an IFS with probabilities. These algorithms should be compared with the new algorithm presented in Section 2.
We refer the reader to [Bar88, BH93] for the theoretical basis of the following algorithms. We can assume, by a change of coordinates and rescaling if necessary, that each contracting affine transformation fi (i = 1, 2, . . . , N) maps the unit square, denoted by X, into itself (i.e. each transformation maps the unit square to a parallelogram whose sides fall entirely within the unit square). Any point x in the unit square is represented by its closest pixel zkl, for some k and l with 1 ≤ k, l ≤ r on the screen, where r is the resolution.

1.2.1 The Greyscale Photo Copy Algorithm
Given an IFS with probabilities
(f1 , f2 , . . . , fN; p1 , p2, . . . , pN )
the greyscale photo copy algorithm is equipped with an operator M, called the Markov operator, which given any image μ as input produces a new image M(μ) as output. Suppose μ is a digitised image, i.e. for each pixel zkl (1 ≤ k, l ≤ r) we have μ(zkl) ≥ 0 with

    Σ_{k,l=1..r} μ(zkl) = 1.

Then the new image M(μ) is obtained by assigning the following weight to pixel zkl:

    M(μ)(zkl) = Σ_{i=1..N} pi μ(fi^{-1}(zkl)),

where fi^{-1}(zkl) is the set of all pixels which are mapped to zkl by fi.
The invariant measure μ* of the IFS is in fact the unique fixed point of M, i.e. μ* is the unique normalised measure which satisfies M(μ* ) = μ*. Moreover for any initial image μ the sequence of iterates of M , i.e.
μ, M(μ), M^2(μ), M^3(μ), . . . , M^n(μ), . . .
always tends to μ*. This means that if n is sufficiently large, the image represented by Mn(μ) is visually close to μ*.
Therefore the algorithm starts with an initial digitised image μ and repeatedly applies M until convergence to an image takes place. This limiting image is then the required image: it is the digitised approximation to μ*. In most practical embodiments, the initial image is taken to be a uniformly grey image.
The algorithm eventually produces a good approximation to μ* for large n. However, as the above formula for M(μ) shows, calculation of each new iterate of M is computationally intensive and requires a significant amount of memory (it needs to store 2r^2 numbers): beyond the capacity of a PC. Furthermore, there is no criterion defining how many iterations should be performed, and the number n of iterations has to be fixed in advance by trial and error depending on the IFS and the resolution of the screen.
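For illustration only (none of this code appears in the patent), one step of a Markov-operator iteration can be sketched as follows. Rather than collecting the preimages fi^{-1}(zkl), each pixel's weight is pushed forward through every map, which distributes the same totals. The two-map IFS, the grid size r = 16 and the iteration count are hypothetical choices.

```python
# A hedged sketch of the greyscale photo copy algorithm on an r x r grid.
# Each pixel's weight is pushed forward through every map f_i, scaled by
# the probability p_i -- the push-forward formulation of M.

def apply_markov(mu, maps, probs, r):
    new = [[0.0] * r for _ in range(r)]
    for k in range(r):
        for l in range(r):
            w = mu[k][l]
            if w == 0.0:
                continue
            x, y = (k + 0.5) / r, (l + 0.5) / r    # centre of pixel (k, l)
            for f, p in zip(maps, probs):
                fx, fy = f(x, y)
                kk, ll = min(int(fx * r), r - 1), min(int(fy * r), r - 1)
                new[kk][ll] += p * w               # pixel receiving f_i of this pixel
    return new

# Hypothetical two-map IFS whose attractor lies on the diagonal.
maps = [lambda x, y: (x / 2, y / 2),
        lambda x, y: (x / 2 + 0.5, y / 2 + 0.5)]
probs = [0.5, 0.5]
r = 16
mu = [[1.0 / (r * r)] * r for _ in range(r)]       # uniformly grey start
for _ in range(8):                                 # n fixed in advance, by trial and error
    mu = apply_markov(mu, maps, probs, r)
```

Note how the sketch exhibits exactly the drawbacks described above: every pixel is visited on every iteration, and the loop count 8 is an arbitrary advance choice.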
1.2.2 The Random Iteration Algorithm
The random iteration algorithm generates the image corresponding to an IFS with probabilities

    (f1, f2, . . . , fN; p1, p2, . . . , pN)

as follows. Initially, a point q0 of the unit square is chosen. Then an integer between 1 and N is chosen at random, with pi being the probability of choosing i. Suppose i1 is chosen. This determines a new point, namely q1 = f_{i1}(q0). We then repeat this procedure to choose, say, i2 and put q2 = f_{i2}(q1). In this way we find a sequence of points

    q0, q1, q2, . . . , qn, . . .

where

    qn = f_{in}(q_{n-1}).

When n is large enough, the distribution of the first n terms of the above sequence on the screen approximates the required image μ*, i.e. the invariant measure of the IFS. More precisely, let L(n, zkl) be the total number of the first n terms of the above sequence which are represented by the pixel zkl. Then we have

    lim_{n→∞} L(n, zkl)/n = μ*(zkl),

i.e. for large n, the fraction L(n, zkl)/n is an approximation to μ*(zkl).
Like the greyscale photo copy algorithm, to use the random iteration algorithm one has to fix a number n of iterations in advance, which for each IFS with probabilities is found by trial and error for any given resolution of the screen. Furthermore, the selection of the integer i from 1, 2, . . . , N with probability pi is quite time consuming, as it takes about log2 N computational steps. Last but not least, the algorithm, as it is random, may fail, during the n iterations specified, to visit some of the pixels which have non-zero μ* value, i.e. which in fact lie on the support of the original image.
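A hedged Python sketch of the random iteration algorithm follows; the two-map IFS, the seed and the iteration count n are hypothetical illustration choices, not taken from the patent.

```python
# The "chaos game": q_n = f_{i_n}(q_{n-1}) with i_n drawn at random with
# probability p_{i_n}; the visit frequency L(n, z)/n approximates mu*(z).
import random

def random_iteration(maps, probs, n, r, seed=0):
    rng = random.Random(seed)
    counts = {}
    x, y = 0.5, 0.5                              # initial point q0
    for _ in range(n):
        (f,) = rng.choices(maps, weights=probs)  # pick f_i with probability p_i
        x, y = f(x, y)
        z = (min(int(x * r), r - 1), min(int(y * r), r - 1))
        counts[z] = counts.get(z, 0) + 1
    return {z: c / n for z, c in counts.items()}

maps = [lambda x, y: (x / 2, y / 2),
        lambda x, y: (x / 2 + 0.5, y / 2 + 0.5)]
approx = random_iteration(maps, [0.5, 0.5], n=20000, r=8)
```

Note that n = 20000 must be fixed in advance, and with an unlucky seed some low-weight support pixels could remain unvisited, exactly as the text observes.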
1.3 Compression via Self-tiling
We now describe how compression of an image is done, i.e. given a digitised image μ on the screen how to find an IFS with probabilities
(f1, f2, . . . , fN; p1, p2, . . . , pN)
which represents that image closely. Details of the method can be found in [BH93, page 113].
The aim is to find N contracting affine transformations fi and probabilities pi (i = 1, 2, . . . , N) such that the image represented by the measure

    M(μ) = p1 μ ∘ f1^{-1} + p2 μ ∘ f2^{-1} + . . . + pN μ ∘ fN^{-1},

where M is the Markov operator, is visually close to the original image μ. Suppose for now that we have achieved this, i.e. assume we have an IFS with probabilities

(f1, f2, . . . , fN; p1, p2, . . . , pN)

such that M(μ) ≈ μ. Since the unique invariant measure μ* of the IFS satisfies M(μ*) = μ*, one can infer that μ* ≈ μ. This means that the invariant measure of the IFS is close to the original image; in other words the above IFS with probabilities gives a good encoding of μ. Therefore, we have to find

(f1, f2, . . . , fN; p1, p2, . . . , pN)

such that

    M(μ) = p1 μ ∘ f1^{-1} + p2 μ ∘ f2^{-1} + . . . + pN μ ∘ fN^{-1} ≈ μ.

Each term pi μ ∘ fi^{-1} in the above sum represents a copy of the original image transformed by the map fi and attenuated in brightness by the factor pi. Therefore the above equation represents a self-tiling, or a collage, of μ. One proceeds by first assuming all probabilities are zero except p1. Then f1 and p1 are chosen such that p1 μ ∘ f1^{-1} approximates a part of the image μ. Next p2 is allowed to be non-zero, and f2 and p2 are chosen so that p1 μ ∘ f1^{-1} + p2 μ ∘ f2^{-1} approximates a larger part of the image. The procedure is repeated until the whole image μ is covered by such tiles. Then the probabilities are normalised so that they add up to one. The resulting (f1, f2, . . . , fN; p1, p2, . . . , pN) gives an encoding of the image μ.
This technique for image compression was originally used in software developed by M. F. Barnsley, A. Sloan and L. Reuter (see [Bar88, page 337]). Their system contains two subsystems, Collage and Seurat. Collage finds the IFS code of a given image using the above method. Seurat then decodes the image starting from the IFS code; for this it uses the random iteration algorithm described previously. Another program, VRIFS [BH93, page 114], using a more sophisticated version of this technique, has also been developed by M. F. Barnsley and A. Sloan; see the above book for more details and colour plates.
In order to illustrate the idea we give a very primitive example. Figure 2 shows a fern (to be considered as a black and white image) on the left, a collage of four tiles with the following IFS code in the middle, and the decompressed image on the right.
f a b c d g h p
1 -0.55 0.00 0.00 0.55 0.70 0.13 0.24
2 0.75 0.00 0.00 0.75 0.14 0.25 0.46
3 0.50 0.12 0.00 0.60 0.24 0.09 0.25
4 0.20 0.008 0.00 0.30 0.35 0.04 0.05
The invention may be carried into practice in a number of ways, and two specific algorithms will now be described, the probabilistic power domain algorithm (section 2) and the Plotkin power domain algorithm (section 3), by way of examples, with reference to the drawings, in which:
• Figure 1 shows an image created from an IFS with probabilities, along with the corresponding support;
• Figure 2 shows an image to be encoded, the tiling for an IFS code, and the corresponding decompressed image;
• Figure 3 illustrates in schematic form a tree which is produced by the algorithm of the present invention;
• Figure 4 represents a branch of the tree of Figure 3; and
• Figure 5 illustrates schematically the parallelograms making up the tree.

2 The Probabilistic Power Domain Algorithm
The probabilistic power domain algorithm generates the invariant measure of an IFS with probabilities, i.e. it decodes an image. It is therefore an alternative to the greyscale photo copy algorithm and the random iteration algorithm described in the last section.
2.1 Description of the Algorithm
We will now describe the algorithm. Suppose (f1, f2, . . . , fN ; p1, p2, . . . , pN) is an IFS with probabilities. We want to generate the image μ* corresponding to the IFS by computing μ*(z) for each pixel z.
Assume that the contractivity of fi is si (0 ≤ si < 1), which can be calculated in terms of the coefficients of fi as given in the previous section. Let s be the maximum value of si for i = 1, . . . , N, i.e. s = max_{1≤i≤N} si. Then s also satisfies 0 ≤ s < 1. As before, we can assume that each contracting affine transformation fi (i = 1, 2, . . . , N) maps the unit square X into itself.
Consider the tree, shown in Figure 3, which for convenience has been depicted upside down.
The root of the tree is the unit square X at depth zero. The immediate descendants or children of X, situated at depth one, are the images of X under the maps f1, f2, . . . , fN, which we denote simply by f1X, f2X, . . . , fNX. Since an affine transformation sends parallel lines to parallel lines, each fiX is a parallelogram, contained in X by assumption; and its sides are at most of length si. Note that these parallelograms may intersect. At the next depth, the children of fiX (for each i = 1, . . . , N) are the images of the depth one parallelograms f1X, f2X, . . . , fNX under the map fi. These are denoted therefore by fif1X, fif2X, . . . , fifNX. Each fifjX is a smaller parallelogram with sides at most of length sisj and is contained in fiX. Similarly the next levels of the tree are constructed. A typical node of the tree at depth n is a small parallelogram denoted by f_{i1}f_{i2} · · · f_{in}X, with sides at most of length s_{i1}s_{i2} · · · s_{in} ≤ s^n, which is contained in its parent parallelogram f_{i1}f_{i2} · · · f_{i(n-1)}X.
We give each of the parallelograms in the tree a weight. The unit square X is given weight one. This weight is distributed among the parallelograms on the next level by giving each fiX weight pi. The weight pi is then redistributed among the children of fiX by giving fifjX weight pipj. Note that

    pip1 + pip2 + . . . + pipN = pi(p1 + p2 + . . . + pN) = pi,

so that the total weight is always preserved. Generally, the parallelogram f_{i1}f_{i2} · · · f_{in}X is assigned the weight p_{i1}p_{i2} · · · p_{in}.
Each branch of the tree eventually shrinks to a pixel. In fact, if s_{i1}s_{i2} · · · s_{in} is of the order of the size of a pixel, i.e. of the order of 1/r where r is the resolution of the screen, then we can identify the node f_{i1}f_{i2} · · · f_{in}X with a pixel, and the branch shown in Figure 4 then terminates at this node, since a child of any pixel is clearly the same pixel. This terminating node, represented by that pixel, is therefore a leaf of the tree and it has weight p_{i1}p_{i2} · · · p_{in}. Clearly when n = [−log r / log s] + 1, where [a] is the greatest integer less than or equal to a, all parallelograms on level n have already shrunk to pixels, i.e. the tree will have finite depth n and all the leaves of the tree will have depth less than or equal to n. We can now explain how μ* is computed. The value μ*(z) for a pixel z is given simply by the sum of the weights of all the leaves which are represented by z.

The algorithm therefore traverses the finite tree in some specified way in order to find the leaves and their weights. Then, for each pixel, the weights of the leaves represented by that pixel are summed up to give the total weight of each pixel, which determines the normalised measure μ*.
Clearly then, the algorithm terminates, without needing to specify any fixed number of iterations as one has to do in the case of the other two prior art algorithms.
Figure 5 illustrates more graphically what is happening at each branch of the tree. Starting with the unit square X, the individual mappings (three in this case) are applied to create the three large parallelograms f1X, f2X and f3X. Each of the three mappings is then applied again to each of the three parallelograms. For simplicity, the repeated operation is shown only for the mapping f1. Applying the mapping f1 to the three parallelograms f1X, f2X and f3X produces the three smaller parallelograms f1f1X, f1f2X and f1f3X. The procedure is repeated, down every branch, until every parallelogram has shrunk to the size of a single pixel.
Note in Figure 5 that all of the three larger parallelograms fall entirely within the unit square X, and all of the three smaller parallelograms, obtained by applying the mapping f1 to the three larger parallelograms, lie within the parallelogram f1X.
2.2 An Implementation of the Algorithm
An efficient implementation of this algorithm on a PC has been carried out in [Kak93, Nik93]. A pseudo-code for this implementation is given at section 2.2.1 below.
The algorithm makes use of the fact that it is not necessary to determine the vertices of all the parallelograms, such as those shown in Figure 5. It will be noticed that if x0 is taken to be the mid-point of X, so that x0 = (1/2, 1/2) if X is the unit screen, then for any given mapping x0 will be mapped to the exact centre of the corresponding parallelogram. So, for example, as shown in Figure 5, x0 is the central point of the unit square X, and on the application of the mapping f1 it goes to f1x0, which is the central point of the parallelogram f1X. Applying the f1 mapping again moves the point to f1f1x0, which is the central point of the parallelogram f1f1X. As the parallelogram shrinks, the point always remains at its centre. Accordingly, when the sides of the parallelogram have shrunk to the size of a pixel, one can determine which pixel has been converged to merely by determining the position of the parallelogram's central point. To put it more precisely, if x0 is taken as the mid-point of X, then f_{i1}f_{i2} · · · f_{in}x0 is always the mid-point of f_{i1}f_{i2} · · · f_{in}X. If the branch actually terminates at this node, then the pixel represented by f_{i1}f_{i2} · · · f_{in}x0 gives us the leaf of the branch.
The pseudo-code below makes use of a recursive procedure "PRODUCE" to build up the tree. The procedure takes as its arguments
(j, [x1, x2, . . . , xN], [d1, d2, . . . , dN], [w1, w2, . . . , wN]) where j is one of 1, 2, . . . , N, except that initially it is set to zero. [x1, x2, . . . , xN] is an array of pairs of real numbers representing the co-ordinates of the central points of N parallelograms of the tree. At each step, the parallelograms are recursively defined such that the use of memory is minimised. The upper bounds for the lengths of the sides of these parallelograms and their weights are represented respectively by the two arrays [d1, d2, . . . , dN] and [w1, w2, . . . , wN] of real numbers. Finally, the r x r array of real numbers μ*(zkl) gives the weights assigned to the pixels zkl for 1 ≤ k, l ≤ r. It will be appreciated that the algorithm does not make use of the positions of the parallelograms at all: merely the positions of their central points, and the bounds on the lengths of their sides.
2.2.1 A Pseudo-code for the Algorithm
0. Start
1. Compute all contractivity factors si
2. Call procedure produce(0,[x0,x0, ...,x0], [1, 1,..., 1],[1,1,...,1])
3. For k = 1,...,r and l = 1,...,r, plot pixel zkl with weight μ*(zkl).
4. End.
where:
procedure produce(j, [x1, x2, . . . , xN], [d1, d2, . . . , dN], [w1, w2, . . . , wN]) {
    for i = 1, . . . , N do {
        x'i = fi(xj);        new position
        d'i = si * dj;       bound on length of side
        w'i = pi * wj;       weight of this node
    }
    for i = 1, . . . , N do {
        if (d'i > 1/r) then
            call procedure produce(i, [x'1, x'2, . . . , x'N], [d'1, d'2, . . . , d'N], [w'1, w'2, . . . , w'N])
        else
            μ*(x'i) = μ*(x'i) + w'i;    add the leaf weight to the pixel containing x'i
    }
}
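The pseudo-code can be rendered in Python roughly as follows. This is a sketch, not the patent's implementation: the leaf test is applied at the current node rather than while generating children (a minor restructuring), and each map is passed as a hypothetical coefficient tuple (a, b, c, d, g, h).

```python
import math

def decompress(maps, probs, r):
    """Probabilistic power domain algorithm for an IFS with
    probabilities; maps are (a, b, c, d, g, h) coefficient tuples."""
    def contractivity(a, b, c, d):
        al, be = (a * a + c * c) / 2, (b * b + d * d) / 2
        ga = a * b + c * d
        return math.sqrt(al + be + math.sqrt((al - be) ** 2 + ga ** 2))

    s = [contractivity(a, b, c, d) for (a, b, c, d, _, _) in maps]
    mu = [[0.0] * r for _ in range(r)]               # weights mu*(z_kl)

    def produce(x, y, d, w):
        if d <= 1.0 / r:                             # parallelogram pixel-sized: a leaf
            k, l = min(int(x * r), r - 1), min(int(y * r), r - 1)
            mu[k][l] += w                            # accumulate total pixel weight
            return
        for i, (a, b, c, dd, g, h) in enumerate(maps):
            produce(a * x + b * y + g, c * x + dd * y + h, s[i] * d, probs[i] * w)

    produce(0.5, 0.5, 1.0, 1.0)                      # root: centre of unit square X
    return mu
```

Only a centre point, a side-length bound and a weight are carried per node, and the recursion terminates by itself once every branch has shrunk below pixel size; no iteration count is fixed in advance.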
2.3 Speed and Complexity
The probabilistic power domain algorithm, by efficiently traversing the finite tree, uses the least number of computations to make the best possible digitised approximation to the invariant measure of an IFS with probabilities.
We can obtain a simple complexity analysis for the algorithm. If s = max_{1≤i≤N} si and s' = min_{1≤i≤N} si, then it is easy to see that the height of the tree is between h' and h, with

    h = [−log r / log s] + 1

and

    h' = [−log r / log s'] + 1,

where [a] is the greatest integer less than or equal to a. A simple calculation shows that at each stage of computation 10 arithmetic operations have to be performed. It follows that the total number of computations performed by the algorithm before it stops is between 10(N + N^2 + . . . + N^{h'}) and 10(N + N^2 + . . . + N^h), i.e. between

    10N(N^{h'} − 1)/(N − 1)    and    10N(N^h − 1)/(N − 1).
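As a quick worked check of this complexity analysis, the bounds are easy to evaluate numerically; the values N = 4, r = 512, and contractivities between 0.3 and 0.76 are hypothetical illustration choices.

```python
import math

def height(r, s):
    # [a] denotes the greatest integer <= a
    return math.floor(-math.log(r) / math.log(s)) + 1

N, r = 4, 512
h = height(r, 0.76)        # deepest possible leaf, from s = max s_i
h_min = height(r, 0.3)     # shallowest possible leaf, from s' = min s_i
ops_low = 10 * N * (N ** h_min - 1) // (N - 1)
ops_high = 10 * N * (N ** h - 1) // (N - 1)
```

Note how strongly the operation count depends on the height: small contractivities shrink the tree dramatically, which is the point made below about natural objects.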
Since one cannot have a complexity analysis for the greyscale photo copy algorithm or the random iteration algorithm (in both cases one has to specify the number of iterations by trial and error), we cannot make a direct analytical comparison between the probabilistic power domain algorithm and the other two. However, on all inputs we have tried, the probabilistic power domain algorithm is, often up to several times, faster than the random iteration algorithm in producing a good quality image.
Finally we point out that the probabilistic power domain algorithm has even more advantages for decompressing natural objects. Note that since for a given N the total number of computations is at most about 10N^h, the smaller the height h the faster the computational process will be. But to model natural objects with an IFS with probabilities, one needs a relatively large number N of transformations fi which have small contractivity factors si, i.e. the IFS will have a small contractivity s = max_{1≤i≤N} si. This means that the height of the tree h = [−log r / log s] + 1 will be small and, therefore, the speed of convergence fast. Consequently, the use of the probabilistic power domain algorithm, in comparison with the other two algorithms, is even more effective in the case of natural objects.
3 The Plotkin Power Domain Algorithm
An IFS (f1, f2, . . . , fN) generates a black and white image which is in fact the support of the invariant measure of (f1, f2, . . . , fN; p1, p2, . . . , pN) for any set of probabilities pi assigned to the maps fi. This black and white image can be obtained using the photo copy algorithm [BH93].
In a similar way to the probabilistic power domain algorithm, the Plotkin power domain algorithm decodes an IFS to give a black and white image. The same tree given in section 2 is used. This time, however, there are no weights assigned to the nodes. The algorithm simply determines the leaves of the tree and plots the corresponding pixels on the screen. Again the algorithm terminates without needing to specify a number of iterations.
When a black and white image is to be generated, one knows that a pixel forms part of the image if that pixel corresponds to at least one of the leaf nodes in the tree. Accordingly, the pixel can be plotted as soon as a leaf node is reached. The pseudo-code for the Plotkin power domain algorithm is similar to that set out at section 2.2.1 above, except that all reference to node weights is omitted. In addition, the pixel may be plotted at the line currently devoted to calculating the total pixel weight, so that the final plotting step (step 3) may be omitted.
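A hedged sketch of the Plotkin variant follows, using the same hypothetical coefficient-tuple convention (a, b, c, d, g, h) for the maps; it traverses the same tree but records only which pixels are reached, with no weights.

```python
import math

def plotkin_decompress(maps, r):
    """Plotkin power domain algorithm: returns the set of pixels
    (the black-and-white support) reached by leaves of the tree."""
    def contractivity(a, b, c, d):
        al, be = (a * a + c * c) / 2, (b * b + d * d) / 2
        ga = a * b + c * d
        return math.sqrt(al + be + math.sqrt((al - be) ** 2 + ga ** 2))

    s = [contractivity(a, b, c, d) for (a, b, c, d, _, _) in maps]
    plotted = set()

    def produce(x, y, d):
        if d <= 1.0 / r:                           # leaf: plot the pixel immediately
            plotted.add((min(int(x * r), r - 1), min(int(y * r), r - 1)))
            return
        for i, (a, b, c, dd, g, h) in enumerate(maps):
            produce(a * x + b * y + g, c * x + dd * y + h, s[i] * d)

    produce(0.5, 0.5, 1.0)                         # root: centre of unit square X
    return plotted
```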
Both the probabilistic power domain algorithm and the Plotkin power domain algorithm share a number of advantages: they terminate in finite time on any digitised screen without the necessity of fixing the number of iterations in advance; there is a simple complexity analysis (given in section 2.3 above); and the algorithms produce good quality images normally several times faster than in the prior art.

References
[Bar88] M. F. Barnsley. Fractals Everywhere. Academic Press, 1988.
[BD85] M. F. Barnsley and S. Demko. Iterated function systems and the global construction of fractals. Proceedings of the Royal Society of London, A399:243-275, 1985.
[BH93] M. F. Barnsley and L. P. Hurd. Fractal Image Compression. AK Peters,
Ltd, 1993.
[BJRS88] M. F. Barnsley, A. Jacquin, L. Reuter, and A. D. Sloan. Harnessing chaos for image synthesis. Computer Graphics, 1988. SIGGRAPH proceedings.
[BS88] M. F. Barnsley and A. D. Sloan. A better way to compress images. Byte magazine, pages 215-223, January 1988.
[Kak93] A. Kakos. Stochastic Learning Automata, IFSs and the Generation of Fractal Objects. Master's thesis, Imperial College, London, September 1993. Forthcoming.
[Nik93] N. Nikolaou. Fractal Image Processing on the Digitised Screen. Master's thesis, Imperial College, London, September 1993. Forthcoming.

CLAIMS:
1. A method of decompressing a compressed image stored in IFS format having N maps f1... fN, the image to be decompressed to a resolution corresponding to a pixel size q, the method comprising:
(a) selecting a square area A such that each of the N maps f1... fN acts
to transform A into itself;
(b) selecting a root point or points x0 within the area A;
(c) (i) applying each of the maps f1... fN to the or each point x0 to
produce a first-level series of nodes f1x0... fNx0 representing the ends of branches of an emerging tree;
(ii) determining for each first-level node the maximum size of
the parallelogram within A in which the node must fall, and defining the node as a leaf node if the maximum size is q or less;
(d) (i) applying each of the maps f1... fN to each of the first-level
nodes f1x0... fNx0 to produce a second-level series of nodes f1f1x0...
fNfNx0 representing the ends of branches of the emerging tree;
(ii) determining for each second-level node the maximum size of the parallelogram within A in which the node must fall, and defining the node as a leaf node if the maximum size is q or less;
(e) repeating (d), incrementing the level each time, until all of the
nodes are defined as leaf nodes;
(f) calculating each pixel of the decompressed image as a function of
those leaf nodes which correspond to the said pixel; and
(g) outputting or storing the decompressed image.
2. A method as claimed in Claim 1 in which a single root point x0 is chosen at (b).
3. A method as claimed in Claim 1 or Claim 2 in which the maps f1... fN are applied at (d)(i) only to those first-level nodes which have not previously been defined as leaf nodes.
4. A method as claimed in any one of Claims 1 to 3 in which the tree is searched level by level, the clauses (b) to (e) of Claim 1 being sequential.
5. A method as claimed in any one of Claims 1 to 3 in which the tree is searched branch by branch, each branch being followed until it results in a leaf node.
6. A method as claimed in Claim 4 or Claim 5 in which the tree is searched using a recursive algorithm.
7. A method as claimed in any one of the preceding claims in which the decompressed image is black and white, each pixel in the decompressed image being white if one or more leaf nodes corresponds to that pixel, and black otherwise; or vice versa.
8. A method as claimed in any one of the preceding claims in which the compressed image is stored in IFS format with probabilities for a grey-scale or coloured decompressed image, each node in the tree having a node weight associated with it, the node weight being the product of the probabilities of the maps making up the said node.
9. A method as claimed in Claim 8 in which the colour or density of each pixel in the decompressed image is a function of the sum of the node weights of all the leaf nodes corresponding to the said pixel.
10. A method as claimed in any one of the preceding claims in which the root point x0 is chosen to be at the centre of the area A.
11. A method as claimed in any one of the preceding claims in which the maximum size of the parallelogram is determined at (c)(ii) and (d)(ii) as the product of the contractivities of the maps making up the corresponding node.
12. A method of decompressing a compressed image stored in IFS format, the decompressed image being black and white.
13. A method of decompressing a compressed image stored in IFS format with probabilities, the decompressed image being a greyscale image or a colour image.
14. A method as claimed in Claim 1 including rescaling the area A to the unit square between (a) and (b) of Claim 1.
PCT/GB1994/002000 1993-09-17 1994-09-14 Power domain algorithms for fractal image decompression WO1995008161A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU76199/94A AU7619994A (en) 1993-09-17 1994-09-14 Power domain algorithms for fractal image decompression

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9319314.2 1993-09-17
GB939319314A GB9319314D0 (en) 1993-09-17 1993-09-17 Power domain algorithms for fractal image decompression

Publications (1)

Publication Number Publication Date
WO1995008161A1 true WO1995008161A1 (en) 1995-03-23

Family

ID=10742177

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1994/002000 WO1995008161A1 (en) 1993-09-17 1994-09-14 Power domain algorithms for fractal image decompression

Country Status (3)

Country Link
AU (1) AU7619994A (en)
GB (1) GB9319314D0 (en)
WO (1) WO1995008161A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19641157A1 (en) * 1996-10-07 1998-04-16 Thomson Brandt Gmbh Procedure for checking the convergence in fractal image coding

Non-Patent Citations (2)

Title
EDALAT A: "Dynamical systems, Measures and Fractals via Domain Theory", THEORY AND FORMAL METHODS 1993, 1993, pages 82 - 99 *
EDALAT A: "Power Domain Algorithms for Fractal Image Decompression", DEPARTMENT OF COMPUTING, IMPERIAL COLLEGE, vol. 93/44, 1993, LONDON, pages 1 - 12 *

Cited By (2)

Publication number Priority date Publication date Assignee Title
DE19641157A1 (en) * 1996-10-07 1998-04-16 Thomson Brandt Gmbh Procedure for checking the convergence in fractal image coding
US5978516A (en) * 1996-10-07 1999-11-02 Deutsche Thomson-Brandt Gmbh Method for checking convergence in fractal image coding

Also Published As

Publication number Publication date
AU7619994A (en) 1995-04-03
GB9319314D0 (en) 1993-11-03


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW NL NO NZ PL PT RO RU SD SE SI SK TJ TT UA US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE MW SD AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642