Method and system for managing multi-level caches of an edge server in a CDN
Multi-level cache management method and system for an edge server in a CDN
Technical field
The present invention relates to the field of communication technology, and in particular to a multi-level cache management method and system for an edge server in a CDN (Content Delivery Network).
Background art
CDN technology
Network congestion on the Internet is increasingly severe. How to alleviate congestion and improve the speed at which users obtain information has become a major problem for many enterprises and service providers. Increasing bandwidth solves only one aspect of the problem: access is also affected by factors such as routing congestion and delay along the transmission path and the processing capacity of the website server, and these are closely related to the distance between the visitor and the server. Even after every ISP increases its network bandwidth, if the visitor is far from the website, communication between them still passes through many routing hops and processing stages, so network delay is unavoidable. At the same time, users demand ever higher network performance (response time, reliability of the content and services provided, and so on). CDN (Content Delivery Network) technology emerged to meet these demands.
A CDN is also referred to as an MDN (Media Delivery Network). It is a value-added network built on top of the existing IP network infrastructure, a network architecture deployed at the application layer. Vendors of CDN products offer their own solutions, and CDN operators differ in implementation, but the function CDN technology realizes is the same: it combines multi-point load balancing, routing, and caching techniques, and uses intelligent distribution to assign content to multiple nodes according to the location of the visiting users, following the principle of nearby access.
In a traditional IP network, a user request points directly at the origin server identified by the network address. A CDN adds a service layer that supplements and extends the Internet: frequently accessed content is pushed as close to users as possible, providing the new capability of forwarding traffic based on content and directing requests to the best server.
In its initial period, CDN technology was mainly used to accelerate access to traditional static Web content. With the continued development of Internet and multimedia technology, the proportion of image, audio, and video services keeps growing, and the appearance of streaming services made it possible to distribute multimedia over the Internet. However, streaming media occupies large bandwidth for a long duration, while the network bandwidth available at the server side is limited; even a high-end server can therefore serve only a few hundred clients, which does not scale economically. In addition, if a client is far from the server, metrics such as delay, jitter, bandwidth, and packet loss during streaming become unpredictable, and QoS cannot be guaranteed. CDN technology thus became an effective solution for delivering streaming media.
Fig. 1 is the CDN architecture diagram. As shown in Fig. 1, a central server 10 is connected to multiple edge servers 20, and each edge server in turn connects to multiple terminal devices 30 below it. Content frequently accessed by users is stored on the edge server 20 nearest to them, which improves the users' content access speed.
Cache technology
Cache technology originated in processor design, where a cache memory was used to bridge the speed mismatch between the CPU and main memory. To reduce Internet traffic, server load, and user-perceived delay, caching was introduced into the World Wide Web; in Web applications, caching page output can significantly improve system performance. Likewise, the key technology of a CDN (Content Delivery Network) is the cache: hot content, including web pages, pictures, audio, and video, is cached on edge servers close to users. On the one hand this relieves the load on the central server and reduces backbone bandwidth usage, while at the same time speeding up responses to users and improving the user experience.
The key cache issue is how to improve the hit rate, i.e. how to let a limited cache space satisfy as many user access requests as possible. Many factors influence the hit rate, including the user access pattern, the cache replacement algorithm, the cache size, and the size of cached objects. The user access pattern is the statistical law of user access behavior. For example, Web object accesses tend to exhibit temporal locality: an object accessed now is likely to be accessed again in the near future. IPTV accesses, as another example, tend to follow the 80/20 rule, i.e. 20% of the users account for 80% of the content accesses. The cache replacement algorithm determines by what principle content is replaced when the cache is full, so it is closely related to the user access pattern. In addition, the cache size and the size of cached objects also influence the hit rate: the larger the cache, the higher the hit rate tends to be; the larger the cached objects, the lower the hit rate tends to be. Another cache problem is consistency: if an object cached locally has been modified on the origin server, the cached copy becomes invalid, and the user's access request must be served from the origin server again.
For managing cached objects, a cache typically uses the object's URI (Uniform Resource Identifier), or a value derived from the URI by some computation (such as an MD5 code), as the identifier. On receiving a user's object request, the URI is matched, directly or indirectly (after the computation), against the cache management structure. A match indicates the object is cached; if the cached object is valid, the cache can answer the user's request directly. Otherwise the object must be fetched from the origin server to answer the user, and is cached at the same time.
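As a minimal illustration of the identifier scheme just described (the toy cache structure and names are ours, not the patent's), either the URI itself or its MD5 digest can serve as the lookup key:

```python
import hashlib

def cache_key(uri: str) -> str:
    # Indirect identifier: an MD5 code computed from the object's URI,
    # one of the options mentioned above.
    return hashlib.md5(uri.encode("utf-8")).hexdigest()

# Toy cache management structure: key -> cached object bytes.
cache = {cache_key("http://example.com/a.jpg"): b"<jpeg bytes>"}

def lookup(uri: str):
    # A match means the object is cached; None means it must be
    # fetched from the origin server (and then cached).
    return cache.get(cache_key(uri))
```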
Solid-state drive technology
A solid-state drive (Solid State Disk or Solid State Drive), also called an electronic hard disk or solid-state electronic disk, is a hard disk composed of a control unit and solid-state storage elements (DRAM or FLASH chips). Because an SSD has none of the rotating media of a conventional hard disk, its shock resistance is excellent. SSD storage media fall into two kinds: one uses flash memory (FLASH chips) as the storage medium, the other uses DRAM. Compared with a traditional mechanical hard disk, an SSD has the following advantages:
Access speed: an SSD uses flash memory as the storage medium, reads faster than a mechanical disk, and has a seek time of almost zero. Used as a system disk, it can noticeably speed up operating-system and application start-up.
Shock resistance: having no mechanical structure at all, an SSD is far less affected by vibration and impact, with no need to worry about unavoidable data loss caused by shocks.
Heat and power consumption: unlike a conventional hard disk, an SSD has no high-speed platter rotation, so it generates much less heat than a mechanical disk, and the power consumption of FLASH chips is extremely low, which for laptop users means longer battery life.
Noise: an SSD has no platter mechanism, so there is neither head-arm seek noise nor the sound of high-speed rotation; an SSD in operation produces no noise at all.
However, although SSD performance is very attractive and its advantages are numerous, its shortcomings, such as price, capacity, and the limited number of erase/write cycles, cannot be overlooked either:
Write speed: write speed is the bottleneck of most current SSD products, and is especially inadequate for small files; this is a characteristic of the flash chips themselves.
Service life: flash chips have a limited lifetime, and their mean working life is well below that of a mechanical hard disk, which brings a certain risk to the SSD as a storage medium.
Cost-effectiveness: SSDs remain expensive; the per-gigabyte price is several to ten times that of a conventional hard disk, which ordinary consumers may not be able to afford.
Edge storage technology
Cached content is stored on edge servers (Edge Server), whose storage capacity is typically smaller than that of the central server (Center Server). An edge server can use storage media of various characteristics and interface speeds, such as RAM, SSD, and HDD. Compared with HDDs, RAM and SSD media offer fast access and high performance, but are expensive; HDDs perform worse but are cheap. To balance cost and performance, a CDN operator may mix several different storage media.
An edge server uses multi-level storage as a cache for the central server, which involves cooperation between the cache levels and control over the cached content. The common approach is to use a large amount of cheap storage as the primary medium and high-performance storage as a cache in front of it, improving overall system performance. The content of the multiple cache levels is managed by multiple logic controllers, and queries and requests for cached content proceed by trial: a content query first checks level one, and only if nothing is found does it check level two. A typical flow is shown in Fig. 2.
Content across the cache levels can be mutually exclusive (the content of the level-1 and level-2 caches differs) or inclusive (the level-1 content is a subset of the level-2 content). Cooperation between the levels is accomplished through signaling between the multiple logic controllers. When cached content moves from the level-1 cache to the level-2 cache, the level-1 controller sends a command notifying the level-2 controller of the content to cache, and at the same time deletes the content from the level-1 cache.
Shortcoming of the prior art: current edge-server caching schemes using hybrid storage media apply a single, rigid management strategy; both the initial deployment of content and its later migration are inflexible, so the performance of the multi-level cache cannot be fully exploited.
Summary of the invention
Embodiments of the present invention provide a multi-level cache management method and system for an edge server in a CDN, which manage the multi-level cache system with flexible strategies so that its performance is fully exploited.
To achieve the above object, an embodiment of the present invention provides a multi-level cache management method for an edge server in a content delivery network (CDN). The method includes: obtaining content to be stored from a central server; assigning the content to be stored to a cache of a corresponding level according to a preset strategy, and storing the content to be stored in the cache of that level.
An embodiment of the present invention also provides a multi-level cache management method for an edge server in a CDN. The method includes: receiving a user's content request, the request containing information on the content the user needs; determining whether the content exists in the multi-level cache; if it does not exist, obtaining the content from the central server, assigning it to a cache of a corresponding level according to a preset strategy, and storing it in the cache of that level; and transmitting the stored content to the user.
An embodiment of the present invention also provides a multi-level cache management system for an edge server in a CDN. The system includes a controller and a multi-level cache connected to the controller. The controller includes: a receiving unit for receiving a user's content request, the request containing information on the content the user needs; a processing unit for determining whether the content exists in the multi-level cache and, if it does not, obtaining the content from the central server, assigning it to a cache of a corresponding level according to a preset strategy, and storing it in the cache of that level; and a transmitting unit for transmitting the stored content to the user.
An embodiment of the present invention also provides a multi-level cache management system for an edge server in a CDN. The system includes a controller and a multi-level cache connected to the controller. The controller includes: an acquiring unit for obtaining content to be stored from the central server; and an allocation unit for assigning the content to be stored to a cache of a corresponding level according to a preset strategy and storing it in the cache of that level.
The multi-level cache management method and system for an edge server in a CDN proposed by the embodiments of the present invention apply centralized control to the deployment of multi-level cache content, making both the initial deployment and the migration of content more flexible and fully exploiting the performance of the multi-level cache.
Brief description of the drawings
Fig. 1 is the CDN architecture diagram of the prior art;
Fig. 2 is the multi-level cache management flow chart of the prior art;
Fig. 3 is the system function architecture diagram of an embodiment of the present invention;
Fig. 4 is a flow chart (one) of the multi-level cache management method for a CDN edge server in an embodiment of the present invention;
Fig. 5 is a flow chart (two) of the multi-level cache management method for a CDN edge server in an embodiment of the present invention;
Fig. 6 is a flow chart of deploying new content during system operation in an embodiment of the present invention;
Fig. 7 is a flow chart of adjusting content positions on the multi-level cache media in an embodiment of the present invention;
Fig. 8 is an overall flow chart of cache management during system operation in an embodiment of the present invention;
Fig. 9 is a bar chart of an embodiment of the present invention comparing distributed videos by viewing probability;
Fig. 10 is a schematic diagram of the calculation principle for distributing videos by viewing probability in an embodiment of the present invention;
Fig. 11 is a functional block diagram (one) of the controller 201 in an embodiment of the present invention;
Fig. 12 is a functional block diagram (two) of the controller 201 in an embodiment of the present invention.
Embodiment
Embodiments of the present invention provide a multi-level cache management method and system for an edge server (Edge Server) in a CDN. The method and system manage the multi-level cache uniformly through a controller, improving overall system performance. Specifically, the caching system of the edge server includes a multi-level cache and a controller; the controller controls how data flows from the central server into the caches of each level and between the cache media of the levels. With a single query request to the controller, a user can determine whether content exists in the multi-level cache system and, if so, at which position. When requested content does not exist, the controller obtains it from the central server and assigns it to a cache of an appropriate level according to a preset strategy. The functional architecture of the whole system is shown in Fig. 3 (thick lines represent data flows, hatched lines represent signaling flows).
Content in the embodiments of the present invention includes text, images, music, video, and so on. Video content is typically divided into multiple fragments for storage, while other content is typically stored whole; the embodiments therefore introduce the caching principles for these two types of content separately. The multi-level cache management method of the embodiments includes the following specific methods: the initial deployment of content when the system starts; and, while the system runs dynamically, the methods by which content enters the caching system and flows between the cache levels.
To make the purpose, technical solution, and advantages of the embodiments of the present invention clearer, the technical solution is described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative work fall within the scope of protection of the invention.
Embodiment 1:
This embodiment provides a multi-level cache management method for an edge server in a CDN. The method targets the cache management of content other than video slice content.
Fig. 4 is a flow chart (one) of the multi-level cache management method for an edge server in a CDN according to this embodiment. The method is executed by the controller 201 shown in Fig. 3, and embodies the controller 201 actively managing the multi-level cached content. As shown in Fig. 4, the method includes:
S401, obtaining content to be stored from the central server;
S402, assigning the content to be stored to a cache of a corresponding level according to a preset strategy, and storing the content to be stored in the cache of that level.
Optionally, S402 includes: generating the value of the content to be stored; assigning the content to a cache of a corresponding level according to the ranking of its value, and storing it in the cache of that level.
Optionally, generating the value of the content to be stored in this embodiment includes: generating the value from the size S, the priority PR, and the resource consumption degree R of the content to be stored.
Specifically, in the initialization deployment stage, i.e. when no content is yet stored in the multi-level cache, the cache management method of this embodiment includes: storing the content to be stored into the caches of each level in descending order of value; once the cache space of one level is filled, continuing with the next level.
Specifically, in the dynamic operation stage, i.e. when the caches of each level in the system already store content, the cache management method of this embodiment includes: comparing, level by level, the value of the content to be stored with the minimum value of the content stored in each level. If the value of the content to be stored is higher than the minimum value stored in the level-m cache but lower than the minimum value stored in the level-(m-1) cache, and the level-m cache has remaining space, the content is stored in the level-m cache; if the level-m cache has no remaining space, the minimum-value content in the level-m cache is first moved to the level-(m+1) cache, and the content to be stored is then stored in the level-m cache.
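The dynamic-stage placement rule above can be sketched as follows. This is a simplified model (capacities are item counts rather than bytes, and the dict structures are our illustration, not the patent's):

```python
def place(content, levels, m=0):
    """Store `content` (a dict with a "value") into the highest cache
    level whose minimum stored value it exceeds; when that level is
    full, first demote its least valuable item to the next level."""
    if m >= len(levels):
        return False  # less valuable than everything cached: not stored
    level = levels[m]
    full = len(level["items"]) >= level["capacity"]
    worst = min(level["items"], key=lambda it: it["value"], default=None)
    if full and worst is not None and content["value"] <= worst["value"]:
        return place(content, levels, m + 1)  # belongs in a lower level
    if full:
        level["items"].remove(worst)
        place(worst, levels, m + 1)  # demote the minimum-value item
    level["items"].append(content)
    return True

levels = [{"capacity": 1, "items": []},   # e.g. SSD, level 1
          {"capacity": 2, "items": []}]   # e.g. HDD, level 2
for name, value in [("a", 5), ("b", 10), ("c", 1)]:
    place({"name": name, "value": value}, levels)
# "b" (value 10) ends up in level 1; "a" and "c" settle in level 2.
```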
Optionally, the dynamic operation stage of this embodiment also includes a process in which content flows between the cache levels, specifically: updating the values of the content stored in the multi-level cache; when the value of content stored in the level-(L+1) cache becomes higher than the minimum content value in the level-L cache, exchanging that content in the level-(L+1) cache with the minimum-value content in the level-L cache.
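The exchange step between adjacent levels (promoting a lower-level item whose updated value exceeds the upper level's minimum) might look like the sketch below; the list-of-dicts structures are our illustration:

```python
def rebalance(upper, lower):
    """Swap items between two adjacent cache levels until no item in
    the lower (L+1) level is worth more than the cheapest item in the
    upper (L) level, implementing the content-flow step described above."""
    while upper and lower:
        promote = max(lower, key=lambda it: it["value"])
        demote = min(upper, key=lambda it: it["value"])
        if promote["value"] <= demote["value"]:
            break  # already consistent: upper holds the higher values
        upper.remove(demote); lower.remove(promote)
        upper.append(promote); lower.append(demote)

level_L  = [{"uri": "/a", "value": 2}]   # e.g. SSD
level_L1 = [{"uri": "/b", "value": 7}]   # e.g. HDD, value just updated
rebalance(level_L, level_L1)             # "/b" is promoted, "/a" demoted
```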
Optionally, updating the values of the stored content in the multi-level cache in this embodiment includes: updating the value of stored content according to its size S, priority PR, resource consumption degree R, and access count C.
Fig. 5 is a flow chart (two) of the multi-level cache management method for an edge server in a CDN according to this embodiment. The method is executed by the controller 201 shown in Fig. 3, and embodies the management of multi-level cached content that the controller 201 triggers upon a content request from a terminal 30. As shown in Fig. 5, the method includes:
S501, receiving a user's content request, the request containing information on the content the user needs;
S502, determining whether the content exists in the multi-level cache;
S503, if it does not exist, obtaining the content from the central server, assigning it to a cache of a corresponding level according to a preset strategy, and storing it in the cache of that level; S504, transmitting the stored content to the user.
Optionally, in S503, assigning the content to a cache of a corresponding level according to the preset strategy includes: generating the value of the content; assigning the content to a cache of a corresponding level according to the ranking of that value.
Because the method of this embodiment operates in the dynamic operation stage, where the caches of each level already store content, S503 optionally includes: comparing, level by level, the value of the content with the minimum value of the content stored in each level; if the value is higher than the minimum value stored in the level-m cache but lower than the minimum value stored in the level-(m-1) cache, and the level-m cache has remaining space, storing the content in the level-m cache; if the level-m cache has no remaining space, first moving the minimum-value content of the level-m cache to the level-(m+1) cache and then storing the content in the level-m cache.
Optionally, in the dynamic operation stage, the method of this embodiment also includes a process in which content flows between the cache levels, specifically: updating the values of the content stored in the multi-level cache; when the value of content stored in the level-(L+1) cache becomes higher than the minimum content value in the level-L cache, exchanging that content with the minimum-value content in the level-L cache.
The multi-level cache management method of this embodiment is described in detail with a concrete example. All storage media on the edge server of this embodiment are arranged into different cache levels according to their performance; the classification rule can be preset or determined by user configuration. For example, for an edge server with RAM, a solid-state drive (SSD), and a conventional hard disk (HDD), the RAM is set as the level-1 cache, the SSD as the level-2 cache, and the HDD as the level-3 cache. The content with the highest value is stored in the level-1 cache, slightly lower-value content in the level-2 cache, and the lowest-value content in the level-3 cache.
The controller can be a single logic controller or a cluster of multiple logic controllers. The controller maintains the management information for the content cached in each level, including but not limited to the index and position of the content, and the information needed to judge its value (the value of content reflects the probability that it will be accessed in the future). The index of content can be identified by its URI (Uniform Resource Identifier). Table 1 shows the multi-level cache management information stored by the controller in this embodiment.
Table 1
In the initial stage of system operation, the controller deploys cached content onto the different storage media randomly or according to a certain strategy. For example: deployment by content size, storing small files on the SSD and large files on the HDD; or deployment by a preset content value, storing higher-value content on the SSD and lower-value content on the HDD, where the value can be set by the content provider or by the CDN operator; or deployment by the degree of resource consumption the content causes, placing resource-hungry content on the SSD and less demanding content on the HDD (several kinds of resource may be considered, e.g. network bandwidth: a miss on a large file consumes more network bandwidth than a miss on a small file); or filling the cache media level by level in order; or a combination of several such factors.
For content to be stored, assume the content size in this embodiment is S, the content priority is PR, and the content's resource consumption degree is R. The value formula of the content to be stored can be expressed as: H = R*PR/S.
The value of cached content needs to be maintained in real time after the content is accessed. Assume the content size is S, the content priority specified by the content provider is PR, the content's resource consumption degree is R, and the accumulated access count of the content is C. The value formula of stored content can be: H = C*PR*R/S.
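The two value formulas can be captured directly in code; the sample numbers below are ours, chosen only for illustration:

```python
def value_to_store(S, PR, R):
    # Value of content not yet cached: H = R * PR / S
    return R * PR / S

def value_stored(S, PR, R, C):
    # Value of cached content, refreshed after each access:
    # H = C * PR * R / S, where C is the accumulated access count
    return C * PR * R / S

# A smaller, higher-priority, more resource-hungry object scores higher,
# and each additional access raises the value of a stored object.
h_new = value_to_store(S=10, PR=2, R=5)      # 1.0
h_hot = value_stored(S=10, PR=2, R=5, C=3)   # 3.0
```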
To improve initial deployment efficiency, the values of all content on the central server can be calculated in advance and sorted; the content is then deployed directly into the multi-level cache in the sorted order, reducing repeated replacement.
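The pre-sorted initial deployment can be sketched as below (capacities are counted in items for brevity; a real deployment would track bytes):

```python
def initial_deploy(contents, capacities):
    """Rank all central-server content by value (highest first) and fill
    the cache levels in order, moving to the next level once the previous
    one is full; anything left over is not cached."""
    ranked = sorted(contents, key=lambda c: c["value"], reverse=True)
    levels, i = [], 0
    for cap in capacities:
        levels.append(ranked[i:i + cap])
        i += cap
    return levels

catalog = [{"name": "x", "value": 1},
           {"name": "y", "value": 3},
           {"name": "z", "value": 2}]
deployed = initial_deploy(catalog, capacities=[1, 2])  # e.g. SSD, HDD
```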
Fig. 6 shows the flow chart for deploying new content during system operation. Assume the caching system is a two-level system with the SSD as the level-1 cache and the HDD as the level-2 cache, and that all content is distributed over the cache media of each level in order of value.
As shown in Fig. 6, for content requested by a terminal but not hit, the controller notifies the central server that new content must be downloaded, and decides according to a certain strategy whether the missed content should be cached (if the value of the new content is even lower than the minimum value on the lowest-level cache, it is not cached in the system and is simply discarded after being streamed to the terminal), and on which level of cache media it should be cached.
During operation in this embodiment, the master controller must, according to the running situation, control the position migration of hit content requested by users, so as to fully exploit overall system performance. Fig. 7 shows the position-adjustment flow on the multi-level cache media when content is hit, for a two-level cache system with the SSD as the level-1 cache and the HDD as the level-2 cache. Fig. 8 shows the overall runtime flow for the same two-level cache system; in that flow, the step "decide whether to cache, and at which position" corresponds to the detailed flow of Fig. 6, while the steps "determine whether cached content should migrate" and "perform the position migration and update the index" correspond to the detailed flow of Fig. 7.
This embodiment manages the non-video-slice content in the multi-level cache in a centrally controlled manner, specifically including: the initial deployment of content when the system starts, the deployment of new content during system operation, and the migration of content between the cache levels. Thanks to centralized control, content management becomes more flexible and convenient, improving the performance of the multi-level cache.
Embodiment 2: This embodiment provides a multi-level cache management method for an edge server in a CDN, aimed at the multi-level cache management of video slice content. In video caching applications, a complete video is typically divided into multiple fragments for caching. Because each fragment is accessed with a different probability, the value of each fragment also differs and must be handled separately. The specific flow of the method follows Figs. 4-8; the difference is that the value of a video slice is generated differently from that of ordinary content.
In this embodiment, generating the value of a video fragment to be stored includes: generating, from the time period the fragment occupies within the complete video file, the probability P that the fragment will be watched; and generating the value of the fragment from its size S, priority PR, resource consumption degree R, and the watching probability P.
In this embodiment, updating the values of stored content in the multi-level cache includes: generating, from the time period a stored video fragment occupies within the complete video file, the probability P that the fragment will be watched; and updating the value of the stored fragment from its size S, priority PR, resource consumption degree R, access count C, and the watching probability P. A concrete example follows, describing in detail how the video fragments in the multi-level cache of a CDN edge server are managed.
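The text lists the inputs to a fragment's value (S, PR, R, P, and C for stored fragments) but not the exact combination; one plausible reading, weighting the ordinary-content formula by the watching probability, is sketched here as an assumption:

```python
def segment_value(S, PR, R, P, C=1):
    """Value of a video fragment: the ordinary-content value weighted by
    P, the probability that this fragment of the film is watched.
    ASSUMPTION: multiplying P into H = C*PR*R/S is our guess at the
    combination; the patent does not give it explicitly.
    C defaults to 1 for a fragment not yet cached."""
    return C * PR * R * P / S

# An opening fragment (small, high P) outranks a long late one (low P).
early = segment_value(S=6, PR=1, R=2, P=0.8)
late = segment_value(S=72, PR=1, R=2, P=0.1)
```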
All storage media on the edge server are arranged into different cache levels according to their performance. The classification rule can be preset or determined by user configuration. For example, for an edge server with RAM, a solid-state drive (SSD), and a conventional hard disk (HDD), the RAM can be set as the level-1 cache, the SSD as the level-2 cache, and the HDD as the level-3 cache.
The controller may be a single logic controller or a cluster of multiple logic controllers. The controller maintains the management information of all content cached in every cache medium (here the cached content consists of cut video slices), including the index and position information of each content item and the information used to judge its value (the value of a content item reflects the probability that it will be accessed in the future). The index of a content item is identified by its URI (Uniform Resource Identifier). Table 2 shows the management information stored in the controller of this embodiment.
Table 2
A video program is sliced according to some rule, of which there are many. Slicing may be uniform: for example, a video program with a total duration of 2 hours may be divided into 20 slices of about 6 minutes each. Because many users of a video program tend to watch only the earlier part, the viewing ratio of the various parts of a given film decreases from its beginning toward its end; slicing may therefore instead be non-uniform, with earlier slices made smaller than later ones, i.e. slices grow toward the end, so that the earlier part can be managed more precisely.
Table 3 is a practical slicing example:

Fragment number    Segment time range (minutes)
1                  0-6
2                  6-12
3                  12-24
4                  24-48
5                  48-120

Table 3
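For illustration only, the non-uniform slicing of Table 3 can be sketched as follows. The pattern in the table is that boundary times double (each slice is as long as everything before it) and the last slice absorbs the remainder; the exact stopping rule below is an assumption made to reproduce the table, not fixed by the method.

```python
def uneven_slice_boundaries(total, first=6):
    """Slice boundaries (minutes) that grow toward the end of the film:
    each boundary doubles the previous one, so earlier parts are sliced
    more finely, and the final slice absorbs the remaining duration."""
    boundaries = [0, first]
    # Keep doubling only while the remainder would still be at least as
    # long as the slice just added (assumed stopping rule).
    while total - boundaries[-1] * 2 >= boundaries[-1] * 2:
        boundaries.append(boundaries[-1] * 2)
    boundaries.append(total)
    return boundaries

# A 120-minute film with a 6-minute first slice reproduces Table 3:
# [0, 6, 12, 24, 48, 120]
```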
At the initial stage of system operation, the controller deploys cached content onto the different storage media at random or according to some strategy. For example, deployment may be by content size: small files are stored on the SSD and large files on the HDD. It may be by preset content value: higher-value content is stored on the SSD and lower-value content on the HDD, where the value may be set by the content provider or by the CDN operator. It may be by the degree of resource consumption incurred in producing the content: content that consumes more resources is stored on the SSD and content that consumes less on the HDD; resource consumption can take many forms, for example network bandwidth, since a miss on a large file consumes more bandwidth than a miss on a small file. Alternatively, the cache media may simply be filled level by level in order, or the decision may integrate several of these factors.
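As a minimal sketch of one of the strategies above (size-based initial deployment), small files go to the faster tier and large files to the slower tier. The function name and the 64 MB threshold are assumed tuning choices for illustration, not specified by the method.

```python
def initial_tier_by_size(size_bytes, ssd_threshold=64 * 1024 * 1024):
    """Size-based initial deployment: files smaller than the threshold
    are placed on the SSD tier, larger files on the HDD tier."""
    return "SSD" if size_bytes < ssd_threshold else "HDD"
```

In practice a controller would combine this with the content-value and resource-consumption criteria described above rather than use size alone.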
If a film is sliced for storage, different slices are watched with different probabilities according to their position in the whole film (the earlier part of most films is watched with higher probability than the later part, because many users lose patience); the watch probability of each slice is shown in Fig. 9. The watch probability function of a slice can be obtained from its position in the whole film (the probability embodies its value). The function resembles an exponential decay; in a simple treatment it is taken as a linear, inversely proportional function. For a film of total duration τ, the watch probability P1 at any moment t is then:
P1 = 1 - t/τ
Then, according to the position of the slice, say from moment t1 to moment t2, the cumulative probability P that the slice is watched is the area enclosed by the curve P1 and the horizontal axis t between t1 and t2, as shown in Fig. 10:
P = ((1 - t1/τ) + (1 - t2/τ)) * (t2 - t1)/2 = (1 - (t1 + t2)/(2τ)) * (t2 - t1)

For example, for a video slice to be stored whose size is S, whose priority specified by the content provider is PR, whose resource consumption degree for producing the content is R, and whose time span is t = t2 - t1, the value formula of the slice can be:
H = (1 - (t1 + t2)/(2τ)) * (t2 - t1) * PR * R/S.
The content value of a video slice already cached in the system needs to be maintained in real time after the content is accessed. For example, let the slice size be S, the priority specified by the content provider be PR, the resource consumption degree for producing the content be R, the accumulated access count of the content be C, and the time span of the slice be t = t2 - t1. Then the value formula of the slice can be:
H = (1 - (t1 + t2)/(2τ)) * (t2 - t1) * C * PR * R/S.
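For illustration, the two value formulas (for a slice to be stored and for a cached slice with accumulated access count C) can be sketched as below; the function and parameter names are chosen for readability and are not part of the method.

```python
def watch_probability(t1, t2, tau):
    """Cumulative probability that the slice covering [t1, t2] of a
    film of total duration tau is watched: the area under
    P1(t) = 1 - t/tau between t1 and t2."""
    return (1 - (t1 + t2) / (2 * tau)) * (t2 - t1)

def slice_value(t1, t2, tau, priority, resource_cost, size, hits=1):
    """Value H of a video slice. For a slice about to be stored,
    hits defaults to 1; for an already cached slice it is the
    accumulated access count C maintained by the controller."""
    return watch_probability(t1, t2, tau) * hits * priority * resource_cost / size
```

For equal priority, resource cost and size, an early slice of a 120-minute film is worth more than a late slice of the same length, matching the intuition of Fig. 9.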
During operation, the controller decides according to some strategy whether a requested content item that misses the cache should be cached and, if so, on which cache level, for example according to the value of the content, its size, its resource consumption degree, and the resource usage of each cache level. New content suitable for caching is thereby stored at a suitable position, and content no longer in use is replaced. The specific flow is again shown in Fig. 6.
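The miss-handling decision can be sketched minimally as follows, under the assumption that each cache level tracks its free space and the value of its cheapest resident item; real policies would also weigh resource consumption and per-level utilisation, as noted above.

```python
def place_on_miss(new_value, new_size, tiers):
    """Decide whether a missed content item should be cached, and where.
    tiers is an ordered list (fastest level first) of dicts with keys
    'free' (free bytes) and 'min_value' (value of the cheapest resident
    item). Returns the tier index, or None if not worth caching."""
    for level, tier in enumerate(tiers):
        if tier['free'] >= new_size:        # fits without any eviction
            return level
        if new_value > tier['min_value']:   # worth evicting the cheapest
            return level
    return None
```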
For content that hits the cache when requested by a user during operation, the master controller controls its position migration according to the running situation, to give full play to the overall performance of the system; the specific flow is again shown in Fig. 7. The entire flow of this embodiment is again shown in Fig. 8.
This embodiment manages the video slice content in the multi-level cache in a centrally controlled manner, specifically including: a method for the initial deployment of content when the system starts, a method for deploying new content during system operation, and a method for migrating content between the cache levels. Centralized control makes content management more flexible and convenient and improves the performance of the multi-level cache. Furthermore, in view of users' typical habits when watching video, this embodiment also designs a value assessment method for video slices, providing a basis for cache management that better matches actual conditions and making cache management more efficient.
Embodiment 3: This embodiment provides a multi-level cache management system for an edge server in a CDN; the system architecture is again shown in Fig. 3. The system includes a controller 201 and a multi-level cache 202 connected with the controller. Fig. 11 is a functional block diagram of the controller 201 of this embodiment. As shown in Fig. 11, the controller includes: a receiving unit 1101 for receiving a user's content request, the request containing information on the content the user needs to obtain; a processing unit 1102 for judging whether the content exists in the multi-level cache and, if it does not, obtaining the content from the central server, allocating a cache of the corresponding level to the content according to a preset strategy, and storing the content into the cache of that level; and a transmitting unit 1103 for transmitting the stored content to the user.
The processing unit 1102 includes: a value generation unit for generating the value of the content; and a storage control unit for allocating a cache of the corresponding level to the content to be stored according to the ranking of its value, and storing the content into the cache of that level.
Optionally, the value generation unit is further configured to update the value of content stored in the multi-level cache, and the storage control unit is further configured to, when the value of a content item stored in the level-(L+1) cache is higher than the minimum content value in the level-L cache, exchange that content item in the level-(L+1) cache with the minimum-value content in the level-L cache.
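The exchange rule between adjacent cache levels can be sketched as follows, modelling each level as a mapping from content URI to value; the function name and data representation are illustrative assumptions.

```python
def promote_if_worthwhile(upper, lower):
    """upper: level-L cache, lower: level-(L+1) cache, each a dict
    mapping content URI -> value. If the most valuable item in the
    lower (slower) level is worth more than the least valuable item
    in the upper (faster) level, swap the two items in place.
    Returns True if a swap was performed."""
    if not upper or not lower:
        return False
    best_low = max(lower, key=lower.get)
    worst_up = min(upper, key=upper.get)
    if lower[best_low] > upper[worst_up]:
        upper[best_low], lower[worst_up] = lower.pop(best_low), upper.pop(worst_up)
        return True
    return False
```

Repeated application of this rule drives high-value content toward the faster cache levels until no level-(L+1) item outvalues the cheapest level-L item.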
The specific cache management methods performed by the controller have been elaborated in Embodiments 1 and 2 and are not repeated here. Embodiment 4: This embodiment provides a multi-level cache management system for an edge server in a CDN; the system architecture is again shown in Fig. 3. The system includes a controller 201 and a multi-level cache 202 connected with the controller. Fig. 12 is a functional block diagram of the controller 201 of this embodiment. As shown in Fig. 12, the controller 201 includes: an acquiring unit 1201 for obtaining content to be stored from the central server; and an allocation unit 1202 for allocating a cache of the corresponding level to the content to be stored according to a preset strategy, and storing the content into the cache of that level.
The allocation unit 1202 of the controller includes: a value generation unit for generating the value of the content to be stored; and a storage control unit for allocating a cache of the corresponding level to the content to be stored according to the ranking of its value, and storing the content into the cache of that level.
Optionally, the value generation unit is further configured to update the value of content stored in the multi-level cache, and the storage control unit is further configured to, when the value of a content item stored in the level-(L+1) cache is higher than the minimum content value in the level-L cache, exchange that content item in the level-(L+1) cache with the minimum-value content in the level-L cache.
The multi-level cache management method and system for an edge server in a CDN proposed by the embodiments of the present invention apply centralized control and management to the deployment of multi-level cache content, making the initial deployment and migration of content more flexible and giving full play to the performance of the multi-level cache. The scheme balances storage cost against system performance and protects the original hardware investment. The system of the embodiments of the present invention may also provide active/standby backup of the controller to improve system reliability.
One of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments may be accomplished by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), or the like.
The above embodiments are intended only to illustrate, not to limit, the technical solutions of the embodiments of the present invention. Although the embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.