Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment", and the term "another embodiment" means "at least one other embodiment". Related definitions of other terms will be given in the description below.
The term "model conversion" as used herein refers to the process of converting an engineering design drawing into a digital model. Taking BIM as an example, a building design institute can convert a two-dimensional drawing into a three-dimensional building model through BIM conversion, and use the converted model to visually check and optimize the design, thereby improving working efficiency. A constructor can promptly convert the design institute's latest two-dimensional drawing into a three-dimensional building model to avoid information deviation caused by inconsistency between drawings and models, and can use the rich information in the converted model to optimize construction workflow management, including but not limited to adjusting material quantities in time, adjusting the engineering schedule and construction sequence, controlling construction cost, analyzing fund usage, and the like. A building operation and maintenance party can also generate a BIM model from the drawing and, in combination with technologies such as sensors and the Internet of Things, monitor the building state in real time and visually, so as to reduce operation and maintenance cost and improve operation and maintenance efficiency.
Automatic model conversion requires extracting different types of components and labeled text from the drawing. However, since most drawings do not name layers or arrange layer contents in a standard manner, a single layer may contain several different components or labeled texts, so different types of components and labeled texts cannot be separated by simple layer matching. Some modeling software attempts to extract component information from drawings using Artificial Intelligence (AI) methods, such as deep learning and convolutional neural networks, to automate the model conversion process (such as BIM conversion). However, training an AI model for conversion requires manual labeling of a large number of drawings, which consumes a large amount of manpower. In addition, because engineering drawings are large, converting them with a computationally intensive AI method is time consuming and consumes significant GPU resources. Moreover, due to noise in engineering drawings, limitations of automatic recognition models, and the like, the accuracy of models converted by AI often cannot meet engineering requirements.
To at least partially solve the above-described problems, as well as other potential problems, embodiments of the present disclosure propose a solution for generating a digital model from a design drawing. The solution selects lower-resolution images for the computationally intensive AI drawing recognition, thereby increasing the speed of AI-based component extraction. Pixel-level detection on a super-resolution-restored or magnified image, together with interactive adjustment of the recognized data, is then used to improve the accuracy of the recognized data, thereby generating a high-accuracy digital model.
In addition, the solution introduces a feedback mechanism when training the AI model for drawing recognition. The feedback mechanism uses less labeled data in the AI model's initial training set and iteratively improves the training set with data whose accuracy has been improved during the conversion process, in order to retrain the AI model. This reduces the workload of manual labeling, reduces the influence of labeling noise over the course of iterative training, and improves the quality of the AI model, so that the model ultimately and effectively meets engineering requirements.
For purposes of illustration, the solution will be described in this disclosure in the context of building drawing model conversion, but it should be understood that the same techniques are applicable to other similar conversion scenarios, such as industrial design model conversion, and the like.
FIG. 1 illustrates a schematic diagram of an example environment 100 in which embodiments of the present disclosure may be implemented. Environment 100 includes a computing device 101. Although shown as a single device, computing device 101 may also be a plurality of devices, virtual devices, or any other form of device suitable for implementing embodiments of the disclosure.
For illustration, different images in the computing device 101, namely a first image 110 and a second image 130, are also shown in FIG. 1. The first image 110 and the second image 130 may be images of the same design drawing at different resolutions. In some embodiments, the second image 130 may be converted from the first image 110, as will be described later. According to some embodiments of the present disclosure, the computing device 101 may extract a first data set 120 from the first image 110 and a second data set 140 from the second image 130. Furthermore, using methods according to embodiments of the present disclosure, the computing device may also utilize the first data set 120 and the second data set 140 to generate a digital model 150 of the design target. This will be described in more detail below in connection with the other figures.
It should be understood that the environment 100 is described for illustrative purposes only and does not imply any limitation on the scope of the present disclosure. The environment 100 may also include devices, components, and other entities not shown in FIG. 1. Also, embodiments of the present disclosure may be applied in environments other than the environment 100.
FIG. 2 illustrates a flow chart of a method 200 of generating a digital model from a design drawing in accordance with some embodiments of the present disclosure. The method 200 may be performed, for example, by the computing device 101 (more specifically, a processor of the computing device 101). The method 200 is described in detail below with reference to FIG. 1.
At block 210, the computing device 101 extracts a first data set 120 from the first image 110. The first image 110 is an image of a drawing of a design target, such as a building design drawing, and the first data set 120 includes at least the location, geometry, and type of one or more types of components of the design target. Taking an architectural design drawing as an example, the computing device 101 may extract the geometry, location, associated labeling information, and the like of component types such as walls, doors, and windows from the architectural drawing of the design target, and may extract the geometry, location, associated labeling information, and the like of components such as walls, beams, and columns from the structural drawing of the design target.
In various embodiments, the first image 110 may have any suitable format, including, but not limited to, the DWG format used by Computer-Aided Design (CAD) software, the Portable Document Format (PDF), the Portable Network Graphics (PNG) format, the Joint Photographic Experts Group (JPG) format, the Bitmap (BMP) format, and the like. In various embodiments, taking a building as an example, the design drawing in the first image 110 may be a plan view, an elevation view, a cross-sectional view, or the like.
When the first image 110 is, for example, an image of a non-standardized drawing containing a variety of different components or labeled texts, the computing device 101 may use a trained recognition model (such as an AI model) to extract the first data set 120 from the first image 110. In some embodiments, the computing device 101 trains the recognition model using a training set obtained by labeling component information in design drawings. Data extraction based on AI models and the like typically requires significant resources and time. Thus, in such computationally intensive embodiments, the first image 110 used to extract the first data set 120 may have a suitably lower resolution. The resulting first data set 120 accordingly contains less accurate geometry and location information, which is refined by further processing.
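By way of non-limiting illustration, the following Python sketch outlines the idea of block 210: running a recognition model on a deliberately downscaled image and mapping the results back to the original coordinate frame. The recognition_model function is a hypothetical stub standing in for the trained AI model, and the ComponentEntry fields and the scale factor are illustrative assumptions.

```python
from dataclasses import dataclass

import cv2  # OpenCV, used for image loading and resizing


@dataclass
class ComponentEntry:
    """One extracted component: position (bounding box), geometry, and type."""
    bbox: tuple          # (x, y, width, height) in pixel coordinates
    geometry: list       # e.g., polyline vertices describing the shape
    component_type: str  # e.g., "wall", "door", "window"


def recognition_model(image):
    """Hypothetical stand-in for the trained AI recognition model.

    A real implementation would run a detector over the drawing image;
    here a single fixed entry is returned purely for illustration.
    """
    return [ComponentEntry(bbox=(120, 80, 300, 12),
                           geometry=[(120, 80), (420, 92)],
                           component_type="wall")]


def extract_first_dataset(drawing_path, scale=0.25):
    """Extract the first data set from a deliberately downscaled image.

    Running the computationally intensive model at lower resolution trades
    positional accuracy for speed; the lost accuracy is recovered later by
    pixel-level correction against the higher-resolution second image.
    """
    full = cv2.imread(drawing_path)
    low_res = cv2.resize(full, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_AREA)
    entries = recognition_model(low_res)
    # Map pixel coordinates back to the original image's coordinate frame.
    for e in entries:
        x, y, w, h = e.bbox
        e.bbox = (x / scale, y / scale, w / scale, h / scale)
    return entries
```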
The extraction of the first data set 120 is described in more detail below in connection with FIG. 3. FIG. 3 illustrates a flow chart of a method 300 of extracting a data set for generating a digital model from an image of a design drawing, in accordance with some embodiments of the present disclosure. The method 300 may be performed, for example, by the computing device 101, and is described in detail below with reference to FIG. 1.
At block 310, the computing device 101 may extract a component data set from the first image 110, the component entries in the component data set including at least a location, a geometry, and a type of the component. In some embodiments, the computing device 101 may utilize the trained AI model to holistically identify and extract the locations, geometric information, and types of the instances of multiple component types (e.g., walls) included in the drawing.
At block 320, the computing device 101 may extract an annotation data set from the first image 110, the annotation entries in the annotation data set including at least a location and the text content of the annotation. For example, the text may contain dimensional values of a component and additional information about various properties of the component. In some embodiments, the computing device 101 may utilize the trained AI model to holistically identify and extract the annotation data included in the drawing.
At block 330, the computing device 101 may associate each component entry in the component data set with the annotation entry in the annotation data set that corresponds to the same component, thereby generating the first data set 120. In some embodiments, the computing device 101 may match components with the corresponding labeled text based on relative positional relationships, leader lines between them, and the like, resulting in a first data set 120 comprising each component's position, range, size, type, and/or other information (such as reinforcement).
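By way of non-limiting illustration, the following Python sketch shows one simple way to perform the association of block 330 by nearest-distance matching. The entry format (dicts with "center", "position", and "text" keys) and the max_distance threshold are assumptions for illustration; as noted above, a practical implementation may also consider leader lines and relative orientation.

```python
import math


def associate(components, annotations, max_distance=50.0):
    """Match each component with its nearest annotation (block 330 sketch).

    components:  list of dicts with a "center" (x, y) key
    annotations: list of dicts with "position" (x, y) and "text" keys
    max_distance is an assumed threshold in pixels.
    """
    first_dataset = []
    for comp in components:
        cx, cy = comp["center"]
        best, best_dist = None, max_distance
        for ann in annotations:
            ax, ay = ann["position"]
            dist = math.hypot(ax - cx, ay - cy)
            if dist < best_dist:
                best, best_dist = ann, dist
        # Components with no nearby annotation keep a None label.
        first_dataset.append({**comp,
                              "annotation": best["text"] if best else None})
    return first_dataset
```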
It should be understood that the present disclosure describes component data sets and annotation data sets for ease of illustration only, and is not intended to limit the extracted data to a particular format or form. For example, in some implementations, the component data set and the annotation data set may be stored in the same data set for further processing. It should also be understood that the methods of the present disclosure are not limited to a particular order of extraction of the component data and the annotation data. The component data and annotation data may be extracted in the reverse of the order shown in FIG. 3, in parallel, or in any other suitable order.
With continued reference to FIG. 2, at block 220, the computing device 101 extracts a second data set 140 from a second image 130 of the drawing, the second data set 140 including the locations and geometries of one or more types of components of the design target. The resolution of the second image 130 is higher than the resolution of the first image 110. Further, the positions and geometries of the components included in the second data set 140 have a higher accuracy than those included in the first data set 120. Thus, the second data set 140 may be used as base data to correct the less accurate geometry and position information in the first data set 120.
In some embodiments, the computing device 101 may employ different methods, corresponding to the different possible formats of the first image 110, to enhance image quality and obtain the second image 130. In some embodiments, for example, when the first image 110 is in the PDF or DWG format, the computing device 101 may obtain the second image 130 by increasing the output image magnification of the first image 110 or by performing super-resolution restoration on the first image 110. In some embodiments, for example, when the first image 110 is in the PNG format, the computing device 101 may perform super-resolution restoration on the first image 110 to obtain the second image 130.
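As a minimal sketch of the raster case, the following Python snippet produces a magnified second image. Plain bicubic interpolation is used here as a simple stand-in for true super-resolution restoration, and the factor of 4 is an assumption; a learned super-resolution model would be substituted where higher fidelity is required.

```python
import cv2


def enhance_image(first_image, factor=4):
    """Produce a higher-resolution second image from the first image.

    Bicubic interpolation is a dependency-free stand-in; a learned
    super-resolution network would restore finer detail at higher cost.
    For vector formats such as DWG/PDF, re-rendering at a higher output
    magnification is preferable to resampling a raster image.
    """
    return cv2.resize(first_image, None, fx=factor, fy=factor,
                      interpolation=cv2.INTER_CUBIC)
```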
In some embodiments, the computing device 101 may utilize a pixel-level shape-recognition method to extract the second data set 140 from the second image 130. For example, the computing device 101 may use straight-line detection, arc detection, and similar methods at the pixel level to detect distinctive features of certain component types (e.g., in a building drawing, a wall may appear as a pair of parallel straight lines, and a door may include an arc), thereby extracting the exact location and geometry of the component. In some embodiments, the computing device 101 may utilize the component information in the first data set 120 to find the approximate area of the corresponding component in the second image 130. The computing device 101 may then perform shape-recognition detection on the pixels within that area, resulting in highly accurate position and geometry information for the component.
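The following Python sketch illustrates pixel-level line detection for a wall-like component within the approximate region supplied by the first data set 120. It relies on standard OpenCV edge and Hough-line detection; all thresholds are illustrative assumptions to be tuned per drawing.

```python
import cv2
import numpy as np


def detect_wall_lines(second_image, bbox):
    """Pixel-level detection of the straight lines typical of walls.

    bbox is the approximate (x, y, w, h) region taken from the first
    data set; restricting detection to this region keeps the pixel-level
    pass cheap. Canny and Hough thresholds below are assumptions.
    """
    x, y, w, h = (int(v) for v in bbox)
    roi = second_image[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=int(0.5 * max(w, h)), maxLineGap=5)
    if lines is None:
        return []
    # Translate line endpoints back into whole-image coordinates.
    return [(x1 + x, y1 + y, x2 + x, y2 + y) for x1, y1, x2, y2 in lines[:, 0]]
```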
At block 230, the computing device 101 corrects the first data set 120 based at least on the second data set 140. The computing device 101 may use the information in the second data set 140 acquired at block 220 as baseline information for the correction. In some embodiments, the computing device 101 may align the position and geometry of a component in the first data set 120 with the position and geometry of the corresponding component in the second data set 140, i.e., replace them with the more accurate position and geometry from the second data set 140.
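A minimal sketch of this alignment follows, assuming entries are dicts keyed by "bbox" and "geometry" and matched by bounding-box centers within an assumed pixel tolerance.

```python
def align_to_baseline(first_entries, second_entries, tolerance=20.0):
    """Overwrite coarse positions in the first data set with the more
    accurate values from the second data set (block 230 sketch)."""
    def center(bbox):
        x, y, w, h = bbox
        return (x + w / 2, y + h / 2)

    for entry in first_entries:
        cx, cy = center(entry["bbox"])
        for baseline in second_entries:
            bx, by = center(baseline["bbox"])
            if abs(bx - cx) <= tolerance and abs(by - cy) <= tolerance:
                # Adopt the baseline position and geometry.
                entry["bbox"] = baseline["bbox"]
                entry["geometry"] = baseline["geometry"]
                break
    return first_entries
```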
In some embodiments, after correcting the first data set 120 with the second data set 140, the computing device 101 may utilize information such as the drawing scale to convert the values representing the position and geometry of a component in the first data set 120 into corresponding values in a physical coordinate system, thereby obtaining the physical dimensions of the component. The physical dimension values thus obtained are generally not round numbers, which typically does not match the original design. In some embodiments, the computing device 101 may round the converted physical dimensions based on one or more specified moduli and take the rounded dimension values as a further correction result. For example, where the design target is a building, the elements in the drawing are typically designed as multiples of some standard size unit (i.e., the building modulus). In this case, the computing device 101 may modify (i.e., round) a dimension value of the component (e.g., a side length) to the multiple of the corresponding building modulus closest to that value.
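For example, the modulus-based rounding may be sketched as follows; the 100 mm default reflects the common basic building modulus (1M) and is an assumption to be configured per project standard.

```python
def snap_to_modulus(size_mm, modulus_mm=100):
    """Round a converted physical dimension to the nearest multiple of a
    specified modulus, as in the further correction step described above."""
    return round(size_mm / modulus_mm) * modulus_mm


# Example: a wall length of 2987.4 mm recovered from pixel data
# snaps to 3000 mm, matching the likely original design intent.
assert snap_to_modulus(2987.4) == 3000
```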
Unlike AI-model-based extraction, which often requires GPU resources, the steps of blocks 220 and 230 typically use CPU resources, so that the accuracy improvement can be achieved while saving GPU resources.
In some embodiments, during the correction of the first data set 120, the computing device 101 may interactively visualize at least one of the first data set 120 and the second data set 140 on a user interface to enable interactive adjustment. For example, the computing device 101 may present automatically identified and/or matched component instances on an interactive interface in the form of straight line segments, polyline segments, rectangular boxes, polygonal boxes, circles, and the like, so that a user can manually edit the displayed shapes to fine-tune them. This ensures the accuracy of information extraction and mitigates inaccuracies in the automatic identification and matching performed by AI models and the like.
At block 240, the computing device 101 generates a digital model of the design target based at least on the corrected first data set 120. For example, the computing device 101 may convert the corrected first data set 120 into a model file format supported by common three-dimensional modeling software, such as RVT, OBJ, FBX, and the like, as a digital model or portion thereof.
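As a minimal, format-level illustration, the following Python sketch writes a single wall component as a box in the text-based Wavefront OBJ format. Conversion to proprietary formats such as RVT would instead rely on the corresponding modeling software's tooling; the function name, units, and the wall's axis alignment here are illustrative assumptions.

```python
def write_wall_obj(path, x, y, length, thickness, height):
    """Write one wall as an axis-aligned box in Wavefront OBJ format."""
    # Eight corner vertices of the box (OBJ vertex indices are 1-based).
    xs, ys, zs = (x, x + length), (y, y + thickness), (0.0, height)
    vertices = [(vx, vy, vz) for vz in zs for vy in ys for vx in xs]
    # Six quadrilateral faces referencing the vertices above.
    faces = [(1, 2, 4, 3), (5, 6, 8, 7), (1, 2, 6, 5),
             (3, 4, 8, 7), (1, 3, 7, 5), (2, 4, 8, 6)]
    with open(path, "w") as f:
        for vx, vy, vz in vertices:
            f.write(f"v {vx} {vy} {vz}\n")
        for face in faces:
            f.write("f " + " ".join(str(i) for i in face) + "\n")


# Example usage: a 3 m long, 0.2 m thick, 2.8 m high wall at the origin.
write_wall_obj("wall.obj", 0.0, 0.0, 3.0, 0.2, 2.8)
```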
It should be appreciated that additional actions not shown may also be included in the method 200. For example, the computing device 101 may convert the first image 110 into a processable file format before extracting data from it. In some embodiments, the computing device 101 may convert the first image 110 into a picture format that the AI model is capable of handling, based on a file format indicated by the user or identified from the file extension or header. In some embodiments, the computing device 101 may output the generated model files to the user side for viewing or adjustment by the user using general modeling software (such as BIM modeling software) or the like.
FIG. 4 illustrates a non-limiting example interface 400 according to some embodiments of the present disclosure. The example interface 400 is used to select an image of a design drawing, which may then be used in a method (e.g., the method 200) according to embodiments of the present disclosure to generate a digital model from the design drawing. For example, the example interface 400 may be displayed by a display of the computing device 101 of FIG. 1 to guide a user in uploading an image of a two-dimensional drawing. This image may then be used, for example, by the computing device 101 in the method 200 of FIG. 2 (e.g., as the first image 110). The example interface 400 is described in detail below with reference to FIG. 1.
The example interface 400 includes a button 410. In response to a click on the button 410, the computing device 101 may enable selection (e.g., by a user of the computing device 101) of an image of a design drawing to be used to generate the digital model, and the path of the selected image may be displayed in a text box 405. The image may be in a common file format such as DWG, PDF, PNG, JPG, or BMP. Further, the design drawing may be of one of a plurality of types. Taking building drawings as an example, the design drawing may be a plan view, an elevation view, a cross-sectional view, or the like, each adopting a different viewing angle. In some embodiments, in response to a click on the button 410, the computing device 101 may support bulk uploading of a set of images. It should be appreciated that using a set of images (e.g., images of different portions of a design drawing) to generate a digital model (e.g., as the first image 110) does not depart from the principles and scope of the present disclosure.
The example interface 400 also includes a drop-down box 420 and a drop-down box 430. In some embodiments, during selection of an image using the example interface 400, the computing device 101 may take the format (e.g., DWG, PDF, PNG, JPG, BMP, etc.) selected (e.g., by the user) in the drop-down box 420 as the format of the selected image for subsequent operations (e.g., converting the selected image into a format that the AI model can handle). In some embodiments, during selection of an image using the example interface 400, the computing device 101 may take the type (e.g., plan view, elevation view, cross-sectional view, etc.) selected (e.g., by the user) in the drop-down box 430 as the type of the selected image for subsequent operations (e.g., selecting an appropriate extraction method). This avoids problems in which the file extension does not match the actual format, or in which the labeled text in the drawing does not match the viewing angle the drawing actually adopts.
Methods (e.g., method 200) according to embodiments of the present disclosure may be used to generate digital models from a plurality of different types of design drawings (e.g., drawings that take different perspectives) of a design target. FIG. 5 illustrates an example process 500 of generating a digital model from a plurality of design drawings in accordance with some embodiments of the present disclosure. The example process 500 may be performed by the computing device 101, and the example process 500 will be described below with reference to fig. 1.
In FIG. 5, an image 510-1, an image 510-2, and an image 510-3 (collectively, images 510) may be different types of drawings of the design target (e.g., a plan view, an elevation view, and a cross-sectional view, respectively). In the example process 500, the computing device 101 acquires a data set 520-1 from the image 510-1 in an extraction and correction sub-process 505-1. For example, the computing device 101 may take the image 510-1 as the first image 110 and use the method 200 to extract and correct the first data set 120, yielding the data set 520-1. In a similar manner, the computing device 101 may obtain a data set 520-2 from the image 510-2 in an extraction and correction sub-process 505-2 and a data set 520-3 from the image 510-3 in an extraction and correction sub-process 505-3.
The computing device 101 then integrates the extracted and corrected data sets 520-1, 520-2, and 520-3 in a data integration sub-process 515 to obtain integrated data 525. For example, the computing device 101 may cross-check repeated data and remove redundant data, resulting in the integrated data 525. The integrated data 525 is then used by the computing device 101 in a model generation sub-process 535 to generate a digital model 550. For example, the computing device 101 may convert the integrated data 525 into a model file format supported by modeling software (e.g., RVT, OBJ, FBX, etc.) as the digital model 550.
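A minimal sketch of the deduplication step of the data integration sub-process 515 follows; the entry format ("type" and "bbox" keys) and the tolerance are assumptions, and cross-view consistency checks are omitted for brevity.

```python
def integrate(datasets, tolerance=1e-3):
    """Merge corrected data sets from several drawings, dropping entries
    that describe the same component (data integration sketch)."""
    integrated = []
    for dataset in datasets:
        for entry in dataset:
            # Two entries are duplicates when they share a type and their
            # bounding boxes agree within the assumed tolerance.
            duplicate = any(
                entry["type"] == kept["type"]
                and all(abs(a - b) <= tolerance
                        for a, b in zip(entry["bbox"], kept["bbox"]))
                for kept in integrated
            )
            if not duplicate:
                integrated.append(entry)
    return integrated
```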
It should be appreciated that at least some of the sub-processes in process 500 (e.g., extraction and correction sub-process 505-1, extraction and correction sub-process 505-2, and extraction and correction sub-process 505-3) may be performed sequentially, in parallel, or in other suitable order without departing from the scope of the present disclosure. Further, more or fewer images 510 may be used in the example process 500, depending on the particular application scenario.
The methods and processes described above can achieve automatic or semi-automatic model conversion from design drawings of any format (standardized or non-standardized), and can automatically generate a digital model by acquiring highly accurate data with little manual effort on the basis of automatic extraction and correction of drawing information. Compared with conventional automatic conversion schemes, the scheme of the present disclosure has a wider application range and can ensure high conversion accuracy even when the automatically extracted building information is not completely accurate. Taking BIM conversion as an example, the scheme of the present disclosure can be used by building design institutes, constructors, and operation and maintenance parties to automate the BIM conversion process, so as to use fewer resources and obtain the required BIM more promptly, thereby improving working efficiency.
After the data used to generate the digital model is acquired using the methods described above, some embodiments of the present disclosure further include a process that provides feedback on the automatic extraction of drawing data. The feedback process may be performed by the computing device 101 and is described in detail below with reference to FIG. 1.
In some embodiments, the computing device 101 may use a first recognition model to extract the first data set 120 from the first image 110. In some embodiments, the first recognition model is determined using a training set obtained by manually annotating component information in design drawings. For example, information in the first image 110 of the drawing that is suitable for training the first recognition model may be manually annotated and stored as a training set. The type of information that needs to be labeled depends on the particular application, and aspects of the present disclosure are not limited in this respect. The training set may then be used to train the first recognition model. In training the first recognition model, the computing device 101 may use a smaller training set that labels only a portion of the component information in the drawings, in order to determine the first recognition model relatively quickly.
After extracting the first data set 120 using the first recognition model, the computing device 101 may automatically or semi-automatically correct the first data set 120 using steps of methods according to embodiments of the present disclosure (e.g., block 220 and block 230 of the method 200) and determine, using the corrected first data set 120, a second recognition model for extracting components from images. In some embodiments, the computing device 101 may retrain the recognition model using the corrected first data set 120 as a training set to obtain the second recognition model. In other embodiments, the computing device 101 may utilize the corrected first data set 120 and an automatic annotation algorithm to automatically generate new annotation data for training from the first image 110. The computing device 101 may then, optionally after a small number of manual adjustments to the new annotation data, retrain the recognition model with it as a training set to obtain the second recognition model.
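The overall feedback loop may be sketched in Python as follows. The train, extract, and correct functions are hypothetical stubs standing in for model training, block 210, and blocks 220-230 of the method 200, respectively, and the number of rounds is an assumption.

```python
def train(training_set):
    """Hypothetical stand-in: fit a recognition model on the labeled set."""
    return {"trained_on": len(training_set)}


def extract(model, image):
    """Hypothetical stand-in for block 210: run the model on an image."""
    return []


def correct(dataset):
    """Hypothetical stand-in for blocks 220-230: pixel-level correction."""
    return dataset


def feedback_training_loop(images, initial_labels, rounds=3):
    """Iteratively improve the recognition model with corrected data.

    Each round's corrected extractions become the next round's
    (higher-quality) training set, reducing manual labeling effort.
    """
    training_set = initial_labels        # small, manually labeled seed set
    model = train(training_set)          # first recognition model
    for _ in range(rounds):
        extracted = [extract(model, img) for img in images]
        corrected = [correct(data) for data in extracted]
        training_set = corrected         # improved labels, little manual work
        model = train(training_set)      # second (and later) recognition model
    return model
```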
Since the quality of the training set used to train the second recognition model is improved during the feedback process, the quality of the second recognition model trained on this higher-quality training set will be higher than that of the first recognition model. It should be appreciated that the computing device 101 may iteratively perform the feedback process described above to obtain a recognition model of the desired quality for digital model generation (e.g., BIM conversion) in accordance with embodiments of the present disclosure.
FIG. 6 illustrates a schematic block diagram of an apparatus 600 for generating a digital model from a design drawing, in accordance with some embodiments of the present disclosure. The apparatus 600 may include an extraction module 610, a correction module 620, and a generation module 630. The extraction module 610 is configured to extract a first data set from a first image of a drawing of the design target, the first data set including at least a location, a geometry, and a type of one or more types of components of the design target, and to extract a second data set from a second, higher-resolution image of the drawing, the second data set including locations and geometries of one or more types of components of the design target, where the locations and geometries of the components included in the second data set have a higher accuracy than those included in the first data set. The correction module 620 is configured to correct the first data set based at least on the second data set, and the generation module 630 is configured to generate a digital model of the design target based at least on the corrected first data set.
In some embodiments, extracting the first data set by the extraction module 610 includes extracting a component data set from the first image, the component entries in the component data set including at least a location, a geometry, and a type of the component; extracting an annotation data set from the first image, the annotation entries in the annotation data set including at least a location and the text content of the annotation; and generating the first data set by associating each component entry in the component data set with the annotation entry in the annotation data set that corresponds to the same component.
In some embodiments, correcting the first data set by the correction module 620 includes aligning the positions and geometries of components in the first data set with the positions and geometries of the corresponding components in the second data set.
In some embodiments, correcting the first data set by the correction module 620 further includes converting values in the aligned first data set representing the positions and geometries of components into corresponding values in a physical coordinate system to obtain the physical dimensions of the components, and rounding the physical dimensions of the components in the first data set based on one or more specified moduli.
In some embodiments, the extraction module 610 is further configured to acquire the second image by at least one of increasing an output image magnification of the first image, or performing super-resolution restoration on the first image.
In some embodiments, the extraction module 610 is further configured to use a pixel-level shape-recognition method to obtain, from the second image, the locations and geometries of the components in the second data set.
In some embodiments, the correction module 620 is further configured to interactively visualize at least one of the first data set and the second data set on the user interface during correction of the first data set.
In some embodiments, the extraction module 610 is further configured to extract the first data set of components of the design target from the first image using a first recognition model.
In some embodiments, the apparatus 600 further includes a training module 640 configured to determine the first recognition model using a training set obtained by annotating component information in design drawings.
In some embodiments, the training module 640 is further configured to determine, using the corrected first data set, a second recognition model for extracting component data sets from images.
In some embodiments, the generation module 630 may convert the corrected first data set into a model file format supported by common three-dimensional modeling software, such as RVT, OBJ, FBX, and the like, as a digital model of the design target or a portion thereof. In some embodiments, the generation module 630 may integrate a plurality of corrected first data sets and then convert the integrated data into a common model file format.
It should be appreciated that, depending on the particular implementation, the operations described above as being performed by one module of the apparatus 600 may also be performed by another module or by multiple modules, and the operations described as being performed by multiple modules may also be performed by a single module; the scope of the present disclosure is not limited in this respect. It should also be appreciated that the apparatus 600 may include modules not shown in FIG. 6, such as modules that perform general modeling operations such as data integration.
FIG. 7 illustrates a block diagram of a computing device 700 capable of implementing various embodiments of the present disclosure. The device 700 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. It may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in FIG. 7, the device 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 may also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Various components in the device 700 are connected to the I/O interface 705, including an input unit 706 such as a keyboard or a mouse; an output unit 707 such as various types of displays or speakers; a storage unit 708 such as a magnetic disk or an optical disk; and a communication unit 709 such as a network card, a modem, or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the Internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the various methods described above (e.g., method 200). For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When the computer program is loaded into RAM 703 and executed by computing unit 701, one or more steps of the various methods described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the various methods described above by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. Such program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, as a stand-alone software package partly on a machine and partly on a remote machine, or entirely on a remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.