CN118519637A - A compiling method, parsing method and device - Google Patents
- Publication number
- CN118519637A (application CN202310154098.5A)
- Authority
- CN
- China
- Prior art keywords
- array
- instruction group
- instruction
- instructions
- offset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Devices For Executing Special Programs (AREA)
Abstract
The application provides a compiling method, a parsing method, and a device, relating to a byte code file in which multiple constant pools may be used to store the offset values of data. When the storage margin of a created constant pool is insufficient, a new constant pool is created to continue storing offset values, so that each array does not exceed the limit on the number of offset values, the reference bit width of the reference instruction is kept bounded, and the volume of the byte code file is reduced. Moreover, when the amount of data exceeds the offset limit of one array, the source code file can still be compiled into a single byte code file by creating a new array, which reduces the number of times the byte code file is read, lowering I/O overhead and improving performance.
Description
Technical Field
The embodiments of the present application relate to the field of electronic technologies, and in particular, to a compiling method, a parsing method, and a device.
Background
The application developer compiles the source code file into a byte code file through a compiler, packages the byte code file and other files into an application installation package, and then publishes the application installation package to the application platform. The user downloads an application installation package issued by an application developer on an application platform through the terminal, the terminal analyzes the byte code file from the application installation package, and invokes a virtual machine in the terminal to execute the byte code file, so that the starting and running of the application are completed.
There are a large number of reference instructions in the byte code file, and the reference bit width of a reference instruction, that is, the length of its address code, directly affects the volume and execution performance of the byte code file. The reference bit width is controlled by limiting the number of data items referenced in the byte code file: for example, if the number of referenced data items is limited to 65536, then 65536 data items can be encoded, i.e., distinguished, with 2 bytes, so a reference bit width of 2 bytes can be used.
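The relationship between reference bit width and the number of distinguishable data items can be illustrated with a short sketch (illustrative only, not part of the claimed method): an address code of n bytes can distinguish at most 2^(8n) data items.

```python
# Illustrative sketch: how many data items an address code of a given
# reference bit width can distinguish (2 bytes -> 65536, as in the example above).

def max_entries(ref_bit_width_bytes: int) -> int:
    """Maximum number of data items encodable by an address code of this width."""
    return 1 << (8 * ref_bit_width_bytes)

assert max_entries(2) == 65536          # 2-byte reference bit width
assert max_entries(4) == 65536 * 65536  # 4-byte reference bit width
```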
But for relatively large applications, the amount of referenced data easily exceeds this limit. When it does, either the application is split into multiple byte code files for storage, or the reference bit width is increased. Splitting the application into multiple byte code files increases the number of times the byte code files are read, resulting in increased input/output (I/O) overhead and degraded performance; using a longer reference bit width results in a large byte code file.
Disclosure of Invention
The embodiment of the application provides a compiling method, an analyzing method and a device, which can store more data in a byte code file on the premise of not increasing the reference bit width of a reference instruction.
In a first aspect, a compiling method is provided, which may be performed by a compiling apparatus. Optionally, the compiling apparatus may be an electronic device, or the method may be executed by a module or unit (such as a compiler or a virtual machine compiler) in the electronic device.
The method comprises the following steps: acquiring a source code file; compiling the source code file to obtain a byte code file, wherein the byte code file comprises a first instruction group and a second instruction group, the offset value of the data referenced by the reference instruction in the first instruction group is stored in the first array, the offset value of the data referenced by the reference instruction in the second instruction group is stored in the second array, and the second array is created when the storage margin of the first array is insufficient.
In the method, the byte code file stores the offset values of data using multiple arrays (such as the first array and the second array). When the storage margin of a created array (such as the first array) is insufficient, a new array (the second array) is created to continue storing offset values. This ensures that each array does not exceed the limit on the number of offset values, which keeps the reference bit width of the reference instruction bounded and reduces the volume of the byte code file. Moreover, when the amount of data exceeds the offset limit of one array, the source code file can still be compiled into a single byte code file by creating a new array, which reduces the number of times the byte code file is read, lowering I/O overhead and improving performance. Thus, based on the above method, more data can be stored in the byte code file without increasing the reference bit width of the reference instruction.
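The array-creation behavior described above can be sketched as follows (a minimal illustration with hypothetical names; the patent does not prescribe this implementation):

```python
# Hypothetical sketch of the first aspect: offset values are appended to the
# current array, and a new array is created once the current one reaches its
# capacity limit ("storage margin insufficient").

ARRAY_CAPACITY = 65536  # per-array limit keeps a 2-byte reference bit width

def store_offsets(offsets):
    arrays = [[]]  # the first array
    for off in offsets:
        if len(arrays[-1]) >= ARRAY_CAPACITY:  # storage margin insufficient
            arrays.append([])                  # create a new (second) array
        arrays[-1].append(off)
    return arrays

# 70000 offset values no longer fit in one array, so a second array is created,
# while everything still lives in one byte code file:
arrays = store_offsets(range(70000))
assert len(arrays) == 2
assert len(arrays[0]) == 65536 and len(arrays[1]) == 70000 - 65536
```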
With reference to the first aspect, in a possible implementation manner, the byte code file further includes a third array, where the third array includes the offset value of the first array and the offset value of the second array; the first instruction group includes the index of the first array in the third array; the second instruction group includes the index of the second array in the third array. In this approach, the offset value of the array that stores the data offset values is read indirectly through the index carried in the instruction group; since the index of an array is smaller than its offset value, this helps reduce the volume of the byte code file.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the first instruction group includes the offset value of the first array; the second instruction group includes the offset value of the second array. In this approach, the offset value of the array that stores the data offset values is read directly from the instruction group, so instruction parsing is faster.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the total number of offset values that the first array and/or the second array supports storing is less than or equal to 65536; the reference bit width of the reference instructions in the first instruction group and/or the second instruction group is less than or equal to 2 bytes.
When the total number of offset values an array supports storing is less than 65536 and the reference bit width of the reference instruction is less than 2 bytes (for example, an array capacity of 256 with a 1-byte reference bit width), the volume of the byte code file can be further reduced compared with an array capacity of 65536 with a 2-byte reference bit width, because a reference bit width smaller than 2 bytes is used.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the byte code file further includes a third instruction group, an offset value of data referenced by a reference instruction in the third instruction group is stored in a fourth array, and the fourth array is dedicated to storing the offset value of data referenced by the reference instruction in the third instruction group. Thus, the independence of data can be maintained in a running concurrency scene, and the memory overhead is reduced.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the first instruction set and/or the second instruction set are: a function, a combination of a preset number of instructions, or a combination of instructions corresponding to debug information.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the first array and/or the second array includes offset values of multiple types of data.
With reference to the first aspect or any implementation manner thereof, in another possible implementation manner, the offset values included in the first array are different from each other; and/or, the offset values included in the second array are different from each other.
In other words, the offset values in the same array may be deduplicated, which helps reduce redundant data, and thus the size of the bytecode file.
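A minimal sketch of such deduplication (a hypothetical helper, for illustration only): identical offset values in the same array are stored once, and reference instructions share the same index.

```python
# Hypothetical sketch: deduplicate offset values within one array, mapping each
# distinct offset to the index that reference instructions would use.

def dedup_offsets(offsets):
    table, index_of = [], {}
    for off in offsets:
        if off not in index_of:      # store each distinct offset only once
            index_of[off] = len(table)
            table.append(off)
    return table, index_of

table, index_of = dedup_offsets([0x10, 0x20, 0x10, 0x30, 0x20])
assert table == [0x10, 0x20, 0x30]   # redundant entries removed
assert index_of[0x10] == 0 and index_of[0x20] == 1
```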
In a second aspect, a parsing method is provided, which may be performed by a parsing apparatus. Optionally, the parsing device may be an electronic device, or may be executed by a module or unit (such as a virtual machine) in the electronic device.
The method comprises the following steps: obtaining a byte code file, wherein the byte code file comprises a first instruction group and a second instruction group, the offset value of data referenced by a reference instruction in the first instruction group is stored in a first array, the offset value of data referenced by a reference instruction in the second instruction group is stored in a second array, and the second array is created when the storage margin of the first array is insufficient; and parsing the byte code file.
In a possible implementation manner, the parsing of the byte code file includes: parsing the byte code file according to the file type of the byte code file.
With reference to the second aspect, in a possible implementation manner, the byte code file further includes a third array, where the third array includes the offset value of the first array and the offset value of the second array; the first instruction group includes the index of the first array in the third array; the second instruction group includes the index of the second array in the third array. The parsing of the byte code file includes: taking a first reference instruction in the first instruction group as an example, when the first reference instruction is parsed, the offset of the first array is first read from the third array according to the index, carried in the first instruction group, of the first array in the third array; the offset value corresponding to the index included in the reference instruction is then read from the array pointed to by the offset of the first array; and the referenced data is then obtained according to the read offset value.
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the first instruction group includes the offset value of the first array; the second instruction group includes the offset value of the second array. The parsing of the byte code file includes: taking a first reference instruction in the first instruction group as an example, when the first reference instruction is parsed, the offset of the first array is first read from the first instruction group; the offset value corresponding to the index included in the reference instruction is then read from the array pointed to by the offset of the first array; and the referenced data is then obtained according to the read offset value.
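The two resolution flows described in the implementations above can be sketched as follows (all data structures and names are hypothetical illustrations, not the patent's actual byte code layout): the instruction group either carries an index into the third array of array offsets, or carries the array's offset value directly.

```python
# Hypothetical model of the two lookup flows.
third_array = [0x1000, 0x2000]           # offsets of the first and second arrays
arrays_by_offset = {
    0x1000: [0x1A2B, 0x3C4D],            # first array: data offset values
    0x2000: [0x5E6F],                    # second array
}

def resolve_indirect(group_array_index, instr_index):
    """Group carries an index into the third array (two-step lookup)."""
    array_offset = third_array[group_array_index]       # step 1: array offset
    return arrays_by_offset[array_offset][instr_index]  # step 2: data offset

def resolve_direct(group_array_offset, instr_index):
    """Group carries the array's offset value directly (one-step lookup)."""
    return arrays_by_offset[group_array_offset][instr_index]

assert resolve_indirect(0, 1) == 0x3C4D
assert resolve_direct(0x2000, 0) == 0x5E6F
```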
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the total number of offset values that the first array and/or the second array supports storing is less than or equal to 65536; the reference bit width of the reference instructions in the first instruction group and/or the second instruction group is less than or equal to 2 bytes.
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the byte code file further includes a third instruction group, an offset value of data referenced by a reference instruction in the third instruction group is stored in a fourth array, and the fourth array is dedicated to storing the offset value of data referenced by the reference instruction in the third instruction group.
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the first instruction set and/or the second instruction set is: a function, a combination of a preset number of instructions, or a combination of instructions corresponding to debug information.
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the first array and/or the second array includes offset values of multiple types of data.
With reference to the second aspect or any implementation manner thereof, in another possible implementation manner, the offset values included in the first array are different from each other; and/or, the offset values included in the second array are different from each other.
In a third aspect, there is provided an electronic device comprising a module/unit for performing the method of the first aspect or any one of its possible designs, or a module/unit for performing the method of the second aspect or any one of its possible designs; these modules/units may be implemented by hardware, or may be implemented by hardware executing corresponding software.
In a fourth aspect, a chip is provided, which is coupled to a memory in an electronic device, and is configured to call a computer program stored in the memory and execute the technical solution of the first aspect or any of the possible designs thereof, or is configured to call a computer program stored in the memory and execute the technical solution of the second aspect or any of the possible designs thereof; wherein "coupled" means that the two elements are joined to each other, either directly or indirectly.
In a fifth aspect, a computer readable storage medium is provided, the computer readable storage medium comprising a computer program, which when run on an electronic device causes the electronic device to perform the technical solutions of the first aspect and any of the possible designs thereof as described above, or the technical solutions of the second aspect and any of the possible designs thereof as described above.
In a sixth aspect, there is provided a computer program comprising instructions which, when run on a computer, cause the computer to perform the technical solutions of the first aspect and any of its possible designs or the second aspect and any of its possible designs as described above.
In a seventh aspect, a processing system is provided, the system comprising any of the electronic devices described above.
It should be noted that, for the advantages of the second aspect to the seventh aspect, reference may be made to the advantages of the first aspect or the embodiments thereof, and details are not described again.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
FIG. 2 is a schematic diagram of a compilation process.
Fig. 3 is an application scenario of virtual machine technology.
Fig. 4 is a schematic diagram of direct and indirect references.
FIG. 5 is a schematic diagram of one layout of a class file.
FIG. 6 is a flow of resolved instruction referencing of a class file.
Fig. 7 is a schematic diagram of a layout of the dex file.
Fig. 8 is an example of a byte code file format of an embodiment of the present application.
Fig. 9 is another example of a byte code file format of an embodiment of the present application.
Fig. 10 is a schematic flowchart of a compiling method according to an embodiment of the application.
Fig. 11 is a schematic flowchart of a parsing method provided in an embodiment of the present application.
Fig. 12 is a schematic diagram of the composition of the apparatus 10 according to an embodiment of the present application.
Fig. 13 is a schematic diagram of the composition of the apparatus 20 according to an embodiment of the present application.
Detailed Description
In order to facilitate understanding of the embodiments of the present application, the following description is made first: in the embodiments of the present application, ordinals such as "first", "second", and "third" are merely for convenience of description, for example to distinguish different arrays, and are not intended to limit the scope of the embodiments; the features of the present application defined herein are described solely to illustrate their functions by way of example, and reference may be made to the prior art for details; the words "exemplary", "such as", "illustratively", "as another example", and the like are used to present concepts in a concrete fashion, and any embodiment or design described with these words should not be interpreted as being preferred or advantageous over other embodiments or designs; the terms "comprising", "including", "having", and variations thereof mean "including but not limited to", unless expressly specified otherwise; "plurality" means two or more; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist, for example, "A and/or B" may indicate: A alone, both A and B, and B alone, where A and B may be singular or plural.
The compiling method and/or the parsing method provided by the embodiments of the present application may be implemented by an electronic device, and the present application does not limit the specific type and implementation form of the electronic device. For example, the electronic device may be a desktop computer, a smart phone, a tablet computer, a notebook computer, a personal computer (personal computer, PC), an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a vehicle-mounted device, a wearable device, or a foldable device, and the methods may also be applied to mobile devices such as vehicles or robots. Electronic devices include, but are not limited to, devices carrying HarmonyOS or other operating systems.
By way of example, fig. 1 shows a schematic diagram of an electronic device 100. The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a compass 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 100. In other embodiments of the application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be separate components or may be integrated in one or more processors. In some embodiments, the electronic device 101 may also include one or more processors 110. The controller can generate operation control signals according to the instruction operation codes and timing signals to complete the control of instruction fetching and instruction execution. In other embodiments, a memory may also be provided in the processor 110 for storing instructions and data. Illustratively, the memory in the processor 110 may be a cache memory. The memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to reuse the instructions or data, they can be called directly from the memory. This avoids repeated accesses and reduces the latency of the processor 110, thereby improving the efficiency of the electronic device 101 in processing data or executing instructions.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a SIM card interface, and/or a USB interface, among others. The USB interface 130 is an interface conforming to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 101, or may be used to transfer data between the electronic device 101 and a peripheral device. The USB interface 130 may also be used to connect headphones through which audio is played.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store one or more computer programs, including instructions. The processor 110 may execute the above-described instructions stored in the internal memory 121. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system; the program storage area may also store one or more applications (such as gallery, contacts, etc.), and the like. The data storage area may store data created during use of the electronic device 101 (e.g., photos, contacts, etc.), and so on. In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage units, flash memory units, universal flash storage (universal flash storage, UFS), embedded multimedia cards (embedded multimedia card, eMMC), and the like. In some embodiments, the processor 110 may cause the electronic device 101 to perform the methods provided in the embodiments of the present application, as well as other applications and data processing, by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor 110. The electronic device 100 may implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, the application processor, and the like.
Before describing embodiments of the present application, several concepts related to the embodiments of the present application are described.
1. Virtual machine technology
Files in computers are commonly stored in binary form. With the practical demand for compatibility across different programming languages and different architectures, virtual machine technologies have emerged, such as the Java virtual machine (Java Virtual Machine, JVM) and the Android Runtime (Android Runtime, ART).
In virtual machine technology, a compiler (also referred to as a virtual machine compiler) takes a source code file as input, and converts the source code file into a byte code file; the virtual machine takes the byte code file as input and executes instructions in the byte code file. The compiler may be a computer program stored in a developer terminal, and the virtual machine is an executable program on a user terminal (such as a mobile phone, a smart watch, or a tablet computer). The developer terminal and/or the user terminal may be an electronic device, such as electronic device 100 shown in fig. 1. The above-mentioned bytecode file may also be referred to as a binary file or a virtual machine file, etc., and for convenience of description, hereinafter collectively referred to as a bytecode file.
Compilation is the process of converting or translating one file (source file) into another (target file), and the process of converting a source code file into a bytecode file by a compiler is the process of compilation. Illustratively, as shown in FIG. 2, compiling may include the following major steps:
1) Parser (parser): parsing the source file into an intermediate representation (intermediate representation, IR);
2) Optimizer (optimizer): optimizing the parsed intermediate representation, for example through dead code elimination, loop unrolling, function inlining, code sinking, and the like;
3) Emitter (emitter): generating the target file from the optimized intermediate representation according to the format of the target file.
In some implementations, the technical solution provided by the embodiments of the present application may be applied in the emitter stage.
By way of example, fig. 3 shows an application scenario of virtual machine technology. As shown in fig. 3, an application developer compiles a source code file into a bytecode file through a compiler, packages the bytecode file and other files (e.g., pictures, text, audio, video, etc.) into an application installation package, and then publishes the application installation package to an application platform. The user downloads an application installation package issued by an application developer on an application platform through the terminal, the terminal analyzes the byte code file from the application installation package, and invokes a virtual machine in the terminal to execute the byte code file, so that the starting and running of the application are completed.
2. Reference instruction
There are a number of instructions referencing data in the byte code file described above, and these may be collectively referred to as reference instructions. Common reference instructions include the following two classes:
1) Instruction to load constant string
For example, the ldc instruction in a JVM bytecode file (i.e., a class file), the const-string instruction in an ART bytecode file (i.e., a dex file), etc.
2) Function call instruction
Such as the invokevirtual instruction in the class file, the invoke-virtual instruction in the dex file, etc.
Reference bit width of a reference instruction: a reference instruction comprises an operation code and an address code, where the operation code indicates the operation executed by the reference instruction or the function it performs, and the address code indicates the position of the data used by the reference instruction. The length of the address code of a reference instruction may also be called the reference bit width of the instruction.
3. Direct reference and indirect reference
Since reference instructions exist in large numbers in the byte code file, the encoding mode of the referenced data in a reference instruction directly affects the volume and execution performance of the byte code file. In current byte code file formats, there are two encoding modes for reference instructions: direct reference and indirect reference.
Fig. 4 is a schematic diagram of direct and indirect references. Diagram (a) of fig. 4 shows direct referencing, i.e., referencing data directly by its offset in the byte code file. For example, if the offset of the data is 0x1A2B3C4D, the reference instruction may be CALL 0x1A2B3C4D, i.e., the reference instruction references the data by the data's offset. Diagram (b) of fig. 4 shows indirect referencing, in which the byte code file contains an array recording the offsets of all data, and data can be referenced indirectly by the index of its offset in the array. For example, if the offset of the array is 0x0011, the array includes 6 offsets, the offset of the data is 0x1A2B3C4D, and the index of the data's offset in the array is 0x1, then the reference instruction may be CALL 0x1.
Direct referencing is commonly used in file formats for executable programs, such as the executable and linkable format (executable and linkable format, ELF). The benefit of direct referencing is that data access is faster; the disadvantage is that the reference bit width is long, e.g., ELF needs 4 bytes to encode a data reference. Therefore, byte code files generated with the direct-reference encoding scheme tend to be large.
Indirect referencing can reduce the reference bit width, such as the class file format of JVM and the dex file format of ART, which both use 2 bytes of reference bit width to reference data. Indirect referencing sacrifices the ability to partially access data to reduce the volume of the bytecode file.
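The trade-off between the two encoding modes can be sketched with the example values from Fig. 4 (an illustrative model, not an actual byte code format): direct reference embeds the full 4-byte data offset in the instruction, while indirect reference embeds only a short index into an array of offsets.

```python
# Hypothetical sketch of the Fig. 4 example.
DATA_OFFSET = 0x1A2B3C4D
# Array located at offset 0x0011 in the file, recording 6 data offsets:
offset_array = [0x00000000, 0x1A2B3C4D, 0x0000AAAA, 0, 0, 0]

# Direct reference: CALL 0x1A2B3C4D -> operand is the 4-byte data offset.
direct_operand = DATA_OFFSET

# Indirect reference: CALL 0x1 -> operand is a short index, resolved via the array.
indirect_operand = offset_array.index(DATA_OFFSET)

assert indirect_operand == 0x1
assert offset_array[indirect_operand] == DATA_OFFSET  # same data, smaller operand
assert indirect_operand.bit_length() <= 16            # fits in 2 bytes
```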
4. JVM bytecode file format
The JVM bytecode file format, i.e., class file format, uses indirect referencing and controls the reference bit width by strictly limiting the number of data.
FIG. 5 shows a schematic diagram of one layout of a class file. Offset 0, offset 1, offset 2, offset 3 …, offset 10 …, etc. in fig. 5 are offsets of data; the array made up of these offsets is called the constant pool (constant pool) in the class file format, and each class file includes a constant pool. The file header (file header) of the class file format includes the offset of the constant pool, and the file header is located at a fixed offset in the class file. The class file format specifies that the number of offsets in a constant pool cannot exceed 65536 (i.e., the maximum value that can be encoded with 2 bytes). The reference bit width of reference instructions in the class file format is 2 bytes, as in invokevirtual 0x0002 and invokevirtual 0x000a shown in fig. 5, where 0x0002 is the index of offset 2 in the constant pool and 0x000a is the index of offset 10 in the constant pool.
Each class in a JVM source code file (i.e., a java file) is compiled into its own class file; that is, the number of class files produced from a java file equals the number of classes in that file. However, if the amount of data of a certain class exceeds 65536, i.e., exceeds the limit on the number of offsets in a constant pool, the class cannot be compiled into a single class file and is instead compiled into multiple class files, so that the reference bit width of the reference instruction remains at 2 bytes.
FIG. 6 illustrates the flow of resolving an instruction reference in a class file. Taking the reference instruction invokevirtual 0x0002 as an example, as shown in Fig. 6, when resolving invokevirtual 0x0002, the offset of the constant pool is first read from the file header; then offset 2, whose index is 0x0002, is read from the array pointed to by the constant pool's offset; and the referenced data is then obtained from offset 2.
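The three-step flow above can be sketched as follows. The dict-based file layout, the field name `constant_pool_offset`, and all offset values are illustrative assumptions, not the real class file binary layout.

```python
# Hedged sketch of the Fig. 6 resolution flow for invokevirtual 0x0002.
class_file = {
    "header": {"constant_pool_offset": 0x0040},  # header field at a fixed position
    0x0040: [0x0100, 0x0110, 0x0120, 0x0130],    # constant pool: offsets 0..3
    0x0120: "data referenced via offset 2",
}

def resolve(file, index):
    pool_offset = file["header"]["constant_pool_offset"]  # 1) read the header
    data_offset = file[pool_offset][index]                # 2) index the constant pool
    return file[data_offset]                              # 3) load the referenced data

assert resolve(class_file, 0x0002) == "data referenced via offset 2"
```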
5. ART byte code file format
The ART bytecode file format, i.e., the dex file format, also employs indirect referencing and, like the class file format, controls the reference bit width of reference instructions by limiting the amount of data in one constant pool. The number of offsets in a constant pool cannot exceed 65536 (i.e., the number of distinct values that 2 bytes can encode). Unlike the class file format, an entire ART source code file is compiled into a single dex file.
However, if the amount of data in the ART source code file exceeds 65536, i.e., exceeds the limit on the number of offsets in a constant pool, the developer must additionally configure compilation options at compile time to adapt the generated dex file to this scenario:
1) If the number of classes or functions exceeds 65536, the ART source code file is compiled into multiple dex files so that the reference bit width of reference instructions remains at 2 bytes;
2) If the number of character strings exceeds 65536, one approach is to compile the ART source code file into multiple dex files; the other is to relax the limit on the number of offsets in a constant pool to 65536 × 65536 (i.e., the number of distinct values that 4 bytes can encode) and correspondingly relax the reference bit width of reference instructions to 4 bytes. That is, for the first 65536 data the reference bit width is 2 bytes, and for data beyond 65536 the reference bit width is 4 bytes.
Fig. 7 shows a schematic diagram of a layout of a dex file, taking the case where the number of character strings exceeds 65536 as an example. The header of the dex file format includes the offset of a constant pool and is located at a fixed offset in the dex file. For the first 65536 offsets in the constant pool (i.e., offset 0 through offset 0xffff), the reference bit width of reference instructions is 2 bytes, such as const-string 0x0002 in Fig. 7, where 0x0002 is the index of offset 2 in the constant pool; for the 65537th and subsequent offsets, the reference bit width is 4 bytes, such as const-string/jumbo 0x0001ffff in Fig. 7, where 0x0001ffff is the index of offset 0x1ffff in the constant pool.
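The split encoding above can be sketched as a small encoder. The opcode byte values follow the dex opcode table (0x1a for const-string, 0x1b for const-string/jumbo), but the operand layout here is simplified for illustration and is not the exact dex instruction format.

```python
# Hedged sketch: string references with index < 65536 use the 2-byte
# const-string form; larger indices fall back to 4-byte const-string/jumbo.
def encode_string_ref(index: int) -> bytes:
    if index < 0x10000:
        return bytes([0x1A]) + index.to_bytes(2, "little")  # const-string
    return bytes([0x1B]) + index.to_bytes(4, "little")      # const-string/jumbo

assert encode_string_ref(0x0002) == b"\x1a\x02\x00"              # 2-byte index
assert encode_string_ref(0x0001FFFF) == b"\x1b\xff\xff\x01\x00"  # 4-byte index
```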
The flow of resolving an instruction reference in a dex file is similar to that of a class file and is not repeated here.
The data of a relatively large application easily exceeds the 65536 limit. In that case, under current bytecode file layouts, the application is either split into multiple bytecode files for storage or made to use long reference bit widths (direct references as above, ART's references to character strings as shown in Fig. 7, etc.). Splitting the application into multiple bytecode files increases the number of file reads, which raises I/O overhead and degrades performance; using long reference bit widths results in a large bytecode file.
In view of the above problems, the embodiments of the present application provide a byte code file format capable of storing more data in a byte code file without increasing the reference bit width of a reference instruction.
Specifically, the bytecode file format of the embodiments of the present application supports using multiple constant pools to store the offset values of data, with the length of each constant pool not exceeding a specified maximum length so as to control the reference bit width of reference instructions. A reference unit, which is a group of reference instructions, includes information about its constant pool to indicate that the offsets of the data referenced by the reference instructions in the unit are stored in that constant pool. Thus, when the amount of data in a source code file exceeds the length allowed for a single constant pool, multiple constant pools can be used to store the offsets of the data, so the source code file need not be compiled into multiple bytecode files, which helps reduce I/O overhead and improve performance; and because the reference bit width of reference instructions is not increased, the bytecode file stays small.
In an embodiment of the present application, a set of reference instructions expected to correspond to the same constant pool is partitioned into a single reference unit. Illustratively, a reference unit may be at least one of: a function, a combination of a preset number of instructions, or a combination of instructions corresponding to debug information.
In an embodiment of the present application, a constant pool may include the offset values of data referenced by one or more reference units, i.e., one or more reference units may correspond to one constant pool. The data in the same constant pool can be deduplicated, i.e., the offset values within one constant pool are all different, which reduces redundant data.
The embodiments of the present application do not limit the type of data whose offset values a constant pool includes: one constant pool may include the offset values of multiple types of data, or only the offset values of a certain type. The types here may include at least one of: a string, a method, a function, or a class.
The constant pools in the embodiments of the present application may be created as follows: when the storage margin of the already-created constant pool is insufficient, a new constant pool is created. In addition, some reference units may correspond to separate constant pools, i.e., a constant pool dedicated to storing the offset values of the data referenced by that reference unit, which maintains data independence and reduces memory overhead in runtime concurrency scenarios.
The embodiments of the present application do not limit the total number of offset values a constant pool supports storing (i.e., the length of the constant pool). Optionally, a constant pool supports storing at most 65536 offset values and, correspondingly, the reference bit width of reference instructions is at most 2 bytes. For example, if a constant pool supports storing 65536 offset values, the reference bit width of reference instructions is 2 bytes; for another example, if it supports storing 256 offset values, the reference bit width is 1 byte. In other words, under the bytecode file format of the embodiments of the present application, the constant pool may be given a length less than or equal to 65536, with the reference bit width of reference instructions correspondingly less than or equal to 2 bytes. When the length of the constant pool is less than 65536 and the reference bit width is less than 2 bytes, the smaller reference bit width further reduces the volume of the bytecode file.
One specific implementation of the bytecode file format of the embodiments of the present application is as follows: the bytecode file includes a constant pool table storing the offsets of one or more constant pools, and one or more reference units, each of which includes the index (or offset value) of its corresponding constant pool in the constant pool table; reference instructions in a reference unit reference data by the offset of the data in that constant pool. Optionally, the constant pool table may be stored in the header of the bytecode file. For this implementation, the flow of resolving an instruction reference is: first, read from the reference unit the index of its constant pool in the constant pool table; use that index to read the constant pool's offset from the constant pool table; read, from the array pointed to by the constant pool's offset, the offset corresponding to the index carried by the reference instruction; and obtain the referenced data from the offset just read.
Fig. 8 is an example of the bytecode file format of an embodiment of the present application, taking the reference unit to be a function. As shown in Fig. 8, the bytecode file includes multiple constant pools such as constant pool 0 and constant pool 1, and a constant pool table recording the offset of each constant pool. Function 1 includes the index of constant pool 1 in the constant pool table (i.e., 0x1), meaning that the offsets of the data referenced in function 1 are all stored in constant pool 1; the reference bit width of reference instructions in function 1 is 2 bytes, such as the 0x0002 of reference instruction 1 and the 0x0003 of reference instruction 2, where 0x0002 and 0x0003 are indexes of data offsets in constant pool 1. Function 2 includes the index of constant pool 0 in the constant pool table (i.e., 0x0), meaning that the offsets of the data referenced in function 2 are all stored in constant pool 0; the reference bit width of reference instructions in function 2 is 2 bytes, such as the 0x0002 of reference instruction 3 and the 0x0003 of reference instruction 4, where 0x0002 and 0x0003 are indexes of data offsets in constant pool 0.
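Resolution under this pool-table layout can be sketched as follows. The dict-based file, the pool offsets, and the referenced data string are illustrative assumptions, not the actual binary layout.

```python
# Hedged sketch of the Fig. 8 layout: the function stores an index into the
# constant pool table, and each reference instruction an index into the pool.
constant_pool_table = [0x0200, 0x0300]           # offsets of pools 0 and 1
file = {
    0x0200: [0x1000, 0x1010, 0x1020, 0x1030],    # constant pool 0
    0x0300: [0x2000, 0x2010, 0x2020, 0x2030],    # constant pool 1
    0x2020: "data for reference instruction 1",
}

def resolve(pool_table_index, instr_index):
    pool_offset = constant_pool_table[pool_table_index]  # pool table -> pool offset
    data_offset = file[pool_offset][instr_index]         # pool -> data offset
    return file[data_offset]                             # data offset -> data

# Function 1 carries table index 0x1; reference instruction 1 carries 0x0002.
assert resolve(0x1, 0x0002) == "data for reference instruction 1"
```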
Another specific implementation of the bytecode file format of the embodiments of the present application is as follows: the bytecode file includes one or more constant pools and one or more reference units, where each reference unit includes the offset value of its corresponding constant pool, and reference instructions in a reference unit reference data by the offset of the data in that constant pool. In this implementation, the header of the bytecode file need not include the offset of the constant pool. The flow of resolving an instruction reference may be: first, read the offset of the constant pool from the reference unit; then read, from the array pointed to by that offset, the offset corresponding to the index carried by the reference instruction; and obtain the referenced data from the offset just read. Because this implementation reads the constant pool's offset value directly from the reference unit, it resolves instructions faster than the implementation in which the reference unit includes an index into the constant pool table.
Fig. 9 is another example of the bytecode file format of an embodiment of the present application. Unlike Fig. 8, the bytecode file does not include a constant pool table, and each reference unit includes the offset of its constant pool instead of an index into a constant pool table. Specifically, taking the reference unit to be a function, as shown in Fig. 9, the bytecode file includes multiple constant pools such as constant pool 0 and constant pool 1. Function 1 includes the offset of constant pool 1, meaning that the offsets of the data referenced in function 1 are all stored in constant pool 1; the reference bit width of reference instructions in function 1 is 2 bytes, such as the 0x0002 of reference instruction 1 and the 0x0003 of reference instruction 2, where 0x0002 and 0x0003 are indexes of data offsets in constant pool 1. Function 2 includes the offset of constant pool 0, meaning that the offsets of the data referenced in function 2 are all stored in constant pool 0; the reference bit width of reference instructions in function 2 is 2 bytes, such as the 0x0002 of reference instruction 3 and the 0x0003 of reference instruction 4, where 0x0002 and 0x0003 are indexes of data offsets in constant pool 0.
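The corresponding resolution sketch, again with an illustrative dict-based file, shows the one-lookup saving relative to the pool-table layout:

```python
# Hedged sketch of the Fig. 9 variant: the function stores the pool's offset
# directly, so resolution skips the pool-table lookup entirely.
file = {
    0x0300: [0x2000, 0x2010, 0x2020, 0x2030],  # constant pool 1
    0x2020: "data for reference instruction 1",
}

def resolve(pool_offset, instr_index):
    data_offset = file[pool_offset][instr_index]  # pool -> data offset directly
    return file[data_offset]                      # data offset -> data

# Function 1 carries pool offset 0x0300; reference instruction 1 carries 0x0002.
assert resolve(0x0300, 0x0002) == "data for reference instruction 1"
```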
It should be noted that, in the embodiments of the present application, the array storing the offsets of data is called a constant pool and the array storing the offsets of constant pools is called a constant pool table, but in practice they may be given other names, which may vary across application scenarios.
With the bytecode file format provided by the present application, more data can be stored in one bytecode file. Most applications can therefore be compiled into a single bytecode file, which helps reduce I/O overhead and improve performance. Because this format does not increase the reference bit width of reference instructions, the bytecode file is smaller, reference instructions are read faster, and performance is better.
Fig. 10 is a schematic flowchart of a compiling method according to an embodiment of the present application. The method 1000 shown in Fig. 10 may be performed by a compiling apparatus, or by a module or unit (e.g., a compiler or a virtual machine compiler) in the compiling apparatus. Alternatively, the compiling apparatus may be a developer terminal, which may be the electronic device shown in Fig. 1. The instruction groups in method 1000, such as the first instruction group, the second instruction group, and the third instruction group, may correspond to the reference units described above; the first array, the second array, and the fourth array may correspond to the constant pools above; the third array may correspond to the constant pool table above.
In step 1010, a source code file is obtained.
In step 1020, the source code file is compiled to obtain a bytecode file.
The embodiments of the present application do not limit the specific implementation of compiling; for example, it may involve the parser, optimizer, and emitter described in Fig. 2, so long as a bytecode file satisfying the bytecode file format provided by the present application can be generated.
The byte code file comprises a first instruction group and a second instruction group, wherein the offset value of data referenced by a reference instruction in the first instruction group is stored in the first array, the offset value of data referenced by the reference instruction in the second instruction group is stored in the second array, and the second array is created when the storage margin of the first array is insufficient. For example, assume that the number of data referenced by the second instruction group is 50 and the maximum length of the first array is limited to 65536. When the second instruction group is compiled, the first array already stores the offsets of 65500 data, so the storage margin of the first array is 36, i.e., the first array can store only 36 more offsets, which is less than 50. In this case a new second array is created to store the offset values of the data referenced by the second instruction group.
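The creation rule can be sketched at compile time as follows, using the numbers from the example (first array at 65500 of 65536 entries; the next group needs 50 slots). The helper `place_offsets` and its return convention are assumptions for illustration, not the patent's actual compiler interface.

```python
# Hedged sketch: a new array is created when the current array's remaining
# capacity cannot hold the offsets an instruction group needs.
MAX_LEN = 65536

def place_offsets(arrays, offsets_needed):
    current = arrays[-1]
    if MAX_LEN - len(current) < len(offsets_needed):  # storage margin insufficient
        current = []                                  # create a new array
        arrays.append(current)
    base = len(current)
    current.extend(offsets_needed)
    return arrays.index(current), base                # which array, starting index

arrays = [[0] * 65500]                    # first array already holds 65500 offsets
idx, base = place_offsets(arrays, [0x10 * n for n in range(50)])
assert (idx, base) == (1, 0)              # a second array was created
assert len(arrays[0]) == 65500 and len(arrays[1]) == 50
```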
Optionally, the byte code file further includes a third array including an offset value of the first array and an offset value of the second array; the first instruction set includes an index of the first array in the third array; the second instruction set includes an index of the second array in the third array. The detailed description may refer to fig. 8, and will not be repeated here.
Optionally, the first instruction set includes an offset value of the first array; the second instruction set includes an offset value of the second array. The detailed description may refer to fig. 9, and will not be repeated here.
Optionally, the first array and/or the second array supports a total amount of stored offset values of less than or equal to 65536; the reference bit width of the reference instruction in the first instruction group and/or the reference bit width of the reference instruction in the second instruction group is less than or equal to 2 bytes. For example, the first array and/or the second array support a total amount of stored offset values of 65536, and correspondingly, reference instructions therein have a reference bit width of 2 bytes. For another example, the first array and/or the second array support a total amount of stored offset values of 256, and correspondingly, reference instructions therein have a reference bit width of 1 byte.
Optionally, the byte code file further includes a third instruction group, an offset value of data referenced by the reference instruction in the third instruction group is stored in a fourth array, and the fourth array is dedicated to storing the offset value of data referenced by the reference instruction in the third instruction group. In this way, independence of the data referenced in the third instruction set can be maintained in a runtime concurrency scenario, and memory overhead is reduced.
Optionally, the first instruction set and/or the second instruction set are: a function, a combination of a preset number of instructions, or a combination of instructions corresponding to debug information.
Optionally, the first array and/or the second array comprises offset values for multiple classes of data.
Optionally, the offset values included in the first array are different from each other; and/or the offset values included in the second array are different from each other. In other words, offset values within the same array may be deduplicated. For example, when compiling the first instruction group, it may be detected whether the offset values of some or all of the data it references are already included in the first array, and if so, they may be reused directly.
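The deduplication check can be sketched as an interning helper. The function name and the auxiliary index map are illustrative assumptions; only the reuse-if-present behavior comes from the text above.

```python
# Hedged sketch of per-array deduplication: before storing an offset, check
# whether the array already contains it and reuse its existing index.
def intern_offset(array, index_of, offset):
    if offset in index_of:              # offset already stored in this array
        return index_of[offset]
    index_of[offset] = len(array)       # record the new index, then append
    array.append(offset)
    return index_of[offset]

array, index_of = [], {}
i1 = intern_offset(array, index_of, 0x1000)
i2 = intern_offset(array, index_of, 0x2000)
i3 = intern_offset(array, index_of, 0x1000)   # duplicate: index 0 is reused
assert (i1, i2, i3) == (0, 1, 0)
assert array == [0x1000, 0x2000]              # stored offsets stay distinct
```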
In addition, in the scenario shown in fig. 3, after the bytecode file is obtained, an application installation package may be further generated according to the bytecode file.
The description of the bytecode file in method 1000 may be further referenced above and will not be described in detail herein.
Fig. 11 is a schematic flowchart of a parsing method provided in an embodiment of the present application. The method 1100 shown in Fig. 11 may be performed by a parsing apparatus, or by a module or unit (e.g., a virtual machine) in the parsing apparatus. Alternatively, the parsing apparatus may be a user terminal, which may be the electronic device shown in Fig. 1. The instruction groups in method 1100, such as the first instruction group, the second instruction group, and the third instruction group, may correspond to the reference units described above; the first array, the second array, and the fourth array may correspond to the constant pools above; the third array may correspond to the constant pool table above.
Step 1110, a bytecode file is obtained.
For the scenario shown in fig. 3, the bytecode file may be obtained by parsing the application installation package, for example.
The byte code file comprises a first instruction group and a second instruction group, wherein the offset value of data referenced by a reference instruction in the first instruction group is stored in the first array, the offset value of data referenced by the reference instruction in the second instruction group is stored in the second array, and the second array is created when the storage margin of the first array is insufficient.
In step 1120, the bytecode file is parsed.
Optionally, the bytecode file further includes a third array including the offset value of the first array and the offset value of the second array; the first instruction group includes the index of the first array in the third array; the second instruction group includes the index of the second array in the third array. In this case, parsing the bytecode file may include: taking a first reference instruction in the first instruction group as an example, when resolving the first reference instruction, first read the offset of the first array from the third array, using the index of the first array carried by the first instruction group; then read, from the array pointed to by that offset, the offset corresponding to the index carried by the reference instruction; and obtain the referenced data from the offset just read.
Optionally, the first instruction group includes the offset value of the first array, and the second instruction group includes the offset value of the second array. In this case, parsing the bytecode file may include: taking a first reference instruction in the first instruction group as an example, when resolving the first reference instruction, first read the offset of the first array from the first instruction group; then read, from the array pointed to by that offset, the offset corresponding to the index carried by the reference instruction; and obtain the referenced data from the offset just read.
The description of the bytecode file in method 1100 may be referred to above and will not be described in detail herein.
The method provided by the present application is described above in connection with fig. 10 and 11, and the device embodiment of the present application will be described below in connection with fig. 12 and 13.
It will be appreciated that, in order to implement the functions of the above-described method embodiments, the apparatus in fig. 12 and 13 includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and method steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application can divide the functional modules of the device according to the embodiment of the method, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one module. The modules may be implemented in hardware or in software functional modules. It should be noted that, in the embodiment of the present application, the division of the modules is schematic, which is merely a logic function division, and other division manners may be implemented in actual implementation.
Fig. 12 shows a schematic diagram of the composition of the apparatus 10 according to the embodiment of the present application, as shown in fig. 12, the apparatus 10 includes: an acquisition unit 11 and a processing unit 12.
When the apparatus 10 is used to implement the method shown in Fig. 10, the obtaining unit 11 is configured to obtain the source code file, i.e., to perform step 1010 of Fig. 10, and the processing unit 12 is configured to compile the source code file to obtain a bytecode file, i.e., to perform step 1020 of Fig. 10.
When the apparatus 10 is used to implement the method shown in Fig. 11, the obtaining unit 11 is configured to obtain the bytecode file, i.e., to perform step 1110 of Fig. 11, and the processing unit 12 is configured to parse the bytecode file, i.e., to perform step 1120 of Fig. 11.
It should be noted that, all relevant contents of each step related to the above method embodiment may be cited to the functional description of the corresponding functional module, which is not described herein. The apparatus 10 provided in the embodiment of the present application is configured to perform the method described above, so that the same effects as those of the method described above can be achieved.
Fig. 13 shows a schematic diagram of the composition of an apparatus 20 according to an embodiment of the present application. As shown in Fig. 13, the apparatus 20 includes a processor 21. The processor 21 is coupled to a memory 23, which stores instructions. When the apparatus 20 is used to implement the methods described above, the processor 21 executes the instructions in the memory 23 to implement the functions of the processing unit 12 described above.
Optionally, the apparatus 20 further comprises a memory 23.
Optionally, the apparatus 20 further comprises an interface circuit 22. The processor 21 and the interface circuit 22 are coupled to each other. It is understood that the interface circuit 22 may be a transceiver or an input-output interface. When the apparatus 20 is used to implement the method described above, the processor 21 is configured to execute instructions to implement the functions of the processing unit 12 described above, and the interface circuit 22 is configured to implement the functions of the acquisition unit 11 described above.
The embodiments of the present application also provide an electronic device, which includes a processor, a memory, an application program, and a computer program. These components may be connected by one or more communication buses. The one or more computer programs are stored in the memory and configured to be executed by the one or more processors, and include instructions that can be used to cause the electronic device to perform the steps of the method embodiments described above.
Illustratively, the processor may be specifically the processor 110 shown in fig. 1, and the memory may be specifically the internal memory 120 shown in fig. 1 and/or an external memory connected to the electronic device.
The embodiments of the present application also provide a chip, which includes a processor and a communication interface. The communication interface receives signals and transmits them to the processor, and the processor processes the signals so that the compiling method or parsing method in any of the possible implementations above is performed.
The present embodiment also provides a computer-readable storage medium having stored therein computer instructions that, when executed on an electronic device, cause the electronic device to execute the above-described related method steps to implement the compiling method or the parsing method in the above-described embodiments.
The present embodiment also provides a computer program product which, when run on a computer, causes the computer to perform the above-described relevant steps to implement the compiling method or the parsing method in the above-described embodiments.
In addition, embodiments of the present application also provide an apparatus, which may be embodied as a chip, component or module, which may include a processor and a memory coupled to each other; the memory is configured to store computer-executable instructions, and when the device is running, the processor may execute the computer-executable instructions stored in the memory, so that the chip executes the compiling method or the parsing method in the above method embodiments.
The embodiment of the application also provides a processing system, which comprises any device provided by the embodiment.
The explanation and beneficial effects of the related content in any of the above-mentioned devices can refer to the corresponding method embodiments provided above, and are not repeated here.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (20)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310154098.5A CN118519637A (en) | 2023-02-17 | 2023-02-17 | A compiling method, parsing method and device |
| PCT/CN2023/139471 WO2024169376A1 (en) | 2023-02-17 | 2023-12-18 | Compiling method, analyzing method, and apparatus |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310154098.5A CN118519637A (en) | 2023-02-17 | 2023-02-17 | A compiling method, parsing method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN118519637A (en) | 2024-08-20 |
Family
ID=92280012
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310154098.5A Pending CN118519637A (en) | 2023-02-17 | 2023-02-17 | A compiling method, parsing method and device |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN118519637A (en) |
| WO (1) | WO2024169376A1 (en) |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6857063B2 (en) * | 2001-02-09 | 2005-02-15 | Freescale Semiconductor, Inc. | Data processor and method of operation |
| US20100312991A1 (en) * | 2008-05-08 | 2010-12-09 | Mips Technologies, Inc. | Microprocessor with Compact Instruction Set Architecture |
| CN103914326B (en) * | 2014-04-21 | 2017-02-08 | 飞天诚信科技股份有限公司 | Method and device for efficiently updating JAVA instruction |
| CN104978182B (en) * | 2014-10-15 | 2018-05-22 | 武汉安天信息技术有限责任公司 | A kind of method and system that jar file is parsed into java |
| CN111782270B (en) * | 2020-06-09 | 2023-12-19 | Oppo广东移动通信有限公司 | A data processing method, device and storage medium |
| CN111880806B (en) * | 2020-07-23 | 2023-11-21 | 无锡融卡科技有限公司 | Application execution method and application execution system |
| CN112631722A (en) * | 2020-12-24 | 2021-04-09 | 北京握奇数据股份有限公司 | Byte code instruction set simplifying method and system |
2023
- 2023-02-17: CN application CN202310154098.5A filed (published as CN118519637A, status Pending)
- 2023-12-18: WO application PCT/CN2023/139471 filed (published as WO2024169376A1, status Ceased)
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024169376A1 (en) | 2024-08-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110675256B (en) | Method and apparatus for deploying and executing smart contracts | |
| US7376781B2 (en) | Virtual USB card reader with PCI express interface | |
| CN111651384B (en) | Register reading and writing method, chip, subsystem, register set and terminal | |
| CN112631657B (en) | Byte comparison method and instruction processing device for string processing | |
| US20030101208A1 (en) | JAVA DSP acceleration by byte-code optimization | |
| CN101751273A (en) | Safety guide device and method for embedded system | |
| US20240176914A1 (en) | Data Processing Method and Related Apparatus | |
| CN108694052A (en) | A kind of firmware upgrade method, device for upgrading firmware and firmware upgrade system | |
| US20080288919A1 (en) | Encoding of Symbol Table in an Executable | |
| CN105630530A (en) | Multilevel boot method and system of digital signal processor | |
| CN101794219A (en) | Compression method and device of .net files | |
| CN116467015B (en) | Mirror image generation method, system start verification method and related equipment | |
| US10656926B2 (en) | Compact type layouts | |
| CN118519637A (en) | A compiling method, parsing method and device | |
| US20140214434A1 (en) | Method for processing sound data and circuit therefor | |
| US20070079015A1 (en) | Methods and arrangements to interface a data storage device | |
| CN1514379A (en) | Mobile information device possessing embedded open platform system structure and its extension method | |
| CN114780120B (en) | Upgrade method, equipment and storage medium | |
| US7314180B2 (en) | Memory card and reproducing apparatus | |
| CN116521487A (en) | Log printing method, electronic device and storage medium | |
| CN101770368B (en) | Compressing method and compressing device of namespace in .net file | |
| CN116483736B (en) | Pile inserting position determining method and electronic equipment | |
| CN1234213C (en) | Apparatus and method for data transmission between storage medium and mobile communication terminal | |
| CN101794221A (en) | Compression method and device of reference types in .net file | |
| CN118672587A (en) | Intelligent contract compiling method, system, electronic equipment and object code running method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||