TW200529202A - Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method for reproducing text-based subtitle stream recorded on the storage medium - Google Patents
- Publication number
- TW200529202A (application number TW094105743A)
- Authority
- TW
- Taiwan
- Prior art keywords
- information
- text
- style
- data
- item
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 46
- 230000008859 change Effects 0.000 claims abstract description 27
- 239000000872 buffer Substances 0.000 claims description 41
- 230000000694 effects Effects 0.000 claims description 17
- 239000000463 material Substances 0.000 claims description 14
- 238000006243 chemical reaction Methods 0.000 claims description 13
- 230000008569 process Effects 0.000 claims description 11
- 230000001172 regenerating effect Effects 0.000 claims description 10
- 238000011069 regeneration method Methods 0.000 claims description 8
- 230000008929 regeneration Effects 0.000 claims description 7
- 239000003086 colorant Substances 0.000 claims description 5
- 239000011159 matrix material Substances 0.000 claims description 5
- 230000000717 retained effect Effects 0.000 claims description 4
- 230000001360 synchronised effect Effects 0.000 claims description 3
- 238000005034 decoration Methods 0.000 claims 1
- 238000000605 extraction Methods 0.000 claims 1
- 238000010586 diagram Methods 0.000 description 46
- 238000009877 rendering Methods 0.000 description 12
- 230000002452 interceptive effect Effects 0.000 description 5
- 230000036316 preload Effects 0.000 description 5
- 230000005540 biological transmission Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 230000003139 buffering effect Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
- G11B20/10—Digital recording or reproducing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2541—Blu-ray discs; Blue laser DVR discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/84—Television signal recording using optical recording
- H04N5/85—Television signal recording using optical recording on discs or drums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/8042—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/804—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
- H04N9/806—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal
- H04N9/8063—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components with processing of the sound signal using time division multiplex of the PCM audio and PCM video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
- H04N9/8227—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being at least another television signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/79—Processing of colour television signals in connection with recording
- H04N9/80—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
- H04N9/82—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
- H04N9/8205—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
- H04N9/8233—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal the additional signal being a character code signal
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Television Signal Processing For Recording (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
- Studio Circuits (AREA)
- Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
Description
200529202 16252pif.doc

IX. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to reproduction of multimedia images, and more particularly, to a storage medium recording a multimedia image stream and a text-based subtitle stream, and to a reproducing apparatus and a reproducing method for reproducing the text-based subtitle stream recorded on the storage medium.

[Prior Art]

To provide high-density multimedia images, a video stream, an audio stream, a presentation graphics stream providing subtitles, and an interactive graphics stream providing buttons and menus for interaction with the user are multiplexed into one main stream, known as an audio-visual (AV) data stream, and recorded on a storage medium. In particular, the presentation graphics stream for providing subtitles supplies bitmap images in order to display subtitles or captions on an image.

Besides its large size, bitmap-based caption data is troublesome to produce, and editing caption data that has already been produced is also quite difficult, because the caption data is multiplexed with the other stream data, such as the video, audio, and interactive graphics streams. A further problem is that the output style of the caption data cannot be changed in various ways; that is, one output style of a caption cannot be changed into another output style.

[Summary of the Invention]

An object of the present invention is to provide a storage medium recording a text-based subtitle stream, and an apparatus and a method for reproducing the text-based subtitle data recorded on such a storage medium.

According to an aspect of the present invention, there is provided an apparatus for reproducing data from a storage medium storing image data and text-based subtitle data for displaying captions on an image based on the image data. The apparatus includes a video decoder which decodes the image data, and a subtitle decoder which converts presentation information into bitmap images according to style information and controls output of the converted presentation information in synchronization with the decoded image data, wherein the text-based subtitle data includes the presentation information, which is a unit for displaying a caption, and the style information, which specifies the output style of the captions.
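As a rough, purely illustrative sketch of the apparatus just described (a video decoder handles the image data while a separate subtitle decoder renders text captions and outputs them in sync with the decoded frames), the division of labor might look like the following; every name and structure here is hypothetical, not the patented implementation.

```python
# Hypothetical sketch: the subtitle decoder converts presentation text into a
# (fake) bitmap using style information, and only outputs captions whose
# display window covers the current frame's presentation time.

def render_caption(text, style):
    """Convert caption text to a stand-in 'bitmap' using style information."""
    return {"text": text, "font": style["font"], "size": style["size"]}

def present(frame_pts, captions, style):
    """Return the rendered captions active at this frame's time stamp."""
    active = [c for c in captions if c["start"] <= frame_pts < c["end"]]
    return [render_caption(c["text"], style) for c in active]

style = {"font": "serif", "size": 32}
captions = [{"start": 0, "end": 100, "text": "Hello"},
            {"start": 150, "end": 200, "text": "World"}]
print(present(50, captions, style))   # "Hello" is active at time 50
print(present(120, captions, style))  # → []
```

The point of the split is the one the summary emphasizes: because caption rendering is driven by separate style and presentation data rather than pre-drawn bitmaps, the output style can be swapped without touching the image stream.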
The subtitle decoder decodes the text-based subtitle, which is recorded separately from the image data, and outputs the subtitle data overlaid on the decoded image data. The style information and the presentation information are formed in units of packetized elementary streams (PESs), and the subtitle decoder parses and processes the style information and the presentation information in units of PESs.

The style information is formed as one PES and is recorded at the front of the subtitle data, while a plurality of presentation information items follow the style information in units of PESs; the subtitle decoder applies the one style information item to the plurality of presentation information items.

The presentation information includes text information indicating the content of a caption, and composition information controlling output of the bitmap images obtained by converting the text information; the subtitle decoder controls the output time of the converted text information by referring to the composition information.

The presentation information may designate one or more window regions, which are the regions of the screen in which a caption is output, and the subtitle decoder outputs the converted text information in the one or more windows at the same time.

The output start time and the output end time of the presentation information, contained in the composition information, are defined as time information on the global time axis used in a playlist, which is a reproduction unit of the image data, and the subtitle decoder synchronizes the output of the converted text information with the output of the decoded image data by referring to the output start time and the output end time.

If the output end time of the presentation information item currently being reproduced is the same as the output start time of the next presentation information item, the subtitle decoder reproduces the two presentation information items continuously.

If the next presentation information item does not require continuous reproduction, the subtitle decoder resets its buffer in the interval between the output start time and the output end time; if continuous reproduction is required, the buffer is retained without being reset.

The style information is a set of output styles that are predefined by the producer of the storage medium and applied to the presentation information, and the subtitle decoder converts the presentation information items recorded after the style information into bitmap images according to the style information.

The text information in the presentation information includes text to be converted into bitmap images and in-line style information to be applied to a specific portion of the text; by applying the in-line style information only to that portion, in addition to the style information predefined by the producer, the subtitle decoder provides a function of emphasizing a portion of the text.

The subtitle decoder applies, as the in-line style information, a relative value with respect to the predefined font information, or a predefined absolute value included in the style information predefined by the producer, to the portion of the text.

The style information may further include a set of user-changeable style information items. On receiving from a user selection information for a changeable style information item, the subtitle decoder first applies the style information predefined by the producer, then applies the in-line style information, and finally applies to the text the user-changeable style information item corresponding to the selection information.

The subtitle decoder applies, as the user-changeable style information, relative values with respect to the font information defined in the style information predefined by the producer.

In addition to the style information predefined by the producer on the storage medium, style information may be predefined in the reproducing apparatus itself, and the subtitle decoder may apply this predefined style information to the text.

The style information also includes a set of color palettes, and the subtitle decoder converts all the presentation information items following the style information into bitmap images using the colors defined in the color palettes.

The presentation information may further include its own set of color palettes and a color update flag, separate from the set of color palettes included in the style information. When the color update flag is set to "1", the subtitle decoder applies the set of color palettes included in the presentation information; when the color update flag is set to "0", the subtitle decoder applies the original set of color palettes included in the style information.

By setting the color update flag to "1" and gradually changing the transparency values of the color palettes included in a series of continuously reproduced presentation information items, the subtitle decoder implements a fade-in/fade-out effect; when the fade-in/fade-out effect is completed, the subtitle decoder resets the color look-up table (CLUT) based on the original set of color palettes included in the style information.

The style information includes region information indicating the position of the window region in which the converted presentation information is to be output on the image, and font information required for converting the presentation information into bitmap images, and the subtitle decoder converts the presentation information into bitmap images by using the region information and the font information.

The font information includes at least one of an output start position of the converted presentation information, an output direction, alignment, line spacing, a font identifier, a font size, and a color, and the subtitle decoder converts the presentation information into bitmap images according to this font information.

The font files referred to by the subtitle decoder are recorded in the clip information file, which is a recording unit of the image data stored on the storage medium, and the subtitle decoder buffers the subtitle data and the referenced font files before the image data is reproduced.

When a plurality of subtitle data items supporting several languages are recorded on the storage medium, the subtitle decoder receives from the user selection information for a desired subtitle data item and reproduces the subtitle data item corresponding to the selection information.

According to another aspect of the present invention, there is provided a method of reproducing data from a storage medium storing image data and text-based subtitle data for displaying captions on an image based on the image data. The method includes decoding the image data; reading the style information and the presentation information; converting the presentation information into bitmap images according to the style information; and outputting the converted presentation information in synchronization with the decoded image data, wherein the text-based subtitle data includes the presentation information, which is a unit for displaying a caption, and the style information, which specifies the output style of the captions.

According to still another aspect of the present invention, there is provided a storage medium storing image data and text-based subtitle data for displaying captions on an image based on the image data, wherein the subtitle data includes one style information item indicating the output style of the captions and a plurality of presentation information items, each of which is a unit for displaying a caption, and the subtitle data is recorded separately from the image data.

Other objects and advantages of the present invention are described in the following detailed description and can be learned from the embodiments of the invention.

[Embodiments]

The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of a preferred embodiment, given with reference to the accompanying drawings.

Referring to FIG. 1, a storage medium according to an exemplary embodiment of the present invention (for example, the storage medium 230) is formed in a multi-layer structure to manage the data structure 100 of the multimedia image streams recorded thereon. The multimedia data structure 100 includes clips 110, which are recording units of the multimedia image; playlists 120, which are reproduction units of the multimedia image; movie objects 130, which include navigation commands for reproducing the multimedia image; and an index table 140, which specifies the movie object to be reproduced first and the titles of the movie objects 130.

A clip 110 is implemented as one object that includes a clip AV stream 112, an audio-visual (AV) data stream for a high-picture-quality movie, and clip information 114 corresponding to the AV data stream. The AV data stream may be compressed according to a standard such as the Motion Picture Experts Group (MPEG) standard; however, for the purposes of the present invention, the clip 110 does not require the AV data stream 112 to be compressed. The clip information 114 includes the audio/video properties of the AV data stream 112, an entry-point map, and the like.
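The layered style application described in the summary (producer-predefined region style first, then in-line style for a marked text span, then a user-selected changeable style applied as a relative adjustment to the predefined font information) can be sketched loosely as follows; the dictionary fields are illustrative only, not the on-disc format.

```python
# Hypothetical three-stage style application:
#   1. producer-predefined style, 2. in-line style, 3. user-changeable style.

def apply_styles(base, inline=None, user_delta=None):
    style = dict(base)                      # 1. producer-predefined style
    if inline:
        style.update(inline)                # 2. in-line style (emphasis span)
    if user_delta:
        # 3. user-changeable style, applied as a relative value
        style["size"] = style["size"] + user_delta.get("size", 0)
    return style

base = {"font": "sans", "size": 30, "color": "white"}
final = apply_styles(base, inline={"color": "yellow"}, user_delta={"size": 4})
print(final)  # {'font': 'sans', 'size': 34, 'color': 'yellow'}
```

The ordering matters: a user selection adjusts whatever the producer style and in-line emphasis have already produced, which matches the "predefined, then in-line, then user-changeable" sequence stated above.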
In the entry-point map, information about the positions of randomly accessible entry points is recorded in units of predefined sectors.

A playlist 120 is a set of reproduction intervals of these clips, and each reproduction interval is referred to as a playitem 122. A movie object 130 is formed of navigation programs that start playback according to the user's demand. The index table 140 includes the start positions of all the titles and menus, so that they can be reproduced through user operations such as a title search or a menu selection, and also includes the start position information of titles and menus that are reproduced automatically.

Among these items, the data structure of the clip AV stream, in which a multimedia image is compression-coded, and of the text-based subtitle stream will now be described with reference to FIG. 2. FIG. 2 is a diagram of an exemplary data structure of a clip AV data stream 210 and a text-based subtitle stream 220 according to an embodiment of the present invention.
Referring to FIG. 2, in order to solve the above-described problems of bitmap-based caption data, the text-based subtitle stream 220 according to an embodiment of the present invention is provided separately from the clip AV data stream 210 recorded on a storage medium 230, such as a digital versatile disc (DVD). The AV data stream 210 includes a video stream 202, an audio stream 204, a presentation graphics stream 206 for providing subtitle data, and an interactive graphics stream 208 for providing buttons and menus for interaction with the user; these streams are multiplexed into one moving-picture main stream, known as an audio-visual (AV) data stream, and recorded on the storage medium 230.

The text-based subtitle data 220 according to an embodiment of the present invention is data for providing the subtitles and captions of a multimedia image, recorded on the storage medium 230, and may be implemented using a markup language, such as the Extensible Markup Language (XML), or provided as binary data. Hereinafter, text-based subtitle data 220 that provides the subtitles and captions of a multimedia image using binary data is simply referred to as a "text-based subtitle stream". The presentation graphics stream 206 for providing subtitle data likewise provides bitmap-based subtitle data for displaying subtitles (or captions) on the screen.

Since the text-based subtitle stream 220 is recorded separately from the AV data stream 210 and is not multiplexed with the AV data stream 210, its size is not limited accordingly. As a result, subtitles and captions can be provided in a number of languages, and the text-based subtitle stream 220 can be reproduced continuously and edited effectively without any difficulty.

The text-based subtitle stream 220 is later converted into bitmap graphic images that are output on the screen, overlaid on the multimedia image. The process of converting text-based subtitle data into graphic bitmap images in this manner is called rendering, and the text-based subtitle stream 220 includes the information required for rendering the caption text.

The text-based subtitle stream 220 including the rendering information will now be described in detail with reference to FIG. 3. FIG. 3 is a diagram of the data structure of the text-based subtitle stream 220 according to an embodiment of the present invention.
200529202 16252pif.doc

Referring to FIG. 3, the text-based subtitle stream 220 according to an embodiment of the present invention includes a dialog style unit (DSU) 310 and a plurality of dialog presentation units (DPUs) 320 through 340. The DSU 310 and the DPUs 320 through 340 are also referred to as dialog units. Each of the dialog units 310 through 340 forming the text-based subtitle stream 220 is stored in the form of a packetized elementary stream (PES), or simply a PES packet, 350. Likewise, the PES packets of the text-based subtitle stream 220 are recorded and transmitted in units of transport packets (TPs) 362, and a sequence of consecutive TPs is referred to as a transport stream (TS). However, as shown in FIG. 2, the text-based subtitle stream 220 is not multiplexed with the AV data stream 210 and is recorded on the storage medium 230 as a separate TS.

Referring to FIG. 3, one dialog unit is recorded in one PES packet 350 included in the text-based subtitle stream 220. The text-based subtitle stream 220 includes one DSU 310 placed at the front and a plurality of DPUs 320 through 340 following the DSU 310. The DSU 310 includes information specifying the output style of the dialog in the subtitle displayed on the screen on which the multimedia image is reproduced. Meanwhile, the DPUs 320 through 340 include text information items for the dialog contents to be displayed and information on the respective output times.

FIG. 4 is a diagram illustrating a text-based subtitle stream 220 having the data structure of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 4, the text-based subtitle stream 220 includes one DSU 410 and a plurality of DPUs 420. In this exemplary embodiment, the number of DPUs can be defined by a dedicated field. The number of DPUs need not be specified explicitly, however; an example of such a case uses a syntax such as while(processed_length < end_of_file).

The data structures of the DSU and the DPU are described in detail with reference to FIG. 5. FIG. 5 is a diagram illustrating the dialog style unit of FIG. 3 according to an embodiment of the present invention. Referring to FIG. 5, a set of dialog style information items, dialog_styleset() 510, is defined in the DSU 310; in it, the output style information items of the dialogs to be displayed as subtitles are collected. The DSU 310 includes information on the position of the region of the dialog within the subtitle, information needed for rendering the dialog, information on the styles that the user can control, and so on. The details are described below.

FIG. 6 is a diagram illustrating an example data structure of a dialog style unit (DSU) according to an embodiment of the present invention. Referring to FIG. 6, the DSU 310 includes a palette collection 610 and a region style collection 620. The palette collection 610 is a set of color palettes that define the colors used in subtitles. The color combinations and color information (such as transparency) included in the palette collection 610 are applied to all of the DPUs placed after the DSU.
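The DSU layout of FIG. 6 (a palette collection plus a region style collection that later DPUs reference by identifier) can be modeled as a minimal sketch. The field names below are illustrative assumptions: the patent names the structures (palette collection 610, region style collection 620) but this passage does not give an exact syntax.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class PaletteEntry:
    # Hypothetical per-entry fields; the text only says palettes carry
    # color combinations and transparency.
    entry_id: int
    y: int       # luma
    cr: int
    cb: int
    alpha: int   # transparency; 0 = fully transparent


@dataclass
class RegionStyle:
    region_style_id: int
    # region information 622: window position/size on the screen
    region_x: int
    region_y: int
    region_width: int
    region_height: int
    # a slice of text style information 624 (font_id references a font file)
    font_id: int
    font_size: int


@dataclass
class DialogStyleUnit:
    """DSU 310 in the FIG. 6 variant: palettes live in the DSU."""
    palette_collection: List[PaletteEntry] = field(default_factory=list)
    region_styles: List[RegionStyle] = field(default_factory=list)

    def style(self, region_style_id: int) -> RegionStyle:
        # DPUs refer to a region style by its identifier
        return next(s for s in self.region_styles
                    if s.region_style_id == region_style_id)


dsu = DialogStyleUnit(
    palette_collection=[PaletteEntry(0, 235, 128, 128, 255)],
    region_styles=[RegionStyle(0, 100, 800, 1720, 200,
                               font_id=1, font_size=32)],
)
```

Because the palettes sit in the DSU here, every DPU that follows shares them, which is exactly why the FIG. 7 variant (palettes moved into the DPU) is needed for per-DPU color changes.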
The region style collection 620 is a set of output style information for the individual dialogs that form subtitles. Each region style includes region information 622 indicating the position of the dialog displayed on the screen, text style information 624 indicating the output style to be applied to each dialog text, and a user changeable style collection 626 indicating the styles that the user may freely change for each dialog text.

FIG. 7 is a diagram illustrating an example data structure of a dialog style unit according to another embodiment of the present invention. Referring to FIG. 7, unlike FIG. 6, no palette collection 610 is included. That is, a color palette collection is not defined in the DSU 310; instead, the palette collection is defined in the DPU, as described with reference to FIGS. 12A and 12B. The data structure of each region style 710 is the same as the data structure described in FIG. 6.

FIG. 8 is a diagram illustrating the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention. Referring to FIGS. 8 and 6, the DSU 310 includes a palette collection 860 (610) and a plurality of region styles 820 (620). As described above, the palette collection is a set of color palettes that define the colors used in subtitles, and the color combinations and color information (such as transparency) included in it are applied to all of the DPUs placed after the DSU.

Meanwhile, each region style 820 (620) includes region information 830 (622) indicating the window region in which the subtitle is to be displayed on the screen, such as the X and Y coordinates, width, height, and background color of that window region.
Likewise, each region style 820 (620) includes text style information 840 (624) indicating the output style to be applied to each dialog text: the X and Y coordinates of the position at which the dialog text is displayed within the window region, the output direction (for example, left to right or top to bottom), alignment, line spacing, the identifier of the font to be referenced, the font style (for example, bold or italic), the font size, the font color, and so on.

Furthermore, each region style 820 (620) also includes a user changeable style collection 850 (626) indicating the styles that the user may freely change. The user changeable style collection 850 (626) is optional. It may include change information for the window region position, the text output position, the font size, the line spacing, and the like, among the text output style information items 840 (624). Each change information item can be expressed as a relative increase or decrease of the corresponding value in the output style 840 (624) applied to each dialog text.

In summary, there are three types of style-related information: the region style information (region_style) defined in the region styles 820 and 620, the inline style information (inline_style) 1510 used to emphasize part of a subtitle (described later), and the user changeable style information (user_changeable_style) 850. These information items are applied in the following order:

1) Basically, the region style information defined in the region style 620 is applied.
2) If there is inline style information, the inline style information 1510 is applied to the corresponding part, overriding the applied region style, to emphasize that part of the subtitle text.
3) If there is user changeable style information 850, it is applied last. The presence of user changeable style information is not mandatory.

Meanwhile, among the text style information items 840 and 624 to be applied to each dialog text, the font file information referenced by the font identifier (font_id) 842 can be defined as follows.

FIG. 9A is a diagram illustrating an example clip information file 910 including a plurality of font collections referenced by the font information 842 of FIG. 8 according to an embodiment of the present invention. Referring to FIGS. 9A, 8, 2, and 1, StreamCodingInfo() 930, a stream coding information structure included in the clip information files 910 and 110 according to the present invention, carries various information on the streams recorded on the storage medium: information on the video stream 202, the audio stream, the presentation graphics stream, the interactive graphics stream, and the text-based subtitle stream. In particular, for the text-based subtitle stream 220, it includes information on the language in which the subtitle is displayed (textST_language_code) 932. Likewise, the font name 936 and the file name 938 of the file storing the font information can be defined, corresponding to font_id 842 and 934, which indicate the identifier of the font to be referenced, as shown in FIG. 8. A method of finding the font file referenced by the font identifier defined here is described in detail with reference to FIG. 10.

FIG. 9B is a diagram illustrating an example clip information file 940 including a plurality of font collections referenced by the font information 842 of FIG. 8 according to another embodiment of the present invention.
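The three-layer application order listed above (region style first, inline style over the emphasized span, user changeable changes last) can be sketched as follows, treating styles as plain dictionaries and user changes as relative increments, as the text describes. The dictionary keys are illustrative, not field names from the patent.

```python
def effective_style(region_style, inline_style=None, user_style=None):
    """Combine the three style layers in the order the text describes:
    1) region_style as the base, 2) inline_style overriding the base for
    an emphasized span, 3) user changeable style applied last, expressed
    as relative increases/decreases of the base values."""
    style = dict(region_style)            # 1) region style (base)
    if inline_style:
        style.update(inline_style)        # 2) inline style overrides
    if user_style:
        for key, delta in user_style.items():
            style[key] = style.get(key, 0) + delta  # 3) relative +/- deltas
    return style


base = {"font_size": 32, "line_space": 40, "font_style": "normal"}
# An emphasized span with a user-selected +4 font-size change:
emphasized = effective_style(base,
                             inline_style={"font_style": "bold"},
                             user_style={"font_size": +4})
```

Note that the user changeable layer is modeled as deltas rather than absolute values, matching the statement that each change item is a relative increase or decrease of the output style value.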
Referring to FIG. 9B, a structure ClipInfo() is defined in the clip information files 910 and 110. In this structure, the plurality of font collections referenced by the font information 842 of FIG. 8 are defined. That is, the font file name 952 corresponding to font_id 842, which indicates the identifier of the font to be referenced as shown in FIG. 8, is specified. A method of finding the font file referenced by the font identifier defined here is described in detail below.

FIG. 10 is a diagram showing the locations of the font files referenced by the font file names 938 and 952 of FIGS. 9A and 9B. Referring to FIG. 10, a directory structure for the multimedia data recorded on the storage medium according to an embodiment of the present invention is shown. By using the directory structure, the location of a font file such as 11111.font 1010 or 99999.font 1020, stored for example under an auxiliary data directory, can be found.

The structure of the DPU, that is, the dialog presentation unit, is described in detail with reference to FIG. 11. FIG. 11 is a diagram illustrating an example data structure of the DPU 320 according to an embodiment of the present invention. Referring to FIGS. 11 and 3, the DPU 320, which includes the text information of the dialog contents to be output together with its display time, includes time information 1110 indicating the time at which the dialog is output on the screen, palette reference information 1120 specifying the color palette to be referenced, and dialog region information 1130 for the dialog to be output on the screen. In particular, the dialog region information 1130 includes style reference information 1132 specifying the output style to be applied to the dialog and dialog text information 1134 indicating the dialog text to be output on the screen. In this case, it is assumed that the color palette collection indicated by the palette reference information 1120 is defined in the DSU (for example, 610 of FIG. 6).

Meanwhile, FIG. 12A is a diagram illustrating an example data structure of the DPU 320 according to another embodiment of the present invention. Referring to FIGS. 12A and 3, the DPU 320 includes time information 1210 indicating the time at which the dialog is output on the screen, a palette collection 1220 defining color palettes, and dialog region information 1230 for the dialog to be output on the screen. In this case, the palette collection 1220 is not defined in the DSU as in FIG. 11 but is defined directly in the DPU 320.

Meanwhile, FIG. 12B is a diagram illustrating an example data structure of the DPU 320 according to yet another embodiment of the present invention. Referring to FIG. 12B, the DPU 320 includes time information 1250 indicating the time at which the dialog is output on the screen, a color update flag 1260, a color palette collection 1270 that is required when the color update flag is set to 1, and dialog region information 1280 for the dialog to be output on the screen. In this case, a color palette collection is also defined in the DSU as in FIG. 11, and is stored in the DPU 320 as well. In particular, in order to express fade-in/fade-out using continuous reproduction, the palette collection 1270 for the fade-in/fade-out is defined in the DPU 320 in addition to the basic palette defined in the DSU, and the color update flag 1260 is set to 1. This is described in detail with reference to FIG. 19.

FIG. 13 is a diagram illustrating the DPU 320 of FIGS. 11 through 12B according to an embodiment of the present invention. Referring to FIGS. 13, 11, 12A, and 12B, the DPU includes dialog start time information (dialog_start_PTS) and dialog end time information (dialog_end_PTS) 1310 as the time information 1110 indicating the time at which the dialog is output on the screen. Likewise, a dialog palette identifier (dialog_palette_id) is included as the palette reference information 1120. In the case of FIG. 12A, the color palette collection 1220 may be included instead of the palette reference information 1120. Dialog text information (region_subtitle) 1334 is included as the dialog region information 1230 for the dialog to be output, and a region style identifier (region_style_id) 1332 is also included to specify the output style applied to it. The example of FIG. 13 is only one embodiment of the DPU, and a DPU having the data structure shown in FIGS. 11 through 12B can be implemented with various modifications.

FIG. 14 is a diagram illustrating an example data structure of the dialog text information (region_subtitle) of FIG. 13.
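Putting the FIG. 13 fields together, a DPU can be modeled as a short sketch. The identifiers (dialog_start_PTS, dialog_end_PTS, dialog_palette_id, region_style_id, region_subtitle) are the ones the text names; the container types and units are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class RegionSubtitle:
    region_style_id: int   # 1332: which region style in the DSU to apply
    text: str              # 1334: the dialog text itself


@dataclass
class DialogPresentationUnit:
    """DPU per FIG. 13. PTS values are assumed to be integer ticks on
    the playlist's global timeline (see the FIG. 20 discussion)."""
    dialog_start_pts: int              # when the dialog appears
    dialog_end_pts: int                # when the dialog disappears
    dialog_palette_id: Optional[int]   # 1120: references a DSU palette
    regions: List[RegionSubtitle]      # one entry per dialog region


dpu = DialogPresentationUnit(
    dialog_start_pts=90_000,
    dialog_end_pts=180_000,
    dialog_palette_id=0,
    regions=[RegionSubtitle(region_style_id=0, text="Hello.")],
)
```

In the FIG. 12A variant, dialog_palette_id would be replaced by an embedded palette collection; in the FIG. 12B variant, a color update flag plus an embedded palette would be added alongside the DSU palettes.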
Referring to FIG. 14, the dialog text information (1134 of FIG. 11, 1234 of FIG. 12A, 1284 of FIG. 12B, and 1334 of FIG. 13) includes inline style information 1410, an output style for emphasizing part of the dialog, and dialog text 1420.

FIG. 15 is a diagram illustrating the dialog text information 1334 according to an embodiment of the present invention. As shown in FIG. 15, the dialog text information 1334 is implemented as inline style information (inline_style) 1510 and dialog text (text_string) 1520. Likewise, it is preferable that information indicating the end of an inline style be included in the embodiment of FIG. 15. Unless the end of an inline style is defined, an inline style, once specified, may continue to be applied afterward, contrary to what the producer intended.

Meanwhile, FIG. 16 is a diagram illustrating constraints on continuously reproducing successive dialog presentation units (DPUs). Referring to FIGS. 16 and 13, the following constraints are required when the DPUs described above are to be reproduced continuously.

1) The dialog start time information (dialog_start_PTS) 1310 defined in the DPU indicates the time at which the dialog object starts to be output on the graphics plane (GP). The graphics plane (GP) is described in detail below with reference to FIG. 17.
2) The dialog start time information (dialog_start_PTS) 1310 defined in the DPU indicates a time at which the text subtitle decoder that processes the text-based subtitle is reset. The text subtitle decoder is described in detail below with reference to FIG. 17.
3) When the DPUs described above are to be reproduced continuously, the dialog end time information (dialog_end_PTS) of the current DPU should be identical to the dialog start time information (dialog_start_PTS) of the next DPU to be reproduced continuously. That is, in FIG. 16, in order to reproduce DPU #2 and DPU #3 continuously, the dialog end time information included in DPU #2 should be identical to the dialog start time information included in DPU #3.

Meanwhile, it is preferable that the DSU according to the present invention satisfy the following constraints:
1) The text-based subtitle stream 220 includes one DSU.
2) The user changeable style information items (user_control_style) included in all region styles (region_style) should be the same.

Meanwhile, it is preferable that the DPU according to the present invention satisfy the following constraint:
1) Window regions for at least two subtitles should be defined.

The structure of an example reproducing apparatus that reproduces the text-based subtitle stream 220 according to the data structure recorded on the storage medium is described with reference to FIG. 17, as follows.

FIG. 17 is a diagram illustrating an example reproducing apparatus for a text-based subtitle stream according to an embodiment of the present invention. Referring to FIG. 17, the reproducing apparatus 1700 (a so-called playback apparatus) includes a buffer unit and a text subtitle decoder 1730. The buffer unit includes a font preloading buffer (FPB) 1712 for storing font files and a subtitle preloading buffer (SPB) 1710 for storing text-based subtitle files. The text subtitle decoder 1730 decodes and reproduces, for output, the text-based subtitle stream recorded in advance on the storage medium, using a graphics plane (GP) 1750 and a color look-up table (CLUT) 1760.

In particular, the subtitle preloading buffer (SPB) 1710 preloads the text-based subtitle data stream 220, and the font preloading buffer (FPB) 1712 preloads the font information.

The text subtitle decoder 1730 includes a text subtitle processor 1732, a dialog composition buffer (DCB) 1734, a dialog buffer (DB) 1736, a text subtitle renderer 1738, a dialog presentation controller 1740, and a bitmap object buffer (BOB) 1742.

The text subtitle processor 1732 receives the text-based subtitle data stream 220 from the subtitle preloading buffer (SPB) 1710, transfers the style-related information included in the DSU and the dialog output time information included in the DPU to the dialog composition buffer (DCB) 1734, and transfers the dialog text information included in the DPU to the dialog buffer (DB) 1736.

The dialog presentation controller 1740 controls the text subtitle renderer 1738 by using the style-related information included in the dialog composition buffer (DCB) 1734, and, by using the dialog output time information, controls the time at which the bitmap image rendered in the bitmap object buffer (BOB) 1742 is output to the graphics plane (GP) 1750.

Under the control of the dialog presentation controller 1740, the text subtitle renderer 1738 converts the dialog text information into a bitmap image (that is, performs rendering) by applying, to the dialog text information stored in the dialog buffer (DB) 1736, the corresponding font information item among the font information items preloaded in the font preloading buffer (FPB) 1712. The rendered bitmap image is stored in the bitmap object buffer (BOB) 1742 and is output to the graphics plane (GP) 1750 under the control of the dialog presentation controller 1740. At this time, the colors specified in the DSU are applied by referring to the color look-up table (CLUT) 1760.

As the style-related information applied to the dialog text, the information defined in the DSU by the producer can be used, and style-related information predefined by the user can also be applied. The reproducing apparatus 1700 shown in FIG. 17 applies the user-defined style information in preference to the style-related information defined by the producer.
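The third constraint above (the end time of each DPU must equal the start time of the next for continuous reproduction) can be checked mechanically. A minimal sketch, assuming PTS values are plain integers (for example 90 kHz ticks) and representing each DPU as a (start_pts, end_pts) pair:

```python
def check_continuous(dpus):
    """Return True if consecutive DPUs satisfy the FIG. 16 constraint:
    each DPU's dialog_end_PTS equals the next DPU's dialog_start_PTS.
    Only then may the decoder and graphics plane skip the reset between
    them and keep the buffer contents."""
    for (_, end_pts), (next_start_pts, _) in zip(dpus, dpus[1:]):
        if end_pts != next_start_pts:
            return False
    return True


# DPU #2 ends exactly where DPU #3 begins, so they are contiguous:
contiguous = check_continuous([(0, 90_000), (90_000, 180_000)])
# A one-tick gap breaks continuity and forces a reset:
gapped = check_continuous([(0, 90_000), (90_001, 180_000)])
```

A player would run such a check when deciding whether to carry the DCB, DB, and BOB contents across a DPU boundary or to reset them.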
As described with reference to FIG. 8, the region style information (region_style) defined in the DSU by the producer is basically applied as the style-related information for the dialog text. If a DPU containing dialog text to which a region style is applied also includes inline style information (inline_style), the inline style information is applied to the corresponding part. Likewise, if the producer has additionally defined user changeable styles in the DSU and the user selects one of them, the region style or inline style is applied first and the selected user changeable style information is applied last. Likewise, as in FIG. 15, it is preferable that information indicating the end of an inline style be included in the contents of the inline style.

Furthermore, the producer can specify whether style-related information defined in the reproducing apparatus itself may be used; such information is separate from the style-related information defined by the producer and recorded on the storage medium.

FIG. 18 is a diagram illustrating a procedure of preloading the text-based subtitle stream 220 in the example reproducing apparatus 1700 (for example, shown in FIG. 17) according to an embodiment of the present invention. Referring to FIG. 18, the text-based subtitle stream 220 shown in FIG. 2 is defined in a sub-path of the playlist described above. In the sub-path, a plurality of text-based subtitle streams 220 supporting multiple languages can be defined. Likewise, the font files applied to a text-based subtitle can be defined in the clip information file 910 or 940 described with reference to FIGS. 9A and 9B. Up to 255 text-based subtitle streams 220, which can be included on one storage medium, can be defined in each playlist. Likewise, up to 255 font files included on one storage medium can be defined. However, in order to guarantee seamless presentation, the size of a text-based subtitle stream 220 should be less than or equal to the size of the preloading buffer 1710 of the reproducing apparatus 1700 (for example, shown in FIG. 17).

FIG. 19 is a diagram illustrating a procedure of reproducing a DPU in the example reproducing apparatus according to an embodiment of the present invention. Referring to FIGS. 19, 13, and 17, the flow of reproducing a DPU is shown. The dialog presentation controller 1740 controls the time at which a rendered dialog is output on the graphics plane (GP) 1750 by using the dialog start time information (dialog_start_PTS) and the dialog end time information (dialog_end_PTS) that specify the output time 1310 of the dialog included in the DPU. Here, the dialog start time information specifies the time at which transfer of the rendered dialog bitmap image, stored in the bitmap object buffer (BOB) 1742 included in the text subtitle decoder 1730, to the graphics plane (GP) 1750 is completed. That is, at the dialog start time defined in the DPU, the bitmap information needed to compose the dialog has been transferred to the graphics plane (GP) 1750 and is ready to be used. Likewise, the dialog end time information specifies the time at which reproduction of the DPU is completed. At this time, the text subtitle decoder 1730 and the graphics plane (GP) 1750 are reset. Preferably, the buffers of the text subtitle decoder 1730 (such as the bitmap object buffer (BOB) 1742) are also reset between the start time and the end time of a DPU.
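The preloading conditions described for FIG. 18 (at most 255 text-based subtitle streams per playlist, and each stream no larger than the subtitle preloading buffer, to guarantee seamless presentation) can be sketched as a simple check. The function and parameter names are illustrative, not from the patent.

```python
def can_preload(subtitle_stream_sizes, buffer_size, max_streams=255):
    """Check the FIG. 18 preloading constraints: a playlist may define at
    most 255 text-based subtitle streams, and each stream must fit
    entirely in the subtitle preloading buffer (SPB) so it can be loaded
    before playback starts. Sizes are in bytes."""
    if len(subtitle_stream_sizes) > max_streams:
        return False
    return all(size <= buffer_size for size in subtitle_stream_sizes)


# Two small streams fit a 1 MB preloading buffer:
ok = can_preload([100_000, 250_000], buffer_size=1_000_000)
# A 2 MB stream cannot be preloaded into a 1 MB buffer:
too_big = can_preload([2_000_000], buffer_size=1_000_000)
```

An authoring tool would apply this check at mux time, since a stream that overflows the preloading buffer cannot be presented seamlessly by the FIG. 17 decoder model.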
However, when several DPUs are to be reproduced continuously, the text subtitle decoder 1730 and the graphics plane (GP) 1750 are not reset, and the contents stored in each buffer (such as the dialog composition buffer (DCB) 1734, the dialog buffer (DB) 1736, and the bitmap object buffer (BOB) 1742) are retained. That is, when the dialog end time information of the DPU currently being reproduced is identical to the dialog start time information of the DPU to be reproduced continuously after it, the contents of each buffer are retained instead of being reset.

In particular, a fade-in/fade-out effect is an example that applies the continuous reproduction of several DPUs. A fade-in/fade-out effect can be implemented by changing the color look-up table (CLUT) 1760 of the bitmap object that has been transferred to the graphics plane (GP) 1750. That is, the first DPU includes composition information such as colors, styles, and output times, and the following successive DPUs have the same composition information as the first DPU but update only the color palette information. In this case, a fade-in/fade-out effect can be implemented by gradually changing the transparency, starting from 0%, among the color information items.

In particular, when the DPU data structure shown in FIG. 12B is used, a fade-in/fade-out effect can be implemented efficiently by using the color update flag 1260. The dialog presentation controller 1740 checks and confirms the color update flag included in the DPU. In the normal case, in which no fade-in/fade-out effect is needed, the flag is set to "0" and the color information in the DSU shown in FIG. 6 is basically used. However, when the color update flag included in the DPU is "1", that is, when a fade-in/fade-out effect is needed, the fade-in/fade-out effect is implemented by using the color information 1270 included in the DPU instead of the color information in the DSU shown in FIG. 6. At this time, the fade-in/fade-out effect is implemented simply by adjusting the transparency of the color information 1270 included in the DPU.

After the fade-in/fade-out effect is displayed, it is preferable to update the color look-up table (CLUT) 1760 back to the original color information included in the DSU. This is because, unless the color look-up table (CLUT) 1760 is updated, the color information, once specified, may continue to be applied, contrary to the producer's intention.

FIG. 20 is a diagram illustrating a procedure of synchronizing and outputting a text-based subtitle stream with moving picture data in the example reproducing apparatus according to an embodiment of the present invention.
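The fade-in described above (successive DPUs that keep the same composition but carry palettes whose transparency rises step by step) can be sketched as follows. The palette representation is a hypothetical one, with alpha 0 meaning fully transparent and the final step restoring the alpha defined in the DSU palette.

```python
def fade_palettes(base_palette, steps):
    """Generate the per-DPU palette updates for a fade-in: each successive
    DPU carries the same entries as the DSU palette, but with alpha scaled
    from 0 (fully transparent) up to the original value. `steps` must be
    at least 2 (first and last frame)."""
    frames = []
    for i in range(steps):
        scale = i / (steps - 1)  # 0.0 on the first frame, 1.0 on the last
        frames.append([
            {**entry, "alpha": round(entry["alpha"] * scale)}
            for entry in base_palette
        ])
    return frames


# The DSU defines the target colors; the fade DPUs only update alpha
# (color_update_flag = 1 in the FIG. 12B structure):
dsu_palette = [{"entry_id": 0, "alpha": 255}]
fade_in = fade_palettes(dsu_palette, steps=5)
```

After the last step, the player would switch the CLUT back to the DSU palette, matching the text's note that the original color information should be restored once the effect ends.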
Schematic diagram of the process of synchronizing and outputting the text subtitle stream and dynamic day data in the device. Please refer to Figure 20, including the dialogue start time information and dialogue end time information of the dpu in the text subtitle data stream 22 °. The time point on the global timeline in the playlist to synchronize with the output time of the AV lean stream of the multimedia image. Therefore, the system time clock (STC) of the AV data stream can be avoided The discontinuity between the dialog output time (PTS) with the text-type sub-cursor data string / claw 220. Figure 21 is used to illustrate the input = text subtitle string in an exemplary playback device according to an embodiment of the present invention. Schematic diagram of the program that flows to the screen. Ϋ́ 月 见 知, Figure 21 'It shows the process of applying rendering information including style-related information 21〇ι, text information 2140 conversion The process of forming a bitmap shirt like 2106 and the output position information (such as region_horizontal_p0siti〇n and region_vertical_position) included in the combination information 2108 28 200529202 16252pif.doc will be The process of outputting the converted bitmap image at the corresponding position on the graphics plane (GP) 1750.
Rendering information 2102 presents style information,
such as the width and height of a region, the foreground color, the background color, the font name, and the font size. As described above, the composition information 2108 indicates the start time and end time of presentation, the horizontal and vertical position of the window region in which the subtitle is output on the graphics plane (GP) 1750, and so on.

FIG. 22 is a schematic diagram illustrating a process of rendering the text subtitle stream 220 in the reproducing apparatus 1700 (shown in FIG. 17) according to an embodiment of the present invention. Referring to FIGS. 22, 21, and 8, the window region specified by region_horizontal_position, region_vertical_position, region_width, and region_height is designated as the region of the graphics plane (GP) 1750 in which the subtitle is displayed; these fields form the position information 830 that defines the subtitle's window region in the DSU. The bitmap image of a rendered dialog is displayed from the start point given by the region_horizontal_position and region_vertical_position of the dialog output position information 840 within the window region.

Meanwhile, the reproducing apparatus according to the present invention stores the style information (style_id) selected by the user in a system register area. FIG. 23 is a schematic diagram illustrating example status registers configured in an example reproducing apparatus for reproducing a text subtitle stream according to an embodiment of the present invention. Referring to FIG. 23, the player status registers (PSRs) store the style information selected by the user in the twelfth register (selected style 2310). Thus, the selected style persists even when the reproducing apparatus 1700 (shown in FIG. 17) performs a menu call or another operation and the user then presses the style change button.
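The register behaviour described above can be sketched as follows. This is an illustrative model only, not the player's normative register file: the registers are reduced to a Python dictionary, and the method names (`select_style`, `current_style`) are invented for the sketch; only the use of register 12 (PSR12) to hold the selected style comes from the text.

```python
class PlayerStatusRegisters:
    """Minimal model of the player status registers (PSRs).

    PSR12 holds the style_id the user last selected, so the choice
    survives menu calls and other operations and can be re-applied
    when subtitle presentation resumes.
    """
    STYLE_PSR = 12

    def __init__(self):
        self._regs = {}

    def select_style(self, style_id):
        # User presses the style-change button: record the choice.
        self._regs[self.STYLE_PSR] = style_id

    def current_style(self, default_style_id):
        # Fall back to the stream's default style if the user never chose one.
        return self._regs.get(self.STYLE_PSR, default_style_id)


player = PlayerStatusRegisters()
player.select_style(3)
# ... a menu call or other operation happens here ...
resumed_style = player.current_style(default_style_id=0)
```

Because the choice lives in a register rather than in the subtitle decoder's transient state, resuming presentation only requires reading PSR12 back.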
The style information previously selected by the user is applied by referring to PSR12, and the register holding this information is updated whenever the user selects a different style.

The storage medium recording the text subtitle stream 220 and the reproducing apparatus for reproducing the text subtitle stream 220 have been described above. FIG. 24 is a flowchart of a method of reproducing the text subtitle stream 220 according to an embodiment of the present invention. First, the text subtitle stream 220, consisting of DSU information and DPU information, is read from the storage medium 230 (shown in FIG. 2). Next, the subtitle text included in the DPU information is converted into a bitmap image according to the rendering information included in the DSU information. Finally, in operation 2430, the converted bitmap image is output on the screen according to the time information and position information that form the composition information included in the DPU information.

As described above, the present invention provides a storage medium on which a text subtitle stream is stored separately from the image data, together with an apparatus and method for reproducing the text subtitle stream, so that producing subtitles and editing already-produced subtitle data become easier. Moreover, since the number of subtitle data items is not limited, subtitles in many languages can be provided. In addition, since the subtitle data is formed of one style information item and a plurality of presentation information items, an output style applied to the entire subtitle data can be defined in advance and changed in various ways, and parts of the style can be changed through in-line style information or by the user. Furthermore, by using a number of neighboring presentation information items, continuous reproduction is guaranteed, and the fade-in/fade-out effect can be implemented on that basis.

The invention can also be embodied as computer-readable code on a computer-readable recording medium that can be read by a general-purpose computer.
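The operations of the reproducing method described above (read the DSU/DPU information, render the dialog text into a bitmap, and output it according to the composition information) can be sketched end to end. This is a schematic model, not conformant decoder code: rasterization is faked by recording what would be drawn, and the DSU/DPU structures are simplified dictionaries whose field names are invented for the sketch.

```python
def reproduce_subtitles(dsu, dpus, screen):
    """Render each dialog with the DSU's style info and compose it on
    the screen at the time and position given by the DPU."""
    for dpu in dpus:
        style = dsu["region_styles"][dpu["style_id"]]
        # Rendering step: convert the dialog text into a 'bitmap'.
        bitmap = {"text": dpu["text"], "font": style["font"]}
        # Composition step: place the bitmap by time and position.
        screen.append({
            "bitmap": bitmap,
            "start": dpu["start_time"],
            "end": dpu["end_time"],
            "pos": (style["x"], style["y"]),
        })
    return screen


dsu = {"region_styles": {0: {"font": "serif", "x": 100, "y": 400}}}
dpus = [{"text": "Hello", "style_id": 0, "start_time": 0, "end_time": 90000}]
plane = reproduce_subtitles(dsu, dpus, [])
```

Note how the style lives only in the DSU and is referenced from the DPU by `style_id`, mirroring the separation between the one style unit and the many presentation units that the text describes.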
Examples of the computer-readable recording medium include magnetic storage media (e.g., ROM, floppy disks, and hard disks), optical storage media (e.g., CD-ROMs and DVDs), and carrier waves (i.e., transmission over the Internet); the computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.

While the present invention has been shown and described with reference to preferred embodiments, it is not limited thereto, and those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention. For example, the text subtitle data and the AV data may be recorded separately on any computer-readable medium or data storage device, the data structures of FIGS. 3 and 4 may be configured differently, and the method illustrated in FIG. 24 may be performed by a general-purpose or special-purpose programmable computer. Accordingly, the scope of the invention is not limited to the disclosed embodiments but is defined by the appended claims and their equivalents.

[Brief Description of the Drawings]

FIG. 1 is a schematic diagram illustrating a structure of multimedia data recorded on a storage medium according to an embodiment of the present invention.

FIG. 2 is a schematic diagram illustrating an example data structure of the clip AV stream and the text subtitle stream of FIG. 1 according to an embodiment of the present invention.
FIG. 3 is a schematic diagram illustrating a data structure of a text subtitle stream according to an embodiment of the present invention.

FIG. 4 is a schematic diagram illustrating a text subtitle stream having the data structure of FIG. 3 according to an embodiment of the present invention.

FIG. 5 is a schematic diagram illustrating the dialog style unit of FIG. 3 according to an embodiment of the present invention.

FIG. 6 is a schematic diagram illustrating an example data structure of a dialog style unit according to an embodiment of the present invention.

FIG. 7 is a schematic diagram illustrating an example data structure of a dialog style unit according to another embodiment of the present invention.

FIG. 8 is a schematic diagram illustrating the example dialog style unit of FIG. 6 or FIG. 7 according to an embodiment of the present invention.

FIGS. 9A and 9B are schematic diagrams illustrating example clip information files including a number of font sets referenced by font information according to an embodiment of the present invention.

FIG. 10 is a schematic diagram showing the locations of a number of font files referenced by the font file information shown in FIGS. 9A and 9B.

FIG. 11 is a schematic diagram illustrating an example data structure of the dialog presentation unit of FIG. 3 according to another embodiment of the present invention.

FIGS.
12A and 12B are schematic diagrams illustrating an example data structure of the dialog presentation unit of FIG. 3 according to another embodiment of the present invention.

FIG. 13 is a schematic diagram illustrating the dialog presentation unit of FIGS. 11 to 12B according to an embodiment of the present invention.

FIG. 14 is a schematic diagram illustrating an example data structure of the dialog text information of FIG. 13.
FIG. 15 is a schematic diagram illustrating the dialog text information of FIG. 13 according to an embodiment of the present invention.

FIG. 16 is a schematic diagram illustrating a restriction on continuously reproducing consecutive dialog presentation units (DPUs).

FIG. 17 is a schematic diagram illustrating an example reproducing apparatus for a text subtitle stream according to an embodiment of the present invention.

FIG. 18 is a schematic diagram illustrating a preloading process of a text subtitle stream in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 19 is a schematic diagram illustrating a reproduction process of a dialog presentation unit (DPU) in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 20 is a schematic diagram illustrating a process of synchronizing a text subtitle stream with moving picture data and outputting them in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 21 is a schematic diagram illustrating a process of outputting a text subtitle stream to a screen in the example reproducing apparatus according to an embodiment of the present invention.

FIG. 22 is a schematic diagram illustrating a process of rendering a text subtitle stream in the example reproducing apparatus according to an embodiment of the present invention.

FIG.
23 is a schematic diagram illustrating example status registers configured in an example reproducing apparatus for reproducing a text subtitle stream according to an embodiment of the present invention.

FIG. 24 is a flowchart of a method of reproducing a text subtitle stream according to an embodiment of the present invention.
[Description of Reference Numerals]

100: multimedia data structure
110: clip
112: AV data stream
114: clip information
120: PlayList
122: PlayItem
130: movie object
140: index table
202: video stream
204: audio stream
206: presentation graphics stream
208: interactive graphics stream
210: AV data stream
220: text subtitle data
230: storage medium
310: dialog style unit (DSU)
320, 330, 340: dialog presentation units (DPUs)
350: PES packets
362: transport packets (TPs)
410: DSU
420: DPU
610: palette set
620: region style set
622: region information
624: text style information
626: user-changeable style set
710: region style
820: region style
830: region information
840: text style information
850: user-changeable style set
860: palette set
910, 940: clip information files
1110: time information
1120: palette reference information
1130: dialog region information
1132: style reference information
1134: dialog text information
1210: time information
1220: palette set
1230: dialog region information
1232: style reference information
1234: dialog text information
1250: time information
1260: color update flag
1270: color palette set
1280: dialog region information
1282: style reference information
1284: dialog text information
1410: in-line style information
1420: dialog text
1700: reproducing apparatus
1710: subtitle preloading buffer (SPB)
1712: font preloading buffer (FPB)
1730: text subtitle decoder
1732: text subtitle processor
1734: dialog composition buffer (DCB)
1736: dialog buffer (DB)
1738: text subtitle renderer
1740: dialog presentation controller
1742: bitmap object buffer (BOB)
1750: graphics plane (GP)
1760: color look-up table (CLUT)
Claims (1)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR20040013827 | 2004-02-28 | ||
| KR1020040032290A KR100727921B1 (en) | 2004-02-28 | 2004-05-07 | Storage medium, reproducing apparatus for recording text-based subtitle stream, and reproducing method thereof |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| TW200529202A true TW200529202A (en) | 2005-09-01 |
| TWI320925B TWI320925B (en) | 2010-02-21 |
Family
ID=36760967
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW094105743A TWI320925B (en) | 2004-02-28 | 2005-02-25 | Apparatus for reproducing data from a storge medium storing imige data and text-based subtitle data |
| TW098133833A TWI417873B (en) | 2004-02-28 | 2005-02-25 | Device for storing media and reproducing data from a storage medium storing image data and text subtitle data |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW098133833A TWI417873B (en) | 2004-02-28 | 2005-02-25 | Device for storing media and reproducing data from a storage medium storing image data and text subtitle data |
Country Status (9)
| Country | Link |
|---|---|
| JP (2) | JP4776614B2 (en) |
| KR (1) | KR100727921B1 (en) |
| CN (3) | CN101360251B (en) |
| AT (1) | ATE504919T1 (en) |
| DE (1) | DE602005027321D1 (en) |
| ES (1) | ES2364644T3 (en) |
| MY (1) | MY139164A (en) |
| RU (1) | RU2490730C2 (en) |
| TW (2) | TWI320925B (en) |
Families Citing this family (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2358333C2 (en) | 2003-04-09 | 2009-06-10 | Эл Джи Электроникс Инк. | Recording medium with data structure for controlling playing back data of text subtitles and methods and devices for recording and playback |
| KR20050078907A (en) | 2004-02-03 | 2005-08-08 | 엘지전자 주식회사 | Method for managing and reproducing a subtitle of high density optical disc |
| WO2005091728A2 (en) | 2004-03-26 | 2005-10-06 | Lg Electronics Inc. | Recording medium, method, and apparatus for reproducing text subtitle streams |
| BRPI0509163A (en) | 2004-03-26 | 2007-09-11 | Lg Electronics Inc | recording medium, method and apparatus for reproducing and recording text subtitle streams |
| ES2336223T3 (en) | 2004-03-26 | 2010-04-09 | Lg Electronics, Inc. | RECORDING MEDIA OR RECORDING AND METHOD AND APPLIANCE TO PLAY A FLOW OR CURRENT OF TEXT SUBTITLES RECORDED IN THE RECORDING MEDIA. |
| KR100818926B1 (en) * | 2006-10-31 | 2008-04-04 | 삼성전자주식회사 | Apparatus and method for processing presentation graphics on optical discs |
| US20080159713A1 (en) * | 2006-12-28 | 2008-07-03 | Mediatek Inc. | Digital Video Recorder, Multimedia Storage Apparatus, And Method Thereof |
| CN101183524B (en) * | 2007-11-08 | 2012-10-10 | 腾讯科技(深圳)有限公司 | Lyric characters display process and system |
| US8428437B2 (en) | 2008-02-14 | 2013-04-23 | Panasonic Corporation | Reproduction device, integrated circuit, reproduction method, program, and computer-readable recording medium |
| CN110364189B (en) * | 2014-09-10 | 2021-03-23 | 松下电器(美国)知识产权公司 | Reproduction device and reproduction method |
Family Cites Families (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5294982A (en) * | 1991-12-24 | 1994-03-15 | National Captioning Institute, Inc. | Method and apparatus for providing dual language captioning of a television program |
| DE69324607T2 (en) * | 1993-08-20 | 1999-08-26 | Thomson Consumer Electronics | TELEVISION SIGNATURE SYSTEM FOR APPLICATION WITH COMPRESSED NUMERIC TELEVISION TRANSMISSION |
| AU701684B2 (en) * | 1994-12-14 | 1999-02-04 | Koninklijke Philips Electronics N.V. | Subtitling transmission system |
| US5721720A (en) * | 1994-12-28 | 1998-02-24 | Kabushiki Kaisha Toshiba | Optical recording medium recording pixel data as a compressed unit data block |
| JPH08241068A (en) * | 1995-03-03 | 1996-09-17 | Matsushita Electric Ind Co Ltd | Information recording medium, bitmap data decoding device, and bitmap data decoding method |
| JPH08275205A (en) * | 1995-04-03 | 1996-10-18 | Sony Corp | DATA ENCODING / DECODING METHOD AND DEVICE, AND ENCODED DATA RECORDING MEDIUM |
| US5848352A (en) * | 1995-04-26 | 1998-12-08 | Wink Communications, Inc. | Compact graphical interactive information system |
| JP3484838B2 (en) * | 1995-09-22 | 2004-01-06 | ソニー株式会社 | Recording method and playback device |
| CN1104725C (en) * | 1995-11-24 | 2003-04-02 | 株式会社东芝 | Multi-language recording medium and reproducing device for same |
| JPH10210504A (en) * | 1997-01-17 | 1998-08-07 | Toshiba Corp | Sub-picture color palette setting system |
| JPH10271439A (en) * | 1997-03-25 | 1998-10-09 | Toshiba Corp | Moving image display system and moving image data recording method |
| US6288990B1 (en) * | 1997-10-21 | 2001-09-11 | Sony Corporation | Reproducing apparatus, recording apparatus, and recording medium |
| JPH11196386A (en) * | 1997-10-30 | 1999-07-21 | Toshiba Corp | Computer system and closed caption display method |
| JP3377176B2 (en) * | 1997-11-28 | 2003-02-17 | 日本ビクター株式会社 | Audio disc and decoding device |
| KR100327211B1 (en) * | 1998-05-29 | 2002-05-09 | 윤종용 | Sub-picture encoding method and apparatus |
| JP2000023082A (en) * | 1998-06-29 | 2000-01-21 | Toshiba Corp | Information recording / reproducing device for multiplex television broadcasting |
| JP2002056650A (en) * | 2000-08-15 | 2002-02-22 | Pioneer Electronic Corp | Information recording apparatus, information recording method, and information recording medium on which recording control program is recorded |
| JP4467737B2 (en) * | 2000-08-16 | 2010-05-26 | パイオニア株式会社 | Information recording apparatus, information recording method, and information recording medium on which recording control program is recorded |
| JP4021264B2 (en) * | 2002-07-11 | 2007-12-12 | 株式会社ケンウッド | Playback device |
| KR100939711B1 (en) * | 2002-12-12 | 2010-02-01 | 엘지전자 주식회사 | Text-based subtitle playback device and method |
| KR100930349B1 (en) * | 2003-01-20 | 2009-12-08 | 엘지전자 주식회사 | Subtitle data management method of high density optical disc |
| RU2358333C2 (en) * | 2003-04-09 | 2009-06-10 | Эл Джи Электроникс Инк. | Recording medium with data structure for controlling playing back data of text subtitles and methods and devices for recording and playback |
| KR20050078907A (en) * | 2004-02-03 | 2005-08-08 | 엘지전자 주식회사 | Method for managing and reproducing a subtitle of high density optical disc |
| WO2005074400A2 (en) * | 2004-02-10 | 2005-08-18 | Lg Electronics Inc. | Recording medium and method and apparatus for decoding text subtitle streams |
| CN100473133C (en) * | 2004-02-10 | 2009-03-25 | Lg电子株式会社 | Method for reproducing text subtitle and text subtitle decoding system |
| KR100739680B1 (en) * | 2004-02-21 | 2007-07-13 | 삼성전자주식회사 | A storage medium, a reproducing apparatus, and a reproducing method, recording a text-based subtitle including style information |
| ES2336223T3 (en) * | 2004-03-26 | 2010-04-09 | Lg Electronics, Inc. | RECORDING MEDIA OR RECORDING AND METHOD AND APPLIANCE TO PLAY A FLOW OR CURRENT OF TEXT SUBTITLES RECORDED IN THE RECORDING MEDIA. |
-
2004
- 2004-05-07 KR KR1020040032290A patent/KR100727921B1/en not_active Expired - Lifetime
-
2005
- 2005-02-25 TW TW094105743A patent/TWI320925B/en not_active IP Right Cessation
- 2005-02-25 TW TW098133833A patent/TWI417873B/en not_active IP Right Cessation
- 2005-02-28 ES ES05726932T patent/ES2364644T3/en not_active Expired - Lifetime
- 2005-02-28 RU RU2007146766/28A patent/RU2490730C2/en active
- 2005-02-28 CN CN200810135887XA patent/CN101360251B/en not_active Expired - Lifetime
- 2005-02-28 CN CN2007101089243A patent/CN101059984B/en not_active Expired - Lifetime
- 2005-02-28 DE DE602005027321T patent/DE602005027321D1/en not_active Expired - Lifetime
- 2005-02-28 JP JP2007500690A patent/JP4776614B2/en not_active Expired - Lifetime
- 2005-02-28 MY MYPI20050802A patent/MY139164A/en unknown
- 2005-02-28 AT AT05726932T patent/ATE504919T1/en not_active IP Right Cessation
- 2005-02-28 CN CNB2005800003070A patent/CN100479047C/en not_active Expired - Lifetime
-
2010
- 2010-09-22 JP JP2010211755A patent/JP5307099B2/en not_active Expired - Fee Related
Also Published As
| Publication number | Publication date |
|---|---|
| DE602005027321D1 (en) | 2011-05-19 |
| JP2011035922A (en) | 2011-02-17 |
| RU2007146766A (en) | 2009-06-20 |
| TW201009820A (en) | 2010-03-01 |
| MY139164A (en) | 2009-08-28 |
| KR20050088035A (en) | 2005-09-01 |
| CN101360251B (en) | 2011-02-16 |
| JP5307099B2 (en) | 2013-10-02 |
| JP4776614B2 (en) | 2011-09-21 |
| TWI417873B (en) | 2013-12-01 |
| JP2007525904A (en) | 2007-09-06 |
| CN101059984B (en) | 2010-08-18 |
| TWI320925B (en) | 2010-02-21 |
| ATE504919T1 (en) | 2011-04-15 |
| RU2490730C2 (en) | 2013-08-20 |
| CN1774759A (en) | 2006-05-17 |
| HK1126605A1 (en) | 2009-09-04 |
| KR100727921B1 (en) | 2007-06-13 |
| HK1116588A1 (en) | 2008-12-24 |
| ES2364644T3 (en) | 2011-09-08 |
| CN100479047C (en) | 2009-04-15 |
| CN101059984A (en) | 2007-10-24 |
| HK1088434A1 (en) | 2006-11-03 |
| CN101360251A (en) | 2009-02-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US8437612B2 (en) | Storage medium recording text-based subtitle stream, reproducing apparatus and reproducing method for reproducing text-based subtitle stream recorded on the storage medium | |
| CA2541790C (en) | Storage medium storing text-based subtitle data including style information, and apparatus and method of playing back the storage medium | |
| US8195036B2 (en) | Storage medium for storing text-based subtitle data including style information, and reproducing apparatus and method for reproducing text-based subtitle data including style information | |
| US8275814B2 (en) | Method and apparatus for encoding/decoding signal | |
| US20070172199A1 (en) | Reproduction device, reproduction method, program storage medium, and program | |
| JP5307099B2 (en) | Recording medium and device for reproducing data from recording medium | |
| RU2375767C2 (en) | Data carrier with data structure for managing display of text subtitles and method and device for recording and displaying | |
| US7965924B2 (en) | Storage medium for recording subtitle information based on text corresponding to AV data having multiple playback routes, reproducing apparatus and method therefor | |
| TWI269271B (en) | DVD playback system capable of displaying multiple sentences and its subtitle generation method | |
| WO2005076605A1 (en) | Method for reproducing text subtitle and text subtitle decoding system | |
| Kennedy | Is Kawaii metal? Exploring aidoru/metal fusion through the lyrics of Babymetal | |
| Soliño | Netflix'Spain: Critical Perspectives ed. by Jorge González del Pozo and Xosé Pereira Boán | |
| HK1088434B (en) | Storage medium recording text-based subtitle stream, apparatus and method reproducing thereof | |
| HK1126605B (en) | Storage medium recording text-based subtitle stream, apparatus and method reproducing thereof | |
| HK1116588B (en) | Method for reproducing storage medium recording text-based subtitle stream | |
| TW200532658A (en) | Method for reproducing text subtitle and text subtitle decoding system | |
| Weitz | Videorecordings Cataloging Workshop | |
| KR20130094940A (en) | Foreign language learning method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| MK4A | Expiration of patent term of an invention patent |