WO2022253157A1 - Audio sharing method, apparatus, device, and medium
- Publication number
- WO2022253157A1 (PCT/CN2022/095845)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- audio
- video
- sharing
- interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47205—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—Two-dimensional [2D] image generation
- G06T11/60—Creating or editing images; Combining images with text
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/435—Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/439—Processing of audio elementary streams
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/27—Server based end-user applications
- H04N21/274—Storing end-user multimedia data in response to end-user request, e.g. network recorder
- H04N21/2743—Video hosting of uploaded data from client
Definitions
- the present disclosure relates to the technical field of audio processing, and in particular to an audio sharing method, apparatus, device, and medium.
- the present disclosure provides an audio sharing method, apparatus, device, and medium.
- an audio sharing method including:
- a target object is displayed in a target interaction interface, where the target object includes an original video that uses the target audio to be shared as its background music and/or a sharing control of the target audio, and the target audio is published audio;
- when a first trigger operation on the target object is detected, a preset playback interface is displayed, and a target video that uses the target audio as background music is displayed in the preset playback interface.
- the target video is used to share the target audio, and the target video includes visual material generated according to the target audio.
- an audio sharing device including:
- the first display unit is configured to display a target object in a target interaction interface, where the target object includes an original video that uses the target audio to be shared as its background music and/or a sharing control of the target audio, and the target audio is published audio;
- the second display unit is configured to display a preset playback interface when a first trigger operation on the target object is detected, where a target video that uses the target audio as background music is displayed in the preset playback interface, the target video is used to share the target audio, and the target video includes visual material generated according to the target audio.
- an electronic device including:
- a memory and a processor, where the processor is configured to read executable instructions from the memory and execute the executable instructions to implement the audio sharing method described in the first aspect.
- the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the audio sharing method described in the first aspect.
- the present disclosure provides a computer program, the computer program includes instructions, and when the instructions are executed by a processor, the audio sharing method described in the first aspect is implemented.
- the present disclosure provides a computer program product, the computer program product includes a computer program or an instruction, and when the computer program or instruction is executed by a processor, the audio sharing method described in the first aspect is implemented.
- with the audio sharing method, apparatus, device, and medium of the embodiments of the present disclosure, when the user triggers sharing of the target audio, the preset playback interface can be displayed directly, and a target video that is automatically generated according to the target audio and uses the target audio as background music can be displayed in the preset playback interface. Sharing the target audio through the target video not only enriches the shared content and meets users' individual needs, but also lowers the threshold of video production, so that audio sharing with video content can be easily realized without the user shooting or uploading a video.
- FIG. 1 is a schematic flowchart of an audio sharing method provided by an embodiment of the present disclosure
- FIG. 2 is an interactive schematic diagram of triggering audio sharing provided by an embodiment of the present disclosure
- FIG. 3 is another schematic diagram of an interaction triggering audio sharing provided by an embodiment of the present disclosure
- FIG. 4 is a schematic interface diagram of a preset playback interface provided by an embodiment of the present disclosure.
- FIG. 5 is an interface schematic diagram of another preset playback interface provided by an embodiment of the present disclosure.
- FIG. 6 is an interface schematic diagram of another preset playback interface provided by an embodiment of the present disclosure.
- FIG. 7 is an interactive schematic diagram of a video capture provided by an embodiment of the present disclosure.
- FIG. 8 is an interactive schematic diagram of material modification provided by an embodiment of the present disclosure.
- FIG. 9 is an interactive schematic diagram of another material modification provided by an embodiment of the present disclosure.
- FIG. 10 is an interactive schematic diagram of another material modification provided by an embodiment of the present disclosure.
- FIG. 11 is a schematic flowchart of another audio sharing method provided by an embodiment of the present disclosure.
- FIG. 12 is a schematic diagram of a playback interface of a target video provided by an embodiment of the present disclosure.
- FIG. 13 is a schematic structural diagram of an audio sharing device provided by an embodiment of the present disclosure.
- FIG. 14 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
- the term “based on” is “based at least in part on”.
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
- the embodiments of the present disclosure provide an audio sharing method, device, device and medium capable of intelligently generating videos to share music.
- the audio sharing method provided by the embodiment of the present disclosure will first be described below with reference to FIGS. 1-12 .
- the audio sharing method may be executed by an electronic device.
- the electronic equipment may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), vehicle-mounted terminals (such as vehicle navigation terminals), and wearable devices, and fixed terminals such as digital TVs, desktop computers, and smart home devices.
- Fig. 1 shows a schematic flowchart of an audio sharing method provided by an embodiment of the present disclosure.
- the audio sharing method may include the following steps.
- the target interaction interface can be an interface for information exchange between the user and the electronic device.
- the target interaction interface can provide information to the user, such as displaying the target object, and can also receive information or operations input by the user, such as receiving the user's operation on the target object.
- the target audio may be recorded audio recorded by the user who posted the original video, and the target audio may also be song audio, which is not limited here.
- the target object may include an original video with the target audio to be shared as background music.
- the original video may be a public video that the user has distributed to other users through the server or a private video that the user has published and saved in the server.
- the original video can also be a public video watched by the user and distributed to it by other users through the server.
- the target object may also include a share control of the target audio.
- the sharing control of the target audio may be a control for triggering the sharing of the target audio.
- the control may be an object that can be triggered by a user, such as a button or an icon.
- when the first trigger operation on the target object is detected, a preset playback interface is displayed, and the target video that uses the target audio as background music is displayed in the preset playback interface.
- the target video is used to share the target audio, and the target video includes the visualization material generated according to the target audio.
- a first trigger operation on the target object may be input to the electronic device.
- the first trigger operation may be an operation for triggering the sharing of the target audio involved in the target object.
- after the electronic device detects the first trigger operation on the target object, it can display a preset playback interface including the target video that uses the target audio as background music. Since the background music of the target video is the target audio, the target video is used to share the target audio; that is, the user can share the target audio by sharing the target video.
- the target video can be automatically generated according to the target audio.
- the target video may include visual material automatically generated according to the target audio.
- the visual material is a video element that can be watched in the target video.
- the visual material may include an image and/or text generated according to the associated information of the target audio, which will be described in detail later.
- when the user triggers sharing of the target audio, the preset playback interface can be displayed directly, and the target video that is automatically generated according to the target audio and uses the target audio as background music can be displayed in the preset playback interface. Sharing the target audio through the target video not only makes the shared content rich and colorful and meets users' individual needs, but also lowers the threshold of video production, so that audio sharing with video content can be conveniently realized without shooting or uploading a video.
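The flow described in the method claims — detect the first trigger operation, auto-generate a target video from the target audio, and display it in the preset playback interface — can be sketched as follows. This is a minimal illustrative sketch; all function and parameter names (`on_first_trigger`, `generate_visual_material`, `show_interface`) are hypothetical stand-ins, not part of the disclosure.

```python
def on_first_trigger(target_audio, generate_visual_material, show_interface):
    """When the first trigger operation on the target object is detected:
    auto-generate visual material from the target audio, assemble a target
    video that uses the target audio as background music, and display it
    in the preset playback interface."""
    material = generate_visual_material(target_audio)           # e.g. cover-derived images
    target_video = {"background_music": target_audio, "visuals": material}
    show_interface("preset_playback", target_video)             # no shooting or uploading needed
    return target_video
```

Because the visual material is derived from the audio itself, the user never supplies video footage, which is the lowered production threshold the disclosure emphasizes.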
- multiple trigger modes for sharing the target audio may be provided for the user.
- the target interaction interface may include an audio display interface
- the target object may include a share control of the target audio
- the audio display interface may be an interface for displaying related information of the target audio, for example, a music details page interface of the target audio.
- the associated information of the target audio may include at least one of the audio cover of the target audio, the avatar of the publisher of the target audio, the audio style information of the target audio, the lyrics of the target audio, the name of the performer of the target audio, the audio name of the target audio, and the video covers of published public videos that use the target audio as background music.
- a target object may be displayed in the audio display interface of the target audio, and at this time, the target object may be a sharing control of the target audio.
- the sharing control of the target audio may be a share button for the target audio or a share icon for the target audio.
- the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the sharing control of the target audio.
- Fig. 2 shows a schematic diagram of an interaction for triggering audio sharing provided by an embodiment of the present disclosure.
- the electronic device 201 displays a music details page interface 202 of the song "Hula XXX" sung by Little A. The music details page interface 202 displays the audio cover 203, the audio name "Hula XXX", the performer's name "Little A", and the video cover 204 of a published public video that uses "Hula XXX" as its background music.
- a "Share" button 205, which is the share button of "Hula XXX", is also displayed in the music details page interface 202. The user can click the "Share" button 205 in the music details page interface 202 to share the song "Hula XXX".
- the electronic device 201 can automatically generate a video with "Hula XXX" as the background music.
- the audio sharing method may further include:
- S110 may specifically include:
- when the second trigger operation on the audio display control is detected, the audio display interface is displayed, and the sharing control of the target audio is displayed in the audio display interface.
- the electronic device can play the original video in the video playback interface and display the audio display control of the target audio, which serves as the background music of the original video, in the video playback interface. When the user is interested in the target audio, the user can input the second trigger operation on the audio display control of the target audio.
- the second triggering operation may be an operation for triggering entry into the audio display interface of the target audio.
- when the electronic device detects the second trigger operation on the audio display control, it can display the audio display interface including the sharing control of the target audio, so that the user can input the first trigger operation on the sharing control in the audio display interface to trigger sharing of the target audio.
- the audio display control can be used to display at least one of the lyrics of the target audio, the performer name of the target audio and the audio name of the target audio.
- the target interaction interface may include a video playback interface
- the target object may include an original video
- the target object may include an original video with target audio as background music
- the video playing interface may be an interface for playing the original video.
- the target object may be displayed in the video playback interface of the original video, and at this time, the target object may include the original video.
- the first trigger operation may include a trigger operation on the original video.
- in some embodiments, the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the original video that triggers display of a function pop-up window, followed by a gesture control operation, voice control operation, or expression control operation on the share button in the function pop-up window that triggers sharing of the target audio. That is, the user first triggers display of the function pop-up window in the video playback interface, and then triggers sharing of the target audio through the share button in the function pop-up window.
- in other embodiments, the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the original video that directly triggers sharing of the target audio. That is, the user can directly trigger sharing of the target audio in the video playback interface.
- the target interaction interface may include a video playback interface
- the target object may include an original video
- the original video may include an audio control of the target audio, such as an audio sticker.
- the video playing interface may be an interface for playing the original video.
- the target object can be displayed in the video playback interface of the original video; at this time, the target object can include the original video, and an audio sticker of the target audio can be displayed on the video frame of the original video played in the video playback interface.
- the audio sticker can be used to display at least one of the lyrics of the target audio, the performer name of the target audio, and the audio name of the target audio.
- the first triggering operation may include a triggering operation on an audio sticker.
- in some embodiments, the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the audio sticker that triggers display of a function pop-up window, followed by a gesture control operation, voice control operation, or expression control operation on the share button in the function pop-up window that triggers sharing of the target audio. That is, the user first triggers display of the function pop-up window in the video playback interface, and then triggers sharing of the target audio through the share button in the function pop-up window.
- Fig. 3 shows another schematic diagram of interaction for triggering audio sharing provided by an embodiment of the present disclosure.
- the electronic device 301 displays a video playback interface 302 of a video released by Little B, and the background music of the video is the song "Hula XXX".
- the electronic device 301 may display a function pop-up window 304, in which a "Share" button, the share button of "Hula XXX", may be displayed. The user can click the "Share" button to share the song "Hula XXX".
- the electronic device 301 can automatically generate a video with "Hula XXX" as the background music.
- in other embodiments, the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the audio sticker that directly triggers sharing of the target audio. That is, the user can directly trigger sharing of the target audio in the video playback interface.
- the target interaction interface may include a video playback interface
- the target object may include the original video and the sharing control of the target audio.
- the video playing interface may be an interface for playing the original video.
- the original video and the target object can be displayed in the video playback interface of the original video, at this time, the target object can include the sharing control of the target audio, and the sharing control of the target audio can be displayed at any position in the video playback interface.
- the first trigger operation may include a trigger operation on a sharing control.
- the first trigger operation may include a gesture control operation (such as a click, long press, or double-click), a voice control operation, or an expression control operation on the sharing control of the target audio, which triggers sharing of the target audio. That is, the user can directly trigger sharing of the target audio in the video playback interface.
- the visualized material may include an image and/or text generated according to the associated information of the target audio.
- the associated information of the target audio may include at least one of the audio cover of the target audio, the publisher avatar of the target audio, the audio style information of the target audio, the audio content information of the target audio, the lyrics of the target audio, the name of the performer of the target audio, the audio name of the target audio, and the musical properties of the target audio.
- the visual material may include a first visual material.
- the first visual material may include an image generated according to the associated information, where the associated information may include an associated image of the target audio, and the associated image may include at least one of the audio cover and the publisher avatar.
- the associated image may include an audio cover.
- the associated image may include the avatar of the publisher.
- the first visual material may include at least one of the following:
- the background material includes an image generated according to the image features of the associated image.
- the background material is a dynamic or static background image having image characteristics of an associated image.
- the image features may include at least one of color features, brightness features and saturation features.
- when generating the background material, the electronic device may first extract, from the associated image of the target audio, the color with the most pixels or the colors whose number of pixels is greater than a preset quantity threshold, then select, among the extracted colors, those that fall within a preset color gamut, generate a solid-color or gradient background image corresponding to the background material according to the selected colors, and then generate the target video based on the background material.
- the preset quantity threshold and the preset color gamut can be set according to user needs, and are not limited here.
- in some embodiments, after the electronic device detects the first trigger operation on the target object, if saturation detection on the associated image of the target audio determines that the image contains only the Morandi color system, that is, the associated image consists of gray, low-saturation colors, the electronic device can generate a solid-color or gradient background image corresponding to the background material based on the Morandi-system colors, and then generate the target video based on the background material.
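The color-selection steps above can be sketched as follows, assuming 8-bit RGB pixel tuples. The saturation-based gamut test, the threshold values, and all function names are hypothetical choices for illustration; the disclosure leaves the preset quantity threshold and the preset color gamut to user needs.

```python
from collections import Counter

def in_preset_gamut(rgb, min_saturation=0.15):
    # Hypothetical gamut test: keep colors whose (HSV-style) saturation
    # exceeds a threshold; low-saturation "Morandi" grays fail this test.
    r, g, b = (c / 255.0 for c in rgb)
    mx, mn = max(r, g, b), min(r, g, b)
    return mx > 0 and (mx - mn) / mx >= min_saturation

def pick_background_color(pixels, count_threshold=4):
    # Extract the most frequent color that clears the preset quantity
    # threshold and falls within the preset gamut; otherwise fall back to
    # the most frequent color overall (the Morandi-system case).
    counts = Counter(pixels)
    for color, n in counts.most_common():
        if n >= count_threshold and in_preset_gamut(color):
            return color
    return counts.most_common(1)[0][0]

def solid_background(color, width, height):
    # Generate a solid-color background frame as rows of RGB tuples.
    return [[color] * width for _ in range(height)]
```

A gradient variant would interpolate between two selected colors instead of filling with one; the fallback branch mirrors the Morandi-system embodiment, where only low-saturation colors are available.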
- the background material is displayed in the first frame area of the target video.
- the first frame area of the target video may be at least part of the frame area of the target video, which is not limited here.
- the background material may be a background image covering the entire frame area of the target video.
- the background material may be a background image covering part of the picture area of the target video.
- the foreground material includes an associated image or an image generated according to image features of an associated image.
- the foreground material is a dynamic or static foreground image located on the background material.
- the foreground material may be an associated image of the target audio.
- the electronic device may directly use the associated image of the target audio as the foreground image corresponding to the foreground material, and then generate the target video based on the foreground material.
- the foreground material may be a foreground image generated according to image features of an associated image of the target audio.
- the image features may include at least one of color features, brightness features and saturation features.
- when generating the foreground material, the electronic device may first extract, from the associated image of the target audio, the color with the most pixels or the colors whose number of pixels is greater than a preset quantity threshold, then select, among the extracted colors, those that fall within a preset color gamut, generate a gradient foreground image corresponding to the foreground material according to the selected colors, and then generate the target video based on the foreground material.
- the preset quantity threshold and the preset color gamut can be set according to user needs, and are not limited here.
- the foreground material is displayed in the second picture area of the target video.
- the second picture area of the target video may be at least part of the picture area of the target video, which is not limited here.
- the foreground material may be a foreground image covering the entire picture area of the target video.
- the foreground material may be a foreground image covering part of the picture area of the target video.
- the electronic device may generate the target video solely according to one of the foreground material or the background material, or may generate the target video according to the foreground material and the background material.
- the first picture area can be the background display area of the target video, and at least part of the second picture area can be included in the first picture area, so that at least part of the foreground material covers the background material, and the target video contains canvas content composed of the foreground image corresponding to the foreground material and the background image corresponding to the background material.
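The canvas composition described above — the foreground image covering at least part of the background display area — can be illustrated with a pixel-level sketch. Frames are modeled as nested lists of RGB tuples; the function name and (top, left) coordinate convention are assumptions for illustration only.

```python
def composite_frame(background, foreground, top, left):
    # Overlay the foreground image onto the background image at (top, left),
    # producing the canvas content of one frame of the target video.
    frame = [row[:] for row in background]   # copy so the background stays untouched
    for i, fg_row in enumerate(foreground):
        for j, pixel in enumerate(fg_row):
            if 0 <= top + i < len(frame) and 0 <= left + j < len(frame[0]):
                frame[top + i][left + j] = pixel
    return frame
```

When the second picture area lies entirely inside the first, every foreground pixel lands on the background, matching the case where the foreground (e.g. the audio cover) sits centered over the generated background.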
- Fig. 4 shows a schematic diagram of a preset playback interface provided by an embodiment of the present disclosure.
- the electronic device 401 displays a preset playing interface, in which a target video automatically generated based on the song "Hula XXX” may be displayed.
- the target video shows a foreground picture and a background picture
- the foreground picture is the song cover 402 of the song "Hula XXX”
- the background picture is the solid color background image 403 generated according to the color feature of the song cover 402.
- the visualized material may include a second visualized material, and the second visualized material may include a background image selected from the material library that matches the associated information.
- the associated information may include an audio tag of the target audio.
- the second visual material may be an image matching the audio tag of the target audio.
- the second visual material may include a static or dynamic background image selected in the material library that matches the audio tag of the target audio.
- the audio tag of the target audio can be obtained; the audio tag can be a tag used to characterize the audio style, audio type, cover type, and the like. Then, among the images pre-stored in the material library, an image carrying the audio tag is searched for, at least one of the found images is randomly selected as the background image corresponding to the second visualization material, and a target video is generated based on the second visualization material.
- in this way, the background image corresponding to the second visualization material matches the atmosphere, type, emotion, and other stylistic aspects of the target audio and of the cover image of the target audio.
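The tag-matched selection from the material library described above might look like the following sketch; the library layout, tag names, and filenames are all illustrative assumptions:

```python
import random

# Hypothetical material library: each entry is an image with a set of tags.
MATERIAL_LIBRARY = [
    {"image": "neon_city.png", "tags": {"electronic", "energetic"}},
    {"image": "beach_sunset.png", "tags": {"tropical", "relaxed"}},
    {"image": "palm_loop.gif", "tags": {"tropical", "dance"}},
]

def pick_background(audio_tags, library=MATERIAL_LIBRARY, rng=random):
    """Return one image whose tags overlap the audio's tags, chosen at random,
    or None when no image in the library matches."""
    matches = [m["image"] for m in library if m["tags"] & set(audio_tags)]
    return rng.choice(matches) if matches else None
```
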
- the audio tags of the target audio may include pre-marked tags, and the audio tags of the target audio may also include tags obtained by detecting the audio content of the target audio and/or the cover image of the target audio.
- the label detected according to the audio content is used to represent the audio type or audio style
- the label detected according to the cover image of the target audio is used to represent the cover style.
- the second visualization material may be displayed in the first frame area of the target video, and the first frame area has been described above, and details are not repeated here.
- Fig. 5 shows a schematic interface diagram of another preset playback interface provided by an embodiment of the present disclosure.
- the electronic device 501 displays a preset playing interface, and a target video automatically generated based on the song "Hula XXX" may be displayed in the preset playing interface.
- a background image 502 matching the song style of the song "Hula Dance XXX" is displayed in the target video.
- the visualized material may include a third visualized material, and the third visualized material may have an animation effect generated according to the associated information.
- the associated information may include the musical characteristics of the target audio.
- the musical characteristics may include characteristics related to music theory, such as the rhythm, drumbeat, tone, and timbre characteristics of the target audio.
- the animation effect may include a dynamic element shape, a dynamic element color, a dynamic element transformation mode, a background color, etc. that conform to the musical characteristics of the target audio.
- the electronic device may first detect the musical characteristics of the target audio, and then input the detected musical characteristics into the pre-trained animation effect generation model to obtain a background image corresponding to the third visualization material; the background image corresponding to the third visualization material may include an animation effect generated according to the musical characteristics.
- the third visualized material may be displayed in the first frame area of the target video, and the first frame area has been described above, so details are not repeated here.
- FIG. 6 shows a schematic interface diagram of another preset playback interface provided by an embodiment of the present disclosure.
- the electronic device 601 displays a preset playing interface, in which a target video automatically generated based on the song "Hula XXX" may be displayed.
- the target video displays a background dynamic special effect 602 that matches the musical characteristics of the song "Hula XXX".
- the electronic device may first perform timbre detection and rhythm detection on the target audio to obtain the timbre characteristics and rhythm characteristics of the target audio, and determine the musical instrument corresponding to the timbre characteristics; it may then input the rhythm characteristics into the pre-trained animation effect generation model corresponding to that musical instrument to obtain the background image corresponding to the third visualization material.
- the background image corresponding to the third visualization material may include the animation effect of the musical instrument playing based on the rhythm feature.
- the animation effect generated according to the musical characteristics can also be a piano being played, with the rhythm of the piano keys consistent with the rhythm of the target audio.
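One way to keep such a key-press animation in step with the audio's rhythm, assuming beat timestamps are already available from upstream rhythm detection, is sketched below; the key names and timestamps are illustrative:

```python
# Hypothetical sketch: map detected beat timestamps (in seconds) to key-press
# animation events, cycling through a fixed set of piano keys so the keys move
# in time with the target audio.

def keypress_events(beat_times, keys=("C", "E", "G")):
    """Return one press event per beat, cycling through `keys` in order."""
    return [
        {"time": t, "key": keys[i % len(keys)], "action": "press"}
        for i, t in enumerate(beat_times)
    ]

# Illustrative beat grid at 120 BPM (one beat every 0.5 s).
events = keypress_events([0.0, 0.5, 1.0, 1.5])
```
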
- the visualized material may include a fourth visualized material, and the fourth visualized material may include associated information.
- the associated information may include audio text of the target audio
- the audio text may include at least one of first text information associated with the target audio and second text information obtained by performing speech recognition on the target audio.
- the fourth visual material may include audio text displayed in a sticker form.
- the first text information may include at least one of a performer name and an audio name.
- after the electronic device detects the first trigger operation on the target object, it can obtain the performer name and the audio name, generate a lyric sticker corresponding to the fourth visual material according to the performer name and the audio name, and then generate a target video based on the fourth visual material.
- the first text information may further include lyrics.
- after the electronic device detects the first trigger operation on the target object, it can directly obtain the lyrics, the performer name, and the audio name, generate a lyric sticker corresponding to the fourth visual material according to them, and then generate a target video based on the fourth visual material.
- the second text information may include the lyrics obtained by performing speech recognition on the audio content of the target audio
- if the target audio does not carry lyrics, the performer name and the audio name can be obtained, the lyrics of the target audio can be automatically identified using audio-to-text conversion technology, a lyric sticker corresponding to the fourth visual material can be generated according to the lyrics, performer name, and audio name, and a target video can then be generated based on the fourth visual material.
- an audio sticker 404 may also be displayed in the target video.
- an audio sticker 503 may also be displayed in the target video.
- the visualized material may also include a plurality of sequence images sequentially displayed at preset time intervals;
- the preset time interval may be determined according to the audio duration of the target audio and the number of sequence images.
- after the electronic device selects multiple sequence images, it can divide the audio duration of the target audio by the number of sequence images to obtain the preset time interval between every two sequence images, and then generate the target video based on the multiple sequence images; in this way, the sequence images are displayed sequentially in the target video, with the preset time interval between every two sequence images.
- the sequence image may be a plurality of background materials, a plurality of foreground materials, a plurality of second visualization materials or a plurality of third visualization materials, which is not limited here.
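The duration-based interval computation above reduces to a simple division; a minimal sketch, with hypothetical function names:

```python
# Sketch: the preset time interval is the audio duration divided by the number
# of sequence images; the schedule lists when each image starts displaying.

def display_schedule(audio_duration, image_count):
    """Return (interval, start_times) for displaying the sequence images."""
    interval = audio_duration / image_count
    return interval, [i * interval for i in range(image_count)]

# Illustrative values: a 12-second audio clip and 4 sequence images.
interval, starts = display_schedule(audio_duration=12.0, image_count=4)
```
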
- the preset time interval may be determined according to the audio rhythm of the target audio.
- the electronic device can determine the number of beat points according to the number of sequence images, select that number of downbeats according to the audio rhythm of the target audio, and use the time interval between every two downbeats as a preset time interval; the target video is then generated based on the multiple sequence images, so that the sequence images are displayed sequentially, with the corresponding preset time interval between every two sequence images.
- the sequence image may be a plurality of background materials, a plurality of foreground materials, a plurality of second visualization materials or a plurality of third visualization materials, which is not limited here.
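The rhythm-based variant can likewise be sketched, assuming downbeat timestamps have already been detected upstream; the timestamps below are illustrative:

```python
# Hypothetical sketch: use downbeat timestamps as the switch points between
# sequence images, so each interval follows the audio's rhythm rather than
# being uniform.

def beat_intervals(downbeats, image_count):
    """Pick the first `image_count` downbeats as switch points and return
    the interval between each consecutive pair."""
    points = downbeats[:image_count]
    return [b - a for a, b in zip(points, points[1:])]

# Illustrative downbeat times (seconds) for 4 sequence images.
intervals = beat_intervals([0.0, 0.8, 1.7, 2.4, 3.6], image_count=4)
```
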
- the preset playback interface can also be used to edit the target video.
- the preset playback interface may be a video editing interface.
- the audio sharing method may further include:
- the video segment selected by the video interception operation in the target video is used as the intercepted target video.
- the video capture operation may include a trigger operation of a video capture mode, a selection operation of a video clip, and a confirmation operation of a selection result.
- the trigger operation for the video capture mode may include a gesture control operation (such as click, long press, double-click, etc.), a voice control operation, or an expression control operation on the video capture control for triggering entry into the video capture mode.
- the selection operation on the video clip may include a gesture control operation (such as dragging, clicking, etc.), a voice control operation, or an expression control operation on at least one of the start time node and the end time node on the duration selection control, for selecting the start time and end time of the video clip.
- the confirmation operation for the selection result may include a gesture control operation (such as clicking, long pressing, double-clicking, etc.), a voice control operation or an expression control operation for triggering a confirmation control for capturing a video.
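Before the confirmation operation commits a selection, the chosen start and end nodes would typically be validated against the video's bounds. A hedged sketch, where the clamping rules and minimum clip length are assumptions rather than part of the disclosure:

```python
# Hypothetical sketch: clamp the selected start/end times to the video bounds
# and reject selections that are too short to form a clip.

def validate_selection(start, end, video_duration, min_length=1.0):
    """Return the (start, end) pair to capture, or None if the selection is
    shorter than `min_length` after clamping."""
    start = max(0.0, start)
    end = min(video_duration, end)
    if end - start < min_length:
        return None
    return (start, end)
```
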
- a drop-down button 409 may be displayed in the preset playback interface, and the drop-down button 409 may be used to display function buttons that are not currently displayed, such as a video capture button.
- the user can click the drop-down button 409 to make the electronic device 401 display a video capture button, and then click the video capture button to make the electronic device display the video capture interface shown in FIG. 7 .
- Fig. 7 shows a schematic diagram of interaction of video interception provided by an embodiment of the present disclosure.
- the video capture interface 701 displays a preview window 702 of the target video, a duration selection panel 703 , a confirmation icon 704 and a cancel icon 705 .
- the user can drag the start time node 706 on the duration selection panel 703 to select the video frame corresponding to the start time of the video segment.
- the video frame displayed in the preview window 702 changes with the time stamp corresponding to the start time node 706 .
- when the user confirms that the video interception is complete, they can click the confirmation icon 704 to make the electronic device use the selected video segment as the intercepted target video and display it in the preset playback interface shown in FIG. 4.
- the user can click the cancel icon 705 to make the electronic device return to the preset playback interface shown in FIG. 4 and keep displaying the target video as it was before entering the video interception interface 701.
- the audio sharing method may further include:
- material modification is performed on the visualized material according to a material modification method corresponding to the material modification operation.
- the presentation forms and material contents of various visual materials can be modified.
- the material modification method may include at least one of the following:
- modifying the material content of the visual material may include replacing the image in the visual material, changing the text in the visual material, adding new visual material (such as adding a new sticker, text) and deleting the existing visual material (such as deleting any existing images, stickers, text).
- the material modification operation may include a selection operation on an image to be replaced, an editing operation on a text to be changed, an addition operation on a visual material, and a deletion operation on a visual material.
- a text button 405 may be used to add new text
- the sticker button 406 can be used to add new stickers
- the special effect button 407 can be used to add special effects to the target video
- the filter button 408 can be used to add filters to the target video.
- Fig. 8 shows a schematic diagram of interaction of material modification provided by an embodiment of the present disclosure.
- the electronic device 801 can display a preset playback interface 802 in which a target video can be displayed.
- the target video can have a lyric sticker 803 , and the user can click the lyric sticker 803 to display a delete icon 804 .
- the user can click the delete icon 804 to delete the lyrics sticker 803 .
- Fig. 9 shows another schematic diagram of interaction of material modification provided by an embodiment of the present disclosure.
- the electronic device 901 may display a preset playback interface 902 , in which a target video may be displayed, and a text button 903 may also be displayed in the preset playback interface 902 .
- the user can click the text button 903 to make the electronic device 901 display the preset playback interface shown in FIG. 10 .
- Fig. 10 shows another schematic diagram of interaction of material modification provided by an embodiment of the present disclosure.
- the electronic device 1001 can display a preset playback interface 1002, and a target video can be displayed in the preset playback interface 1002; the target video can have a newly added text box 1003, and the user can edit the newly added text in the text box 1003.
- modifying the material style of the visualized material may include modifying a display form of the visualized material.
- the sticker can have different display forms, such as border, no border, wide border, narrow border, etc.
- the image can have different display forms, such as border, no border, wide border, narrow border, etc.
- lyrics stickers can have different display forms, such as different lyrics scrolling forms, different player shapes, and so on.
- the electronic device 801 may display a preset playback interface 802 , a target video may be displayed in the preset playback interface 802 , and the target video may have a lyrics sticker 803 .
- the user can long-press the lyrics sticker 803 to change its display form; each time the user long-presses the lyrics sticker 803, the display form changes according to the preset changing order of display forms.
- the electronic device 801 may display a preset playback interface 802 , a target video may be displayed in the preset playback interface 802 , and the target video may have a lyrics sticker 803 .
- the user can also click on the lyrics sticker 803 to cause it to display a refresh icon 806 .
- the user can click the refresh icon 806 to change the display form of the lyrics sticker 803 .
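The long-press or refresh-driven cycling through preset display forms can be sketched as a simple modular index; the form names are illustrative:

```python
# Hypothetical sketch: each long-press (or refresh tap) advances the sticker
# to the next display form in a preset order, wrapping around at the end.

DISPLAY_FORMS = ["no_border", "narrow_border", "wide_border"]

class LyricSticker:
    def __init__(self):
        self.form_index = 0  # start with the first preset form

    @property
    def display_form(self):
        return DISPLAY_FORMS[self.form_index]

    def on_long_press(self):
        """Advance to the next display form in the preset changing order."""
        self.form_index = (self.form_index + 1) % len(DISPLAY_FORMS)

sticker = LyricSticker()
sticker.on_long_press()  # no_border -> narrow_border
```
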
- the user can make a gesture operation of zooming in or zooming out on the visual material to be modified, so that the display size of the visual material changes according to the operation mode of the gesture operation.
- the user can drag the visual material to be moved, so that the display position of the visual material changes following the user's drag operation, and the visual material is finally displayed at the position where the user stops the drag operation.
- the display angle refers to a rotation angle of the visualized material.
- the electronic device 801 may display a preset playback interface 802 , a target video may be displayed in the preset playback interface 802 , and the target video may have a lyrics sticker 803 .
- the user can also click on the lyric sticker 803 to cause it to display a rotation icon 805.
- the user can click on the rotation icon 805 to change the rotation angle of the lyrics sticker 803 .
- the embodiment of the present disclosure also provides another audio sharing method.
- Fig. 11 shows a schematic flowchart of another audio sharing method provided by an embodiment of the present disclosure.
- the audio sharing method may include the following steps.
- the target audio is the published audio.
- a preset playback interface is displayed, and the target video with the target audio as the background music is displayed in the preset playback interface.
- the target video is used to share the target audio, and the target video includes the visualized material generated according to the target audio.
- S1110-S1120 are similar to S110 and S120 in the foregoing embodiment, and will not be repeated here.
- a third trigger operation may be input to the electronic device.
- the third triggering operation may be an operation for triggering the sharing of the target video. After the electronic device detects the third trigger operation on the target video, it can share the target video.
- the third trigger operation may include a gesture control operation (such as click, long press, double-click, etc.), voice control operation, or expression control operation on the target video.
- the third triggering operation may also include a gesture control operation (such as click, long press, double-click, etc.), a voice control operation, or an expression control operation on the sharing control of the target video.
- the sharing control of the target video may be a control for triggering the sharing of the target video.
- the control may be an object that can be triggered by a user, such as a button or an icon.
- the sharing target video may include at least one of the following:
- the first application program may be any type of application program.
- the first application program can be the short video application program to which the target interactive interface belongs, and sharing the target video can specifically be publishing the target video in that short video application program, so that the target video can be distributed to other users of the short video application or stored in the server of the short video application as a private video.
- a “publish” button 410 may be displayed in the preset playback interface.
- the user may click the “publish” button 410 to publish the target video as a daily video.
- the second application program may be any type of application program other than the first application program to which the target interactive interface belongs.
- the second application program may be a social application program other than the short video application program to which the target interactive interface belongs, and sharing the target video may specifically be posting the target video to a social platform of the social application program.
- the sharing of the target video can specifically be sending the target video to the chat interface between the user and at least one target user in the first application program, to the chat interface between the user and at least one target user in the second application program, or to the communication account of at least one target user through an instant messaging tool.
- the target video can be shared in various forms, so that the user can publish the target video as a normal work, so as to gain positive feedback such as watching and interacting with others.
- Fig. 12 shows a schematic diagram of a playback interface of a target video provided by an embodiment of the present disclosure.
- the electronic device 1201 displays a video playback interface, and the video playback interface may display the finally edited target video 1202 automatically generated based on the song "Hula Dance XXX" published by Little C.
- the audio sharing method may further include:
- the interactive information display control is superimposed on the target video, and the interactive information display control may be generated according to the interactive information.
- a user who watches the target video can publish interaction information for the target video in the video playback interface of the target video; after receiving the interaction information, the server can send it to the electronic device of the user who posted the target video, so that the electronic device can generate the interactive information display control of the target video according to the interaction information.
- the interactive information display control can display the interaction information for the target video, so that the user who posted the target video can view, within the target video, the interaction information of the users who watched and interacted with it.
- the interactive information display control can be used to interact with the information sender of the interactive information.
- the user posting the target video can perform interactions such as liking and commenting on the interactive information he is interested in and the information sender of the interactive information in the interactive information display control.
- the audio sharing method provided by the embodiment of the present disclosure can intelligently generate target videos in various presentation forms according to the target audio, and provides a simple, easy-to-operate video editing method that lowers the threshold for video production; even if the sharer's creative ability is insufficient, they can still produce high-quality videos, so that the produced videos achieve the expected sharing effect.
- the target audio in the target video can be used as a public expression for viewing and interaction, thereby improving the user experience.
- An embodiment of the present disclosure also provides an audio sharing device, which will be described below with reference to FIG. 13 .
- the audio sharing device may be an electronic device.
- the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and wearable devices, and stationary terminals such as digital TVs, desktop computers, and smart home devices.
- Fig. 13 shows a schematic structural diagram of an audio sharing device provided by an embodiment of the present disclosure.
- the audio sharing device 1300 may include a first display unit 1310 and a second display unit 1320 .
- the first display unit 1310 may be configured to display the target object in the target interaction interface, the target object includes the original video with the target audio to be shared as the background music and/or the share control of the target audio, and the target audio is the published audio.
- the second display unit 1320 can be configured to display a preset playback interface when the first trigger operation on the target object is detected.
- the target video with the target audio as the background music is displayed in the preset playback interface, and the target video is used for sharing the target audio; the target video includes visual material generated according to the target audio.
- in this way, when the user triggers the audio sharing of the target audio, the preset playback interface can be displayed directly, and the target video with the target audio as background music, automatically generated according to the target audio, can be displayed in the preset playback interface; using the target video to share the target audio can not only make the shared content rich and meet users' individual needs, but also lower the threshold of video production, so that users can conveniently share audio in video form without shooting or uploading videos.
- the visual material may include an image and/or text generated according to the associated information of the target audio.
- the visual material may include a first visual material
- the first visual material may include an image generated according to associated information
- the associated information may include an associated image of the target audio
- the associated image may include at least one of an audio cover and a publisher's avatar.
- the first visual material may include at least one of the following:
- the background material includes an image generated according to the image characteristics of the associated image, and the background material is displayed in the first screen area of the target video;
- the foreground material includes an associated image or an image generated according to the image characteristics of the associated image, and the foreground material is displayed in the second screen area of the target video;
- the second picture area is included in the first picture area, and the first picture area is a background display area of the target video.
- the visual material may include a second visual material, and the second visual material may include a background image selected in the material library and matched with associated information, and the associated information may include an audio tag of the target audio.
- the visualized material may include a third visualized material, and the third visualized material may have an animation effect generated according to associated information, and the associated information may include musical characteristics of the target audio.
- the visual material may include a fourth visual material, and the fourth visual material may include associated information, and the associated information may include audio text of the target audio.
- the audio text may include at least one of first text information associated with the target audio and second text information obtained by performing speech recognition on the target audio.
- the visualized material may include a plurality of sequence images sequentially displayed at preset time intervals.
- the preset time interval may be determined according to the audio duration of the target audio and the number of sequence images, or may be determined according to the audio rhythm of the target audio.
- the preset playback interface can also be used to edit the target video.
- the audio sharing device 1300 may also include a first processing unit, which may be configured to, after the preset playback interface is displayed, when a video capture operation on the target video is detected, use the video segment selected by the video capture operation in the target video as the captured target video.
- the preset playback interface can also be used to edit the target video.
- the audio sharing device 1300 may further include a second processing unit, which may be configured to, after the preset playback interface is displayed, when a material modification operation on the visualized material is detected, modify the visualized material according to the material modification method corresponding to the material modification operation.
- the material modification method may include at least one of the following: modifying the material content of the visualized material, modifying the material style of the visualized material, modifying the display size of the visualized material, modifying the display position of the visualized material, and modifying the display angle of the visualized material.
- the target interaction interface may include an audio presentation interface
- the target object may include a share control of the target audio
- the audio sharing device 1300 may also include a third display unit, which may be configured to, before the target object is displayed, display in the video playback interface the original video with the target audio as the background music and the audio display control of the target audio.
- the first display unit may be further configured to display an audio display interface when a second trigger operation on the audio display control is detected, and the audio display interface displays a sharing control.
- the target interaction interface may include a video playback interface
- the target object may include an original video with target audio as background music
- the first trigger operation may include a trigger operation on the original video
- the target interaction interface may include a video playback interface
- the target object may include an original video
- the original video may include an audio control of the target audio
- the first trigger operation may include a trigger operation on the audio control
- the target interaction interface may include a video playback interface
- the target object may include share controls for the original video and target audio
- the first trigger operation may include a trigger operation on the share control
- the audio sharing device 1300 may also include a video sharing unit, which may be configured to share the target video when a third trigger operation on the target video is detected after the preset playback interface is displayed.
- sharing the target video may include at least one of the following: publishing the target video in the first application program to which the target interactive interface belongs, publishing the target video in a second application program other than the first application program, and sending the target video to at least one target user Send target video.
- the audio sharing device 1300 shown in FIG. 13 can execute each step in the method embodiments shown in FIG. 1 to FIG. 12 and realize each process and effect therein, which will not be repeated here.
- An embodiment of the present disclosure also provides an electronic device, which may include a processor and a memory, and the memory may be used to store executable instructions.
- the processor may be configured to read executable instructions from the memory, and execute the executable instructions to implement the audio sharing method in the above-mentioned embodiments.
- Fig. 14 shows a schematic structural diagram of an electronic device 1400 suitable for implementing an embodiment of the present disclosure.
- the electronic device 1400 in the embodiment of the present disclosure may be an electronic device.
- the electronic device may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and wearable devices, and stationary terminals such as digital TVs, desktop computers, and smart home devices.
- the electronic device 1400 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 1401, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1402 or a program loaded from a storage device 1408 into a random access memory (RAM) 1403; the RAM 1403 also stores various programs and data necessary for the operation of the electronic device 1400.
- the processing device 1401, ROM 1402, and RAM 1403 are connected to each other through a bus 1404.
- An input/output (I/O) interface 1405 is also connected to the bus 1404.
- the following devices can be connected to the I/O interface 1405: an input device 1406 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 1407 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage device 1408 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 1409.
- the communication device 1409 may allow the electronic device 1400 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 14 shows an electronic device 1400 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided.
- An embodiment of the present disclosure also provides a computer-readable storage medium, the storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the audio sharing method in the above-mentioned embodiment.
- embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from a network via communication means 1409, or from storage means 1408, or from ROM 1402.
- When the computer program is executed by the processing device 1401, the above-described functions defined in the audio sharing method of the embodiments of the present disclosure are performed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the above two.
- a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
- clients and servers can communicate using any currently known or future developed network protocol, such as HTTP, and can be interconnected with any form or medium of digital data communication (e.g., a communication network).
- examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
- the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is made to execute:
- display a target object in a target interactive interface, where the target object includes an original video with target audio to be shared as background music and/or a sharing control of the target audio, and the target audio is published audio; and when a first trigger operation on the target object is detected, display a preset playback interface in which a target video with the target audio as background music is displayed.
- the target video is used to share the target audio, and the target video includes visual materials generated according to the target audio.
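The flow described above can be loosely sketched in code. This is an illustrative reconstruction only, not the patented implementation; all names (`TargetAudio`, `TargetVideo`, `build_target_video`, `on_first_trigger`) and the placeholder material strings are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetAudio:
    """A published audio track to be shared (hypothetical model)."""
    title: str
    publisher: str
    duration_s: float

@dataclass
class TargetVideo:
    """A target video that uses the target audio as background music."""
    audio: TargetAudio
    visual_materials: List[str] = field(default_factory=list)

def build_target_video(audio: TargetAudio) -> TargetVideo:
    """Generate visual material from the audio's associated information
    (cover image, publisher, audio text), then wrap it, together with
    the audio, into a target video used to share that audio."""
    materials = [
        f"background:{audio.title}-cover-blurred",   # first picture area
        f"foreground:{audio.title}-cover",           # second picture area
        f"text:{audio.title} by {audio.publisher}",  # audio text overlay
    ]
    return TargetVideo(audio=audio, visual_materials=materials)

def on_first_trigger(audio: TargetAudio) -> TargetVideo:
    """Handler for the 'first trigger operation': build the target video
    that the preset playback interface would then display."""
    return build_target_video(audio)

video = on_first_trigger(TargetAudio("Song A", "user123", 30.0))
assert len(video.visual_materials) == 3
```

In this sketch the "visual material" is represented by strings; a real embodiment would render image and text layers over the playing audio.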
- computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection can be made to an external computer (for example, through the Internet using an Internet service provider).
- each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
- the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
- for example, and without limitation, exemplary types of hardware logic components that may be used include: Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chips (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- more specific examples of a machine-readable storage medium include an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- An embodiment of the present disclosure further provides a computer program, including instructions, and when the instructions are executed by a processor, the audio sharing method in the above-mentioned embodiments is implemented.
- An embodiment of the present disclosure further provides a computer program product, where the computer program product includes a computer program or an instruction, and when the computer program or instruction is executed by a processor, the audio sharing method in the foregoing embodiment is implemented.
Abstract
Description
Claims (26)
- An audio sharing method, comprising: displaying a target object in a target interactive interface, the target object comprising an original video with target audio to be shared as background music and/or a sharing control of the target audio, the target audio being published audio; and when a first trigger operation on the target object is detected, displaying a preset playback interface, in which a target video with the target audio as background music is displayed, the target video being used to share the target audio and comprising visual material generated according to the target audio.
- The method according to claim 1, wherein the visual material comprises an image and/or text generated according to associated information of the target audio.
- The method according to claim 2, wherein the visual material comprises first visual material, the first visual material comprises an image generated according to the associated information, and the associated information comprises an associated image of the target audio.
- The method according to claim 3, wherein the associated image comprises at least one of an audio cover and a publisher avatar.
- The method according to claim 3 or 4, wherein the first visual material comprises at least one of the following: background material, the background material comprising an image generated according to image features of the associated image and being displayed in a first picture area of the target video; and foreground material, the foreground material comprising the associated image or an image generated according to image features of the associated image and being displayed in a second picture area of the target video; wherein at least part of the second picture area is contained within the first picture area, and the first picture area is a background display area of the target video.
- The method according to claim 2, wherein the visual material comprises second visual material, the second visual material comprises a background image selected from a material library to match the associated information, and the associated information comprises an audio tag of the target audio.
- The method according to claim 2, wherein the visual material comprises third visual material, the third visual material has an animation effect generated according to the associated information, and the associated information comprises music-theory characteristics of the target audio.
- The method according to claim 2, wherein the visual material comprises fourth visual material, the fourth visual material comprises the associated information, and the associated information comprises audio text of the target audio.
- The method according to claim 8, wherein the audio text comprises at least one of first text information associated with the target audio and second text information obtained by performing speech recognition on the target audio.
- The method according to any one of claims 1-9, wherein the visual material comprises a plurality of sequence images displayed in sequence at a preset time interval.
- The method according to claim 10, wherein the preset time interval is determined according to the audio duration of the target audio and the number of the sequence images, or according to the audio rhythm of the target audio.
- The method according to any one of claims 1-11, wherein the preset playback interface is further used to edit the target video; and after displaying the preset playback interface, the method further comprises: when a video clipping operation on the target video is detected, taking a video segment selected in the target video by the video clipping operation as the clipped target video.
- The method according to any one of claims 1-11, wherein the preset playback interface is further used to edit the target video; and after displaying the preset playback interface, the method further comprises: when a material modification operation on the visual material is detected, modifying the visual material in the material modification manner corresponding to the material modification operation.
- The method according to claim 13, wherein the material modification manner comprises at least one of the following: modifying the material content of the visual material, modifying the material style of the visual material, modifying the display size of the visual material, modifying the display position of the visual material, and modifying the display angle of the visual material.
- The method according to any one of claims 1-14, wherein the target interactive interface comprises an audio display interface, and the target object comprises the sharing control of the target audio; before displaying the target object in the target interactive interface, the method further comprises: displaying, in a video playback interface, the original video with the target audio as background music and an audio display control of the target audio; and displaying the target object in the target interactive interface comprises: when a second trigger operation on the audio display control is detected, displaying the audio display interface, on which the sharing control is displayed.
- The method according to any one of claims 1-15, wherein: the target interactive interface comprises a video playback interface, the target object comprises the original video with the target audio as background music, and the first trigger operation comprises a trigger operation on the original video; or the target interactive interface comprises a video playback interface, the target object comprises the original video, the original video comprises an audio control of the target audio, and the first trigger operation comprises a trigger operation on the audio control; or the target interactive interface comprises a video playback interface, the target object comprises the original video and the sharing control of the target audio, and the first trigger operation comprises a trigger operation on the sharing control.
- The method according to any one of claims 1-16, wherein after displaying the preset playback interface, the method further comprises: when a third trigger operation on the target video is detected, sharing the target video.
- The method according to claim 17, wherein sharing the target video comprises at least one of the following: publishing the target video in a first application to which the target interactive interface belongs, publishing the target video in a second application other than the first application, and sending the target video to at least one target user.
- An audio sharing apparatus, comprising: a first display unit configured to display a target object in a target interactive interface, the target object comprising an original video with target audio to be shared as background music and/or a sharing control of the target audio, the target audio being published audio; and a second display unit configured to, when a first trigger operation on the target object is detected, display a preset playback interface in which a target video with the target audio as background music is displayed, the target video being used to share the target audio and comprising visual material generated according to the target audio.
- The audio sharing apparatus according to claim 19, wherein the preset playback interface is further used to edit the target video; and the audio sharing apparatus further comprises a first processing unit configured to, after the preset playback interface is displayed, when a video clipping operation on the target video is detected, take a video segment selected in the target video by the video clipping operation as the clipped target video.
- The audio sharing apparatus according to claim 19, wherein the preset playback interface is further used to edit the target video; and the audio sharing apparatus further comprises a second processing unit configured to, after the preset playback interface is displayed, when a material modification operation on the visual material is detected, modify the visual material in the material modification manner corresponding to the material modification operation.
- The audio sharing apparatus according to any one of claims 19-21, further comprising a video sharing unit configured to, after the preset playback interface is displayed, when a third trigger operation on the target video is detected, share the target video.
- An electronic device, comprising: a processor; and a memory for storing executable instructions; wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the audio sharing method according to any one of claims 1-18.
- A computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the audio sharing method according to any one of claims 1-18.
- A computer program comprising instructions which, when executed by a processor, implement the audio sharing method according to any one of claims 1-18.
- A computer program product comprising a computer program or instructions which, when executed by a processor, implement the audio sharing method according to any one of claims 1-18.
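Claim 11 above determines the preset time interval for the sequence images either evenly, from the audio duration and the number of images, or from the audio rhythm. A minimal numeric sketch of both options (function names are hypothetical, not from the specification):

```python
from typing import List

def preset_interval(audio_duration_s: float, num_images: int) -> float:
    """Even spacing: divide the audio duration evenly over the sequence images."""
    if num_images <= 0:
        raise ValueError("need at least one sequence image")
    return audio_duration_s / num_images

def rhythm_intervals(beat_times_s: List[float]) -> List[float]:
    """Rhythm-based spacing: display intervals follow the gaps between
    detected beat times of the target audio."""
    return [b - a for a, b in zip(beat_times_s, beat_times_s[1:])]

# A 30-second track shared with 10 sequence images: one image every 3 seconds.
assert preset_interval(30.0, 10) == 3.0
```

With the rhythm-based option, each image would stay on screen until the next beat, so faster passages of the audio produce faster image changes.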
Priority Applications (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2023574227A JP7732001B2 (ja) | 2021-06-02 | 2022-05-30 | Audio sharing method, apparatus, device and medium |
| BR112023025320A BR112023025320A2 (pt) | 2021-06-02 | 2022-05-30 | Audio sharing method and apparatus, device and medium |
| EP22815203.9A EP4336846A4 (en) | 2021-06-02 | 2022-05-30 | AUDIO SHARING METHOD AND APPARATUS, DEVICE, AND MEDIUM |
| US18/499,903 US12271578B2 (en) | 2021-06-02 | 2023-11-01 | Audio sharing method and apparatus, device and medium |
| US19/086,594 US20250217017A1 (en) | 2021-06-02 | 2025-03-21 | Audio sharing method and apparatus, device and medium |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110615705.4A CN113365134B (zh) | 2021-06-02 | 2021-06-02 | Audio sharing method, apparatus, device and medium |
| CN202110615705.4 | 2021-06-02 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US18/499,903 Continuation US12271578B2 (en) | 2021-06-02 | 2023-11-01 | Audio sharing method and apparatus, device and medium |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2022253157A1 true WO2022253157A1 (zh) | 2022-12-08 |
Family
ID=77531662
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2022/095845 Ceased WO2022253157A1 (zh) | 2021-06-02 | 2022-05-30 | 音频分享方法、装置、设备及介质 |
Country Status (6)
| Country | Link |
|---|---|
| US (2) | US12271578B2 (zh) |
| EP (1) | EP4336846A4 (zh) |
| JP (1) | JP7732001B2 (zh) |
| CN (1) | CN113365134B (zh) |
| BR (1) | BR112023025320A2 (zh) |
| WO (1) | WO2022253157A1 (zh) |
Families Citing this family (14)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112862927B (zh) * | 2021-01-07 | 2023-07-25 | Beijing Zitiao Network Technology Co., Ltd. | Method, apparatus, device and medium for publishing a video |
| CN113365134B (zh) * | 2021-06-02 | 2022-11-01 | Beijing Zitiao Network Technology Co., Ltd. | Audio sharing method, apparatus, device and medium |
| CN113901776B (zh) * | 2021-10-13 | 2025-05-27 | Hangzhou NetEase Cloud Music Technology Co., Ltd. | Audio interaction method, medium, apparatus and computing device |
| USD1042490S1 (en) * | 2021-10-22 | 2024-09-17 | Beijing Zitiao Network Technology Co., Ltd. | Display screen or portion thereof with a graphical user interface |
| CN113885830B (zh) * | 2021-10-25 | 2024-07-02 | Beijing Zitiao Network Technology Co., Ltd. | Sound effect display method and terminal device |
| CN114154003B (zh) * | 2021-11-11 | 2024-10-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Picture acquisition method and apparatus, and electronic device |
| CN114329223A (zh) * | 2022-01-04 | 2022-04-12 | Beijing ByteDance Network Technology Co., Ltd. | Media content search method, apparatus, device and medium |
| CN115038020B (zh) * | 2022-06-09 | 2026-02-24 | Hangzhou NetEase Cloud Music Technology Co., Ltd. | Audio playback method and apparatus, electronic device and storage medium |
| CN119537618B (zh) * | 2022-06-28 | 2025-10-24 | Beijing Zitiao Network Technology Co., Ltd. | Media content display method, apparatus, device, storage medium and product |
| CN115103219A (zh) * | 2022-07-01 | 2022-09-23 | Douyin Vision (Beijing) Co., Ltd. | Audio publishing method, apparatus and computer-readable storage medium |
| CN115174536B (zh) * | 2022-07-01 | 2025-09-19 | Douyin Vision (Beijing) Co., Ltd. | Audio playback method and apparatus, and non-volatile computer-readable storage medium |
| CN115103232B (zh) * | 2022-07-07 | 2023-12-08 | Beijing Zitiao Network Technology Co., Ltd. | Video playback method, apparatus, device and storage medium |
| CN115643244B (zh) * | 2022-10-21 | 2025-12-23 | Hangzhou NetEase Cloud Music Technology Co., Ltd. | Multimedia data sharing method, apparatus, device and medium |
| CN118573917B (zh) * | 2024-08-01 | 2025-02-25 | Beijing Dajia Internet Information Technology Co., Ltd. | Item sharing method and apparatus, electronic device and computer-readable storage medium |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103945008A (zh) * | 2014-05-08 | 2014-07-23 | Baidu Online Network Technology (Beijing) Co., Ltd. | Network information sharing method and apparatus |
| CN109144346A (zh) * | 2018-09-09 | 2019-01-04 | Guangzhou Kugou Computer Technology Co., Ltd. | Song sharing method, apparatus and storage medium |
| CN109327608A (zh) * | 2018-09-12 | 2019-02-12 | Guangzhou Kugou Computer Technology Co., Ltd. | Song sharing method, terminal, server and system |
| WO2019114516A1 (zh) * | 2017-12-15 | 2019-06-20 | Tencent Technology (Shenzhen) Co., Ltd. | Media information display method and apparatus, storage medium, and electronic apparatus |
| CN112069360A (zh) * | 2020-09-15 | 2020-12-11 | Beijing Zitiao Network Technology Co., Ltd. | Music poster generation method and apparatus, electronic device and medium |
| CN112579826A (zh) * | 2020-12-07 | 2021-03-30 | Beijing ByteDance Network Technology Co., Ltd. | Video display and processing method, apparatus, system, device and medium |
| CN113365134A (zh) * | 2021-06-02 | 2021-09-07 | Beijing Zitiao Network Technology Co., Ltd. | Audio sharing method, apparatus, device and medium |
Family Cites Families (46)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030236836A1 (en) * | 2002-03-21 | 2003-12-25 | Borthwick Ernest Mark | System and method for the design and sharing of rich media productions via a computer network |
| US8364633B2 (en) * | 2005-01-12 | 2013-01-29 | Wandisco, Inc. | Distributed computing systems and system components thereof |
| US8347213B2 (en) * | 2007-03-02 | 2013-01-01 | Animoto, Inc. | Automatically generating audiovisual works |
| US20140328570A1 (en) * | 2013-01-09 | 2014-11-06 | Sri International | Identifying, describing, and sharing salient events in images and videos |
| CN103793446B (zh) * | 2012-10-29 | 2019-03-01 | Tang Xiaoou | Music video generation method and system |
| CN103885962A (zh) * | 2012-12-20 | 2014-06-25 | Tencent Technology (Shenzhen) Co., Ltd. | Picture processing method and server |
| US10481959B2 (en) * | 2014-07-03 | 2019-11-19 | Spotify Ab | Method and system for the identification of music or other audio metadata played on an iOS device |
| US20180374461A1 (en) * | 2014-08-22 | 2018-12-27 | Zya, Inc, | System and method for automatically generating media |
| US9632664B2 (en) * | 2015-03-08 | 2017-04-25 | Apple Inc. | Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback |
| US9583142B1 (en) * | 2015-07-10 | 2017-02-28 | Musically Inc. | Social media platform for creating and sharing videos |
| CN105159639B (zh) * | 2015-08-21 | 2018-07-27 | Xiaomi Technology Co., Ltd. | Audio cover display method and apparatus |
| US20190026366A1 (en) * | 2016-01-07 | 2019-01-24 | Mfu Co., Inc | Method and device for playing video by each segment of music |
| KR102412283B1 (ko) * | 2016-02-17 | 2022-06-23 | 삼성전자 주식회사 | 전자 장치 및 전자 장치의 영상 공유 제어 방법 |
| US10474422B1 (en) * | 2016-04-18 | 2019-11-12 | Look Sharp Labs, Inc. | Music-based social networking multi-media application and related methods |
| GB2557970B (en) * | 2016-12-20 | 2020-12-09 | Mashtraxx Ltd | Content tracking system and method |
| US11038932B2 (en) * | 2016-12-31 | 2021-06-15 | Turner Broadcasting System, Inc. | System for establishing a shared media session for one or more client devices |
| US10835802B1 (en) * | 2017-02-07 | 2020-11-17 | The Catherine Mayer Foundation | Physiological response management using computer-implemented activities |
| US20190020699A1 (en) * | 2017-07-16 | 2019-01-17 | Tsunami VR, Inc. | Systems and methods for sharing of audio, video and other media in a collaborative virtual environment |
| US11893999B1 (en) * | 2018-05-13 | 2024-02-06 | Amazon Technologies, Inc. | Speech based user recognition |
| CN108900902B (zh) * | 2018-07-06 | 2020-06-09 | Beijing Microlive Vision Technology Co., Ltd. | Method, apparatus, terminal device and storage medium for determining video background music |
| CN108668164A (zh) * | 2018-07-12 | 2018-10-16 | Beijing Microlive Vision Technology Co., Ltd. | Method, apparatus, terminal device and medium for selecting background music and shooting a video |
| CN108600825B (zh) * | 2018-07-12 | 2019-10-25 | Beijing Microlive Vision Technology Co., Ltd. | Method, apparatus, terminal device and medium for selecting background music and shooting a video |
| CN109451343A (zh) * | 2018-11-20 | 2019-03-08 | Guangzhou Kugou Computer Technology Co., Ltd. | Video sharing method, apparatus, terminal and storage medium |
| CN109615682A (zh) * | 2018-12-07 | 2019-04-12 | Beijing Microlive Vision Technology Co., Ltd. | Animation generation method, apparatus, electronic device and computer-readable storage medium |
| US11113462B2 (en) * | 2018-12-19 | 2021-09-07 | Rxprism Health Systems Private Ltd | System and method for creating and sharing interactive content rapidly anywhere and anytime |
| SG11202106395TA (en) * | 2018-12-19 | 2021-07-29 | RxPrism Health Systems Pvt Ltd | A system and a method for creating and sharing interactive content rapidly anywhere and anytime |
| US11812102B2 (en) * | 2019-01-04 | 2023-11-07 | Gracenote, Inc. | Generation of media station previews using a reference database |
| US11854538B1 (en) * | 2019-02-15 | 2023-12-26 | Amazon Technologies, Inc. | Sentiment detection in audio data |
| EP3959896A1 (en) * | 2019-04-22 | 2022-03-02 | Soclip! | Automated audio-video content generation |
| DK201970533A1 (en) * | 2019-05-31 | 2021-02-15 | Apple Inc | Methods and user interfaces for sharing audio |
| CN110233976B (zh) * | 2019-06-21 | 2022-09-09 | Guangzhou Kugou Computer Technology Co., Ltd. | Video synthesis method and apparatus |
| CN112449231B (zh) * | 2019-08-30 | 2023-02-03 | Tencent Technology (Shenzhen) Co., Ltd. | Multimedia file material processing method and apparatus, electronic device and storage medium |
| CN112822563A (zh) * | 2019-11-15 | 2021-05-18 | Beijing ByteDance Network Technology Co., Ltd. | Method and apparatus for generating a video, electronic device and computer-readable medium |
| CN110868633A (zh) * | 2019-11-27 | 2020-03-06 | Vivo Mobile Communication Co., Ltd. | Video processing method and electronic device |
| CN110798737A (zh) | 2019-11-29 | 2020-02-14 | Beijing Dajia Internet Information Technology Co., Ltd. | Video and audio synthesis method, terminal and storage medium |
| US11763806B1 (en) * | 2020-06-25 | 2023-09-19 | Amazon Technologies, Inc. | Speaker recognition adaptation |
| CN111935537A (zh) * | 2020-06-30 | 2020-11-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Music short video generation method and apparatus, electronic device and storage medium |
| CN111970571B (zh) * | 2020-08-24 | 2022-07-26 | Beijing ByteDance Network Technology Co., Ltd. | Video production method, apparatus, device and storage medium |
| US12273568B2 (en) * | 2020-08-27 | 2025-04-08 | Warner Bros. Entertainment Inc. | Social video platform for generating and experiencing content |
| US11372524B1 (en) * | 2020-09-02 | 2022-06-28 | Amazon Technologies, Inc. | Multimedia communications with synchronized graphical user interfaces |
| US20220114210A1 (en) * | 2020-09-08 | 2022-04-14 | Sbl Venture Capital, Llc | Social media video sharing and cyberpersonality building system |
| CN112188266A (zh) * | 2020-09-24 | 2021-01-05 | Beijing Dajia Internet Information Technology Co., Ltd. | Video generation method and apparatus, and electronic device |
| US11647058B2 (en) * | 2021-08-20 | 2023-05-09 | Avaya Management L.P. | Screen, video, audio, and text sharing in multiparty video conferences |
| US11762052B1 (en) * | 2021-09-15 | 2023-09-19 | Amazon Technologies, Inc. | Sound source localization |
| US12386889B2 (en) * | 2022-02-23 | 2025-08-12 | International Business Machines Corporation | Generating personalized digital thumbnails |
| US12340563B2 (en) * | 2022-05-11 | 2025-06-24 | Adobe Inc. | Self-supervised audio-visual learning for correlating music and video |
- 2021
- 2021-06-02 CN CN202110615705.4A patent/CN113365134B/zh active Active
- 2022
- 2022-05-30 BR BR112023025320A patent/BR112023025320A2/pt unknown
- 2022-05-30 EP EP22815203.9A patent/EP4336846A4/en active Pending
- 2022-05-30 JP JP2023574227A patent/JP7732001B2/ja active Active
- 2022-05-30 WO PCT/CN2022/095845 patent/WO2022253157A1/zh not_active Ceased
- 2023
- 2023-11-01 US US18/499,903 patent/US12271578B2/en active Active
- 2025
- 2025-03-21 US US19/086,594 patent/US20250217017A1/en active Pending
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103945008A (zh) * | 2014-05-08 | 2014-07-23 | Baidu Online Network Technology (Beijing) Co., Ltd. | Network information sharing method and apparatus |
| WO2019114516A1 (zh) * | 2017-12-15 | 2019-06-20 | Tencent Technology (Shenzhen) Co., Ltd. | Media information display method and apparatus, storage medium, and electronic apparatus |
| CN109144346A (zh) * | 2018-09-09 | 2019-01-04 | Guangzhou Kugou Computer Technology Co., Ltd. | Song sharing method, apparatus and storage medium |
| CN109327608A (zh) * | 2018-09-12 | 2019-02-12 | Guangzhou Kugou Computer Technology Co., Ltd. | Song sharing method, terminal, server and system |
| CN112069360A (zh) * | 2020-09-15 | 2020-12-11 | Beijing Zitiao Network Technology Co., Ltd. | Music poster generation method and apparatus, electronic device and medium |
| CN112579826A (zh) * | 2020-12-07 | 2021-03-30 | Beijing ByteDance Network Technology Co., Ltd. | Video display and processing method, apparatus, system, device and medium |
| CN113365134A (zh) * | 2021-06-02 | 2021-09-07 | Beijing Zitiao Network Technology Co., Ltd. | Audio sharing method, apparatus, device and medium |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP4336846A4 * |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2024523812A (ja) | 2024-07-02 |
| US20240061560A1 (en) | 2024-02-22 |
| US12271578B2 (en) | 2025-04-08 |
| CN113365134A (zh) | 2021-09-07 |
| US20250217017A1 (en) | 2025-07-03 |
| EP4336846A4 (en) | 2024-10-30 |
| BR112023025320A2 (pt) | 2024-02-27 |
| JP7732001B2 (ja) | 2025-09-01 |
| CN113365134B (zh) | 2022-11-01 |
| EP4336846A1 (en) | 2024-03-13 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN113365134B (zh) | Audio sharing method, apparatus, device and medium | |
| US12374012B2 (en) | Video sharing method and apparatus, device, and medium | |
| JP7572108B2 (ja) | Minutes interaction method, apparatus, device and medium | |
| JP7711230B2 (ja) | Display method, apparatus, device and storage medium | |
| US12177512B2 (en) | Video processing method and apparatus, electronic device, and storage medium | |
| CN113852767B (zh) | Video editing method, apparatus, device and medium | |
| KR20220103110A (ko) | Video generation apparatus and method, electronic device, and computer-readable medium | |
| US12112772B2 (en) | Method and apparatus for video production, device and storage medium | |
| WO2020207106A1 (zh) | Information display method, apparatus, device and storage medium for followed users | |
| WO2023134419A1 (zh) | Information interaction method, apparatus, device and storage medium | |
| WO2021057740A1 (zh) | Video generation method and apparatus, electronic device and computer-readable medium | |
| CN115379136A (zh) | Special effect prop processing method and apparatus, electronic device and storage medium | |
| CN115981769A (zh) | Page display method, apparatus, device, computer-readable storage medium and product | |
| WO2022252916A1 (zh) | Method, apparatus, device and medium for generating a special effect configuration file | |
| CN110312162A (zh) | Selected clip processing method and apparatus, electronic device and readable medium | |
| CN113553466A (zh) | Page display method, apparatus, medium and computing device | |
| US20200413003A1 (en) | Method and device for processing multimedia information, electronic equipment and computer-readable storage medium | |
| WO2023174073A1 (zh) | Video generation method, apparatus, device, storage medium and program product | |
| US20240329812A1 (en) | Method, apparatus, device and storage medium for page processing | |
| WO2024007834A1 (zh) | Video playback method, apparatus, device and storage medium | |
| CN114063863A (zh) | Video processing method and apparatus, and electronic device | |
| CN116088973A (zh) | Information interaction method, apparatus, device and medium based on multimedia resources | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22815203 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2023574227 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2022815203 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 202327083259 Country of ref document: IN |
|
| ENP | Entry into the national phase |
Ref document number: 2022815203 Country of ref document: EP Effective date: 20231205 |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112023025320 Country of ref document: BR |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 112023025320 Country of ref document: BR Kind code of ref document: A2 Effective date: 20231201 |