WO2021031701A1 - Display control method and terminal device - Google Patents
Display control method and terminal device
- Publication number: WO2021031701A1 (application PCT/CN2020/099039)
- Authority: WIPO (PCT)
- Prior art keywords
- control
- input
- target
- scene information
- terminal device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G06F3/0481 — Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482 — Interaction with lists of selectable items, e.g. menus
- G06F3/0488 — Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F9/451 — Execution arrangements for user interfaces
- H04L51/046 — Interoperability with other network applications or services
- H04L51/10 — Multimedia information
- H04L51/52 — User-to-user messaging in packet-switching networks for supporting social networking services
Definitions
- the embodiments of the present disclosure relate to the field of communication technologies, and in particular, to a display control method and terminal equipment.
- the user can operate the emoticon package control in the communication window to trigger the terminal device to display one or more emoticon package lists on the screen, such as a list of favorited emoticon packages, a list of locally saved emoticon packages, and a list of emoticon packages recommended by the network-side device.
- the user can select the desired emoticon package from these lists and trigger the terminal device to send it in the communication window.
- because one or more emoticon package lists generally include a large number of emoticon packages arranged in no particular order, the user may need to browse through these lists several times to find the required emoticon package. That is, the process of finding the emoticon package the user needs is cumbersome and time-consuming.
- the embodiments of the present disclosure provide a display control method and a terminal device to solve the problem that the process of searching for the emoticon packages required by users is cumbersome and time-consuming.
- an embodiment of the present disclosure provides a display control method. The method includes: receiving a first input from a user while a first communication window is displayed on a screen of a terminal device; and, in response to the first input, displaying at least one first control on the screen. The at least one first control is associated with the display content of the first communication window, each first control in the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
- the embodiments of the present disclosure also provide a terminal device including a receiving module and a display module. The receiving module is configured to receive a first input from a user while the first communication window is displayed on the screen of the terminal device. The display module is configured to display at least one first control on the screen in response to the first input received by the receiving module. The at least one first control is associated with the display content of the first communication window, each first control in the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
- embodiments of the present disclosure provide a terminal device, including a processor, a memory, and a computer program stored on the memory and capable of running on the processor.
- when the computer program is executed by the processor, the steps of the display control method described in the first aspect are implemented.
- the embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored.
- when the computer program is executed by a processor, the steps of the display control method described in the first aspect are implemented.
- the user's first input may be received, and in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen.
- each first control in the at least one first control corresponds to at least one object
- the display content of the first communication window includes at least one of a communication title, a communication content, and a communication object.
- the terminal device can display a small number of first controls associated with the current display content, instead of directly displaying a large, disordered set of emoticon packages and other objects, so that the user can quickly and conveniently trigger the terminal device to obtain at least one object corresponding to a first target control, such as an emoticon package that meets the user's needs. This avoids repeatedly searching through a large, disordered collection of emoticon packages; that is, the step of finding an emoticon package is simplified.
- FIG. 1 is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure
- FIG. 2 is a schematic flowchart of a display control method provided by an embodiment of the disclosure
- FIG. 3 is one of the schematic diagrams of displaying content of a terminal device provided by an embodiment of the disclosure.
- FIG. 4 is the second schematic diagram of displaying content of a terminal device according to an embodiment of the disclosure.
- FIG. 5 is a schematic structural diagram of a possible terminal device provided by an embodiment of the disclosure.
- FIG. 6 is a schematic diagram of the hardware structure of a terminal device provided by an embodiment of the disclosure.
- A/B can mean A or B
- the "and/or" in this document merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" can mean: A exists alone, both A and B exist, or B exists alone.
- "Multiple" means two or more.
- the terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects, rather than to describe a specific order of objects.
- for example, the first input and the second input are used to distinguish different inputs, rather than to describe a specific order of inputs.
- the display control method provided by the embodiment of the present disclosure can receive a first input from the user while the first communication window is displayed on the screen of the terminal device, and, in response to the first input, display on the screen at least one first control associated with the display content of the first communication window. Each first control in the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
- in this way, the terminal device can display a small number of first controls associated with the current display content, instead of directly displaying a large, disordered set of emoticon packages and other objects, so that the user can quickly and conveniently trigger the terminal device to obtain at least one object corresponding to a first target control, such as an emoticon package that meets the user's needs. This avoids repeatedly searching through a large, disordered collection of emoticon packages; that is, the step of finding an emoticon package is simplified.
- the terminal device in the embodiment of the present disclosure may be a mobile terminal device or a non-mobile terminal device.
- the mobile terminal device can be a mobile phone, tablet computer, notebook computer, handheld computer, in-vehicle terminal, wearable device, ultra-mobile personal computer (UMPC), netbook, or personal digital assistant (PDA), etc.
- the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, or a self-service machine, etc.; the embodiments of the present disclosure do not specifically limit it.
- the execution subject may be the terminal device, the central processing unit (CPU) of the terminal device, or a control module in the terminal device for performing the display control method.
- the following takes the terminal device executing the display control method as an example to illustrate the display control method provided by the embodiments of the present disclosure.
- the terminal device in the embodiment of the present disclosure may be a terminal device with an operating system.
- the operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
- the following takes the Android operating system as an example to introduce the software environment to which the display control method provided in the embodiments of the present disclosure is applied.
- FIG. 1 it is a schematic structural diagram of a possible Android operating system provided by an embodiment of the present disclosure.
- the architecture of the Android operating system includes 4 layers, namely: application layer, application framework layer, system runtime library layer, and kernel layer (specifically, it may be the Linux kernel layer).
- the application layer includes various applications (including system applications and third-party applications) in the Android operating system.
- the application framework layer is the framework of applications. Developers can develop applications based on the application framework layer while complying with its development principles, for example, system applications such as the system settings, system chat, and system camera applications, as well as third-party applications such as third-party settings, camera, and chat applications.
- the system runtime layer includes a library (also called a system library) and an Android operating system runtime environment.
- the library mainly provides various resources needed by the Android operating system.
- the Android operating system operating environment is used to provide a software environment for the Android operating system.
- the kernel layer is the operating system layer of the Android operating system and is the lowest layer of the Android operating system software hierarchy.
- the kernel layer is based on the Linux kernel to provide core system services and hardware-related drivers for the Android operating system.
- developers can develop a software program that implements the display control method provided by the embodiments of the present disclosure based on the system architecture of the Android operating system shown in FIG. 1, so that the display control method can run on the Android operating system shown in FIG. 1. That is, the processor or the terminal device can implement the display control method provided by the embodiments of the present disclosure by running the software program in the Android operating system.
- the display control method provided by the embodiment of the present disclosure will be described in detail below with reference to the flowchart of the display control method shown in FIG. 2.
- the logical sequence of the display control method provided by the embodiment of the present disclosure is shown in the method flowchart, in some cases, the steps shown or described may be performed in a different order than here.
- the display control method shown in FIG. 2 may include S201 and S202:
- a communication application program may be installed in the terminal device, and the communication application program may provide a communication window to support the chat of two or more communication objects (one communication object corresponds to one user).
- the foregoing first communication window may be a communication window of two or more communication objects.
- the two or more communication objects in the first communication window include the communication objects corresponding to the local user.
- the terminal device can send and receive communication content in the communication window.
- the types of communication content can include text, pictures (such as emoticons), links, audio, and video.
- the object sent and displayed by the terminal device in the communication window is the communication content in the communication window.
- the information exchanged in the communication window is described as communication content, an object, or display content in different places; the different names are only used for clarity in different scenarios and have no effect on the nature of the exchanged information.
- the user may perform the above-mentioned first input for the first communication window.
- the terminal provided in the embodiments of the present disclosure may have a touch screen, and the touch screen may be used to receive a user's input and display content corresponding to the input to the user in response to the input.
- the aforementioned first input may be touch screen input, fingerprint input, gravity input, key input, etc.
- the touch screen input is input by the user on the touch screen of the terminal, such as pressing input, long-press input, sliding input, click input, and floating input (input by the user near the touch screen).
- Fingerprint input is a user's swipe, long-press, single-tap, or double-tap input on the terminal's fingerprint reader.
- Gravity input is the user's input of shaking the terminal in a specific direction or a certain number of times.
- the key input corresponds to the user's single-click operation, double-click operation, long-press operation, combination key operation and other operations on the terminal's power button, volume button, and home button.
- the embodiment of the present disclosure does not specifically limit the first input manner, and may be any achievable manner.
- for example, the first input may be a specific input by the user on the first communication window, such as a sliding input tracing a circular arc in the first communication window; or the first input may be a long-press input by the user on a certain communication content in the first communication window.
- in response to the first input, the terminal device displays at least one first control on the screen.
- At least one first control is associated with the display content of the first communication window, and each first control in the at least one first control corresponds to at least one object.
- the objects may be files such as emoticons or pictures.
- each first control in the at least one first control has object information, and the at least one object corresponding to a first control is the object indicated by that control's object information. That is, the object information of a first control can serve as an index to the at least one object corresponding to that control.
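As a rough illustration of the indexing idea (the names `Control` and `EMOTICON_STORE` and all file names are hypothetical, not from the disclosure), the object information of a first control can be modeled as a dictionary key pointing at the objects it corresponds to:

```python
from dataclasses import dataclass

@dataclass
class Control:
    """A first control; its object information serves as an index key."""
    object_info: str

# Hypothetical store mapping object information to the objects it indexes
# (e.g. emoticon package file names).
EMOTICON_STORE = {
    "Thank you, boss": ["emoticon_311.png", "emoticon_312.png", "emoticon_313.png"],
    "666": ["emoticon_666.gif"],
}

def objects_for(control):
    """Return the at least one object indexed by the control's object information."""
    return EMOTICON_STORE.get(control.object_info, [])
```

A lookup such as `objects_for(Control("666"))` then retrieves the emoticon packages without the user ever browsing the full store.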
- in different communication scenarios, users may need to reply with different emoticon packages.
- for example, when the terminal device receives an electronic red envelope in the first communication window, that is, when the communication scene of the first communication window is a red envelope scene, the user may need to reply with a "Thank you, boss" emoticon package.
- the display control method provided in the embodiments of the present disclosure may further include S203 before S202; for example, S203 may be performed before S201:
- S203: The terminal device determines target scene information according to the display content of the first communication window.
- At least one first control corresponds to target scene information.
- after the terminal device determines the target scene information, it can use the controls corresponding to the target scene information as the at least one first control, that is, obtain at least one first control associated with the display content of the first communication window.
- the object generally required by the user, such as an emoticon package, may be an object corresponding to the communication scene of the first communication window.
- for example, when the terminal device receives an electronic red envelope in the first communication window, that is, when the communication scene of the first communication window is a red envelope scene, the user may need to reply with a "Thank you, boss" emoticon package.
- the foregoing target scene information is determined in real time by the terminal device, or determined in real time by a server interacting with the terminal device.
- the display content of the first communication window includes a communication title (denoted as information 1). That is, the terminal device determines the target scene information according to information 1.
- the title indicated by the title information of the first communication window may be "XX company", "XX department", "XX unit", "XX software", or "XX development".
- the terminal device may determine different target scene information according to different communication titles of the communication window.
- the terminal device determines that the communication scene indicated by the target scene information is a "work scene", and the pop-up box for this scene can contain controls such as "Announcement Notification", "Welcome Newcomers", "Help", and "Suggestion"; these controls are the at least one first control.
- the terminal device determines that the communication scene indicated by the target scene information is a "technical scene", and the pop-up box can contain controls such as "Big shot, please carry me", "666", and "Like"; these controls are the at least one first control.
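A minimal sketch of title-based scene determination, assuming simple keyword rules; the rule table, control names, and function name are illustrative, not the patent's actual algorithm:

```python
# Assumed keyword rules mapping words in the communication title to a scene.
TITLE_SCENE_RULES = [
    ("company", "work scene"),
    ("department", "work scene"),
    ("unit", "work scene"),
    ("software", "technical scene"),
    ("development", "technical scene"),
]

# Illustrative preset association between a scene and its first controls.
SCENE_CONTROLS = {
    "work scene": ["Announcement Notification", "Welcome Newcomers", "Help", "Suggestion"],
    "technical scene": ["Big shot, please carry me", "666", "Like"],
}

def scene_from_title(title):
    """Determine target scene information from the communication title."""
    lowered = title.lower()
    for keyword, scene in TITLE_SCENE_RULES:
        if keyword in lowered:
            return scene
    return None
```

Under these assumed rules, a title like "XX Software Group" yields the technical scene, whose controls are then shown as the at least one first control.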
- the display content of the first communication window includes communication content (denoted as information 2). That is, the terminal device determines the target scene information according to the information 2.
- the communication content in the first communication window may specifically be keywords included in the communication content, or the communication content itself, or the content type of the communication content (such as text, picture, audio, video, link, etc.).
- the terminal device determines that the communication scene indicated by the target scene information is a "red envelope scene".
- the pop-up box may contain controls such as "Thank you, boss", "The red envelope is too small", and "Even one cent is love"; these controls are the at least one first control.
- the communication content selected by the user through the first input may be the information that the user needs to reply to.
- the terminal device determines that the communication scene indicated by the target scene information is a "voting scene".
- the pop-up box may contain controls such as "Please vote", "Hurry up", and "Already voted"; these controls are the at least one first control.
- the terminal device can determine whether the communication content is a voting link by checking whether the communication content contains the text "voting".
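The content-based check just described can be sketched as follows; the marker strings and rule table are assumptions for illustration, not the patent's actual matching logic:

```python
# Assumed markers mapping communication content to a scene; e.g. content whose
# text contains "voting" is treated as a voting link.
CONTENT_SCENE_RULES = [
    ("red envelope", "red envelope scene"),
    ("voting", "voting scene"),
]

def scene_from_content(content):
    """Determine target scene information from the communication content."""
    lowered = content.lower()
    for marker, scene in CONTENT_SCENE_RULES:
        if marker in lowered:
            return scene
    return None
```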
- the terminal device determines that the communication scene indicated by the target scene information is a "news scene".
- the pop-up box may contain controls such as "You're behind the times", "Is it true or false", "Just watching", and "Eating melon"; these controls are the at least one first control.
- the terminal device determines that the communication scene indicated by the target scene information is a "fighting scene".
- the pop-up box may contain controls such as "Scared", "Who's afraid of whom", and "Bring it on"; these controls are the at least one first control.
- the display content of the first communication window includes the communication object (denoted as information 3). That is, the terminal device determines the target scene information according to the information 3.
- the target information is the information of the communication object (denoted as the target chat object) who sends the communication content (denoted as the target communication content) in the first communication window.
- the target communication content may be the last communication content in the first communication window by default, or one or more communication contents operated by the user through the first input.
- the information of the target chat object may indicate at least one of the name of the target chat object, the group to which the target chat object belongs in the address book, and the tag of the target chat object.
- the terminal device can determine the relationship between the user corresponding to the target chat object (denoted as the target user) and the local user based on the information of the target chat object. For example, the terminal device judges the relationship with the local user according to the name, unit, or group of the target chat object, and finally determines that the communication scene indicated by the target scene information is a "family scene", "friend scene", "colleague scene", "classmate scene", etc.
- the terminal device determines that the communication scene indicated by the target scene information is a "family scene".
- the pop-up box may contain controls such as "Take care of yourself" and "I miss you"; these controls are the at least one first control.
- the terminal device determines that the communication scene indicated by the target scene information is a "friend scene".
- the pop-up box may contain controls such as "Let's drink", "Come out and have fun", and "Team up for a game"; these controls are the at least one first control.
- the target scenario information is predefined. That is, the terminal device determines the predefined scene information as the target scene information.
- the pop-up box corresponding to the predefined target scene information may include controls such as "Are you there", "Have you eaten", and "Hello"; these controls are the at least one first control.
- the association relationship between scene information and the corresponding at least one first control is preset by the system. The scene information represents common communication scenes in daily life, such as the "work scene" and "red envelope scene" in the above examples, and the object information of the first controls corresponding to a scene may be common expressions in that scene.
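The preset association and the predefined fallback scene might be organized as a simple table; all entries below are illustrative translations of the examples above, and the names are assumptions:

```python
# System-preset association between scene information and first controls.
SCENE_CONTROLS = {
    "work scene": ["Announcement Notification", "Welcome Newcomers", "Help", "Suggestion"],
    "red envelope scene": ["Thank you, boss", "Even one cent is love", "The red envelope is too small"],
}

# Controls for the predefined (default) scene information.
PREDEFINED_CONTROLS = ["Are you there", "Have you eaten", "Hello"]

def first_controls(target_scene):
    """Return the at least one first control for the determined scene,
    falling back to the predefined scene when none is recognized."""
    return SCENE_CONTROLS.get(target_scene, PREDEFINED_CONTROLS)
```

The fallback branch corresponds to the case where the terminal device determines the predefined scene information as the target scene information.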
- the terminal device can automatically determine the target scene information corresponding to the current first communication window, obtain at least one first control corresponding to the target scene information, and then obtain the at least one object (i.e., at least one emoticon package) corresponding to each first control.
- the terminal device can display at least one first control associated with the display content of the first communication window on the screen, which helps the objects provided by the terminal device, such as emoticon packages, better meet the user's needs.
- when the user needs to view the at least one object corresponding to a first control, the user can trigger the terminal device to display the corresponding objects on the screen and then perform a selection input on these objects.
- S204 and S205 may also be included after the above S202:
- S204: The terminal device receives a second input from the user on a first target control, where the first target control is one of the at least one first control.
- the description of the input form of the second input can refer to the above-mentioned related description of the input form of the first input, which is not repeated here.
- the second input is a click input of the user on the first target control.
- the object information of the first target control may be used as an index of at least one object corresponding to the first target control.
- the at least one object corresponding to the first target control may be an object including the object information of the first target control, or an object whose title (or label) is the object information of the first target control.
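The two matching rules just described — an object that includes the control's object information, or whose title (or label) equals it — can be sketched as a filter; the object records and function name below are hypothetical:

```python
# Hypothetical object records: each emoticon package has a title and labels.
OBJECTS = [
    {"title": "Thank you, boss", "labels": ["red envelope"]},
    {"title": "celebration", "labels": ["Thank you, boss", "party"]},
    {"title": "666", "labels": ["technical"]},
]

def objects_for_target_control(object_info, objects=OBJECTS):
    """Select objects whose title or labels match the first target control's
    object information."""
    key = object_info.lower()
    return [
        o for o in objects
        if o["title"].lower() == key or key in (lab.lower() for lab in o["labels"])
    ]
```

With these sample records, "Thank you, boss" matches both the object titled with it and the object labeled with it, mirroring the two rules in the text.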
- S205: In response to the second input, the terminal device displays at least one object corresponding to the first target control on the screen.
- for example, when the terminal device receives an electronic red envelope in the first communication window, that is, when the communication scene of the first communication window is a red envelope scene, the object information of the first target control may be "Thank you, boss", and the at least one object corresponding to the first target control is an emoticon package matching "Thank you, boss".
- the first communication window 31 displayed on the screen of the terminal device includes the title "Internet technology exchange group", the identification of the communication partner "a big shot", and the communication content "I wrote a technical blog in the forum".
- the screen of the terminal device displays the "Big shot, please carry me" control, the "666" control, and the "Like" control.
- the object information of the "Big shot, please carry me" control is "Big shot, please carry me"
- the object information of the "666" control is "666"
- the object information of the "Like" control is "Like".
- the communication scene indicated by the target scene information of the first communication window 31 may be a technical scene, and the at least one control corresponding to the target scene information is the above-mentioned "Big shot, please carry me" control, "666" control, and "Like" control.
- the emoticon package 311, emoticon package 312, and emoticon package 313 corresponding to the "Big shot, please carry me" control may be displayed on the screen of the terminal device. That is, the emoticon package 311, the emoticon package 312, and the emoticon package 313 are the at least one object corresponding to the first target control.
- the terminal device, as shown in FIG. 3(d), can successfully send and display the emoticon package 311 in the first communication window 31.
- the emoticon package 311 is an emoticon package that meets the technical scene.
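The relationship illustrated by the examples above — target scene information mapping to a set of first controls, and each first control mapping to its candidate objects — can be sketched as a pair of lookups. The control names come from the figures, but the emoticon identifiers and the dictionary structure are illustrative assumptions, not the actual implementation:

```python
# Hypothetical sketch: target scene information -> first controls -> objects.
SCENE_CONTROLS = {
    "technical scene": ["Big guy asks for band", "666", "Like"],
    "red envelope scene": ["Thank you boss", "One point is also love",
                           "Red envelope is too small"],
}

CONTROL_OBJECTS = {
    # Objects (e.g. emoticon packages 311-313 / 411-413) per first control.
    "Big guy asks for band": ["emoticon 311", "emoticon 312", "emoticon 313"],
    "Thank you boss": ["emoticon 411", "emoticon 412", "emoticon 413"],
}

def controls_for_scene(scene):
    """Return the at least one first control corresponding to the scene."""
    return SCENE_CONTROLS.get(scene, [])

def objects_for_control(control):
    """Return the at least one object corresponding to a first target control."""
    return CONTROL_OBJECTS.get(control, [])
```

A second input on, say, the "Thank you boss" control would then surface the three emoticon packages listed for it.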
- the first communication window 41 displayed on the screen of the terminal device includes the title "red envelope scene", the identifier of the communication object "small v", and the electronic red envelope "Gong Xi Fa Cai, good luck" as communication content. After the user long-presses the electronic red envelope "Gong Xi Fa Cai, good luck" (that is, the first input), the screen of the terminal device, as shown in FIG. 4(b), displays the "Thank you boss" control, the "One point is also love" control, and the "Red envelope is too small" control.
- the object information of the "Thank you boss" control is "thank you boss"
- the object information of the "One point is also love" control is "one point is also love"
- the object information of the "Red envelope is too small" control is "the red envelope is too small".
- the target scene information (denoted as scene information 2) of the first communication window 41 indicates the red envelope scene, and the at least one control may be the aforementioned "Thank you boss" control, "One point is also love" control, and "Red envelope is too small" control.
- the screen of the terminal device is shown in Figure 4 (c)
- the emoticon package 411, emoticon package 412, and emoticon package 413 corresponding to the "thank you boss” control can be displayed on the screen. That is, the emoticon package 411, the emoticon package 412, and the emoticon package 413 are at least one object corresponding to the first target control.
- the terminal device, as shown in (d) in FIG. 4, can successfully send and display the emoticon package 413 in the first communication window 41.
- the emoticon package 413 is an emoticon package corresponding to the target scene information indicating the red envelope scene.
- the user's second input to the first target control in the at least one first control can trigger the terminal device to display at least one object corresponding to the first target control, such as emoticon packages and other objects that meet the user's needs. Therefore, the terminal device can quickly display the emoticon packages and other objects required by the user, which simplifies the step of searching for emoticon packages and other objects.
- the terminal device searches for the first scene information corresponding to the display content of the first communication window in a preset scene information database according to the display content of the first communication window.
- the preset scene information database includes at least one piece of scene information (for example, the first scene information), the communication window display content information corresponding to each piece of scene information, and the information of the at least one first control corresponding to each piece of scene information (that is, the object information of each first control in the at least one first control).
- the terminal device obtains the display content of the current communication window (such as the first communication window), can use the display content as an index to find, from the preset scene information database, the scene information corresponding to the display content (such as the first scene information), and obtains at least one first control corresponding to the scene information.
- the foregoing preset scene information database may be stored in a terminal device or a server interacting with the terminal device.
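Under the assumption — not stated in the text — that the database associates scene information with keywords found in the display content, the lookup of S203a/S203b and the fallback of S203c might be sketched as follows; the keywords and scene names are placeholders:

```python
# Hypothetical preset scene information database: each entry associates
# display-content keywords with a piece of scene information.
PRESET_SCENE_DB = [
    {"keywords": ("technical blog", "forum"), "scene": "technical scene"},
    {"keywords": ("red envelope",), "scene": "red envelope scene"},
]

PREDEFINED_SCENE = "general scene"  # fallback used when no entry matches

def determine_target_scene(display_content):
    """Use the window's display content as an index into the database (S203a);
    return the matching first scene information (S203b) or, failing that,
    the predefined scene information (S203c)."""
    text = display_content.lower()
    for entry in PRESET_SCENE_DB:
        if any(keyword in text for keyword in entry["keywords"]):
            return entry["scene"]
    return PREDEFINED_SCENE
```

Whether the database lives on the terminal device or on a server, the index-then-fallback shape stays the same.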
- the terminal device can provide the user with an entry for modifying the information in the preset scene information database, so as to support the user in triggering the deletion, addition, or modification of certain scene information in the preset scene information database, as well as the deletion, addition, or modification of the information corresponding to a certain scene.
- the setting application of the terminal device may provide an entry for modifying the information in the preset scene information database.
- the terminal device determines the first scene information as target scene information.
- the first scene information may be determined and saved by the terminal device or the server in advance according to the display content of the first communication window.
- S203c The terminal device determines the predefined scene information as the target scene information when the first scene information is not found.
- the terminal device determines the target scene information according to different methods with different priorities.
- method 1 instructs the terminal device to determine the target scene information according to information 1;
- method 2 is used to instruct the terminal device to determine target scene information according to information 2;
- method 3 is used to instruct the terminal device to determine target scene information according to information 3;
- method 4 is used to indicate that the predefined scene information is determined as the target scene information.
- the terminal device first determines the target scene information in a high-priority manner, and if the determination fails, then determines the target scene information in a manner with the second highest priority.
- if the terminal device fails to determine the target scene information using methods 1 to 3, this indicates that the scene information corresponding to the relevant target information is not saved in the preset scene information database; that is, if the terminal device fails to determine the target scene information according to the display content of the first communication window, the predefined scene information is determined as the target scene information.
- the terminal device can determine the target scene information in a variety of ways; even if determination by a higher-priority method fails, the target scene information can still be determined in a lower-priority way. Therefore, it can be ensured that the terminal device obtains the target scene information of the current first communication window, and then obtains the at least one first control corresponding to the target scene information.
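The priority ordering of methods 1–4 amounts to trying each determination method in turn and falling back to the predefined scene information. A minimal sketch — the text does not specify what information 1 through 3 are, so the methods here are placeholder callables that return `None` on failure:

```python
def determine_with_priority(methods, predefined="predefined scene"):
    """Try each scene-determination method from highest to lowest priority
    (methods 1-3); a method signals failure by returning None. If all fail,
    the predefined scene information is used (method 4)."""
    for method in methods:
        scene = method()
        if scene is not None:
            return scene
    return predefined

# Example: method 1 fails, method 2 succeeds.
result = determine_with_priority([lambda: None, lambda: "red envelope scene"])
```

This guarantees the terminal device always obtains some target scene information, which is exactly the property the passage above relies on.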
- the display control method provided by the embodiment of the present disclosure may further include S206:
- S206 The terminal device displays the second control on the screen.
- the second control is used to indicate target scene information.
- the terminal device may display the second control in a preset position of the screen, such as the upper left corner or the upper right corner.
- the terminal device displays a control 32 in the upper right corner of the screen, and the current target scene information "technical scene” is displayed on the control 32.
- the terminal device displays a control 42 in the upper right corner of the screen, and the current target scene information "red envelope scene” is displayed on the control 42.
- S206 may be executed after S201, and the execution sequence of S202 and S206 is not specifically limited.
- the terminal device can execute S202 and S206 at the same time, that is, the terminal device can simultaneously display the second control and at least one first control on the screen.
- the display control method provided by the embodiment of the present disclosure may display the second control first, and then display at least one first control triggered by the user; that is, perform S206 first and then perform S202.
- the first input includes a first sub-input and a second sub-input
- S206 can be replaced with S206a
- S202 can be replaced with S202a:
- in response to the first sub-input, the terminal device displays the second control on the screen.
- in response to the second sub-input, the terminal device displays at least one first control on the screen.
- the first sub-input is an input for the first communication window
- the second sub-input is an input for the second control
- the second control is used to indicate target scene information.
- the description of the input form of the first sub-input and the second sub-input can refer to the above-mentioned related description of the input form of the first input, which is not repeated here.
- the first sub-input is the user's long-press input on the last piece of communication content in the first communication window
- the second sub-input is the user's long-press input on the second control
- the user can choose whether to trigger the terminal device to display the second control and/or the at least one first control according to their own needs, which helps improve the human-computer interaction performance when the user searches for objects such as emoticon packages using the terminal device.
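The two-stage display of S206a/S202a can be sketched as a small state update, under the assumption that the sub-inputs arrive as named events (the event names and state keys are invented for illustration):

```python
def handle_sub_input(state, event):
    """S206a: the first sub-input (on the first communication window) displays
    the second control; S202a: the second sub-input (on the second control)
    then displays the at least one first control."""
    if event == "first_sub_input":
        state["second_control_shown"] = True
    elif event == "second_sub_input" and state.get("second_control_shown"):
        state["first_controls_shown"] = True
    return state

# Long-press on the window, then long-press on the second control.
state = handle_sub_input({}, "first_sub_input")
state = handle_sub_input(state, "second_sub_input")
```

Note that the second sub-input has no effect unless the second control is already shown, which mirrors the ordering of the two sub-inputs in the text.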
- the display control method provided by the embodiment of the present disclosure may further include S207 and S208 after the above S206:
- the terminal device receives a third input from the user to the second control.
- the third input includes the user's input to the control 32 shown in FIG. 3 and a modification input for at least one control corresponding to the target scene information indicated by the control 32 (that is, modification of the object corresponding to the at least one control).
- the terminal device modifies the first information, where the first information includes at least one of the following: the target scene information and an object corresponding to a second target control, where the second target control is a control in the at least one first control.
- the second control is an entry for modifying the target scene information and the object corresponding to the second target control.
- the user can modify the target scene information and the object corresponding to the second target control through the second control. If the user is not satisfied with the recommended first control, the user can click the second control to edit it, triggering the addition of a satisfactory first control to the scene, and can also trigger the deletion of an unneeded first control to facilitate subsequent reuse.
- the user can edit the information of the second control by double-clicking the current second control, and manually specify, in the corresponding pop-up box, the communication scene indicated by the target scene information and the first control it contains.
- the manually edited target scene information and the corresponding at least one first control can be saved in a system (such as a system of a terminal device), so that the terminal device can directly use the scene information next time.
- a second control may be displayed on the screen of the terminal device; the second control is used to indicate the target scene information, and is used to trigger modification of the target scene information and of the object corresponding to a control in the at least one first control.
- the user can modify the current target scene information and the at least one first control into information that meets the user's needs, so that the target scene information subsequently determined by the terminal device is scene information that meets the user's needs, and the objects required by the user (that is, emoticon packages) can be obtained.
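The edit entry described above (S207/S208) boils down to add/delete/modify operations on a store of scene information and its first controls. A sketch — the class name, method names, and dictionary structure are assumptions for illustration, not the actual implementation:

```python
class SceneLibrary:
    """Hypothetical store mapping scene information to its first controls,
    supporting the user-triggered edits described for the second control."""

    def __init__(self):
        self.scenes = {}

    def add_scene(self, scene, controls):
        """Add or replace a scene and its first controls."""
        self.scenes[scene] = list(controls)

    def delete_scene(self, scene):
        """Delete a scene; deleting a missing scene is a no-op."""
        self.scenes.pop(scene, None)

    def add_control(self, scene, control):
        """Add a satisfactory first control to a scene."""
        self.scenes.setdefault(scene, []).append(control)

    def delete_control(self, scene, control):
        """Delete an unneeded first control from a scene."""
        if control in self.scenes.get(scene, []):
            self.scenes[scene].remove(control)
```

Saving such edits back to the system would let the terminal device reuse the user's customized scene information next time, as the text describes.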
- the display control method provided by the embodiment of the present disclosure may further include step S209:
- the terminal device displays the at least one first control on the screen, where the first target scene information corresponding to the first communication window is different from the second target scene information corresponding to the second communication window.
- the description of the input form of the fourth input can refer to the above-mentioned related description of the input form of the first input, which is not repeated here.
- the first target scene information is scene information determined by the terminal device according to the display content of the first communication window; that is, the first target scene information reflects the actual communication scene of the display content in the first communication window.
- for the second target scene information, reference may be made to the description of the first target scene information, which will not be repeated in the embodiments of the present disclosure.
- the user can perform a preset input (such as a long-press input) on the second control displayed on the first communication window to fix the communication scene of the current window (that is, to fix the current target scene information), or to unlock the fixing.
- the user can also set a global fixed communication scene (that is, set scene information) in the system, and use a certain communication scene for all communication windows.
- the actual communication scene of the first communication window is "technical scene", that is, the communication scene indicated by the target scene information is "technical scene", while the actual communication scene of the second communication window is "news scene"; it is determined that the scene information of the second communication window is still the target scene information. Therefore, when the terminal device displays the second communication window, the terminal device displays controls such as "Big guy asks for band", "666", and "Like" based on the first input, instead of displaying controls such as "You are out", "True or false", "Watching", and "Eating melon".
- the display control method provided by the embodiment of the present disclosure may further include S210 after the foregoing S209:
- the terminal device keeps displaying the second control on the screen, where the first target scene information corresponding to the first communication window is different from the second target scene information corresponding to the second communication window.
- that the terminal device keeps displaying the second control on the screen means that the terminal device fixes the scene information of the communication window as the target scene information, and does not change the communication scene indicated by the target scene information according to changes in the display content of the communication window.
- when the user requires that the at least one first control displayed in the communication window of the terminal device does not change, the user can trigger the terminal device to fix the scene information of the communication window, so that the same at least one first control is displayed in different communication windows. This is beneficial to the human-computer interaction performance in the process of searching for emoticon packages.
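Fixing the scene information (S209/S210) is essentially a pin that overrides per-window inference. A sketch under the assumption that scene inference is a callable from display content to scene information; the class and method names are illustrative:

```python
class ScenePicker:
    """When pinned, switching communication windows does not change the
    target scene information, mirroring the fixed-scene behavior of S210."""

    def __init__(self, infer):
        self.infer = infer   # callable: display content -> scene information
        self.pinned = None   # fixed target scene information, if any

    def pin(self, scene):
        """Fix the communication scene (e.g. via long-press on the second control)."""
        self.pinned = scene

    def unpin(self):
        """Unlock the fixing; inference resumes on the next window."""
        self.pinned = None

    def scene_for(self, display_content):
        if self.pinned is not None:
            return self.pinned
        return self.infer(display_content)
```

A globally fixed communication scene, as the text also mentions, would simply be a pin that is never released.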
- the terminal device can be triggered to send the object (such as an emoticon package) in the first communication window.
- a set of scene-based selection boxes, that is, a selection box including at least one first control.
- the terminal device may use the image library of a browser application or a third-party emoticon library to search for the option name and recommend a set of emoticon packages, from which the user can select and send.
- the display control method provided by the embodiment of the present disclosure may trigger the object sending step through S211 and S212.
- the terminal device receives a fifth input of the user to the target object.
- the target object is an object in at least one object corresponding to the first target control.
- the description of the input form of the fifth input can refer to the above-mentioned related description of the input form of the first input, which is not repeated here.
- the fifth input is a double-click input of the user on the target object.
- in response to the fifth input, the terminal device sends the target object in the first communication window.
- since the target object displayed by the terminal device can be an object that meets the user's needs, this helps reduce the user's operations for triggering the terminal device to search for objects such as emoticon packages, and improves the convenience of sending objects.
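The send step (S211/S212) can be sketched as validating that the target object belongs to the first target control's objects before appending it to the window; the data shapes here (a message list and an object list) are assumptions for illustration:

```python
def send_target_object(window_messages, target_object, control_objects):
    """S212: send the target object in the first communication window, where the
    target object must be one of the objects corresponding to the first target
    control (e.g. one of the displayed emoticon packages)."""
    if target_object not in control_objects:
        raise ValueError("target object is not among the control's objects")
    window_messages.append(target_object)
    return window_messages
```

A double-click on emoticon package 311 or 413 in the figures corresponds to calling this with that object and the control's displayed object list.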
- FIG. 5 is a schematic diagram of a possible structure of a terminal device provided by an embodiment of the present disclosure.
- the terminal device 50 shown in FIG. 5 includes: a receiving module 51 and a display module 52; the receiving module 51 is configured to receive a first input from the user when the first communication window is displayed on the screen of the terminal device.
- the display module 52 is configured to display at least one first control on the screen in response to the first input received by the receiving module 51; wherein the at least one first control is associated with the display content of the first communication window, each first control in the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
- the at least one first control corresponds to the target scene information
- the terminal device 50 further includes: a determining module 53; a determining module 53, for the display module 52 before displaying the at least one first control on the screen, according to the first communication window To determine the target scene information.
- the receiving module 51 is further configured to receive a second input from the user to the first target control after the display module 52 displays at least one first control on the screen, the first target control being a control in the at least one first control
- the display module 52 is also configured to display at least one object corresponding to the first target control on the screen in response to the second input received by the receiving module 51.
- the determining module 53 is specifically configured to search, according to the display content of the first communication window, for the first scene information corresponding to the display content of the first communication window in a preset scene information database; determine the first scene information as the target scene information when the first scene information is found; and determine the predefined scene information as the target scene information when the first scene information is not found.
- the display module 52 is also used to display a second control on the screen; wherein, the second control is used to indicate target scene information.
- the terminal device 50 further includes: a modification module 54; the receiving module 51 is further configured to receive a third input from the user to the second control after the display module 52 displays the second control on the screen; and the modification module 54 is configured to modify the first information in response to the third input received by the receiving module 51, where the first information includes at least one of the following: the target scene information and an object corresponding to a second target control, and the second target control is a control in the at least one first control.
- the display module 52 is further configured to display the at least one first control on the screen when the first communication window is switched to the second communication window and a fourth input for the second communication window is received, where the first target scene information corresponding to the first communication window is different from the second target scene information corresponding to the second communication window.
- the display module 52 is further configured to keep displaying the second control on the screen when the first communication window is switched to the second communication window after the second control is displayed on the screen.
- the first target scene information is different from the second target scene information corresponding to the second communication window.
- the first input includes a first sub-input and a second sub-input; the display module 52 is specifically configured to display a second control on the screen in response to the first sub-input; in response to the second sub-input, on the screen At least one first control is displayed; wherein the first sub-input is an input for the first communication window, the second sub-input is an input for the second control, and the second control is used to indicate target scene information.
- the terminal device 50 further includes: a sending module 55; the receiving module 51 is further configured to receive a fifth input from the user to the target object; and the sending module 55 is configured to send the target object in the first communication window in response to the fifth input received by the receiving module 51, where the target object is an object in the at least one object corresponding to the first target control.
- the terminal device 50 provided in the embodiment of the present disclosure can implement the various processes implemented by the terminal device in the foregoing method embodiment, and to avoid repetition, details are not described herein again.
- the terminal device provided by the embodiment of the present disclosure can receive a first input from the user when the first communication window is displayed on the screen of the terminal device, and, in response to the first input, display on the screen at least one first control associated with the display content of the first communication window.
- each first control in the at least one first control corresponds to at least one object
- the display content of the first communication window includes at least one of a communication title, a communication content, and a communication object.
- the terminal device can display a small number of first controls associated with the current display content, instead of directly displaying a large and disordered number of emoticon packages and other objects, so that the user can quickly and conveniently trigger the terminal device to obtain at least one object corresponding to the first target control, such as an emoticon package that meets the user's needs. This avoids repeatedly searching among a large and disordered number of emoticon packages; that is, it simplifies the step of searching for an emoticon package.
- the terminal device 100 includes but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, and other components.
- the structure of the terminal device shown in FIG. 6 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown in the figure, combine certain components, or have a different component arrangement.
- terminal devices include, but are not limited to, mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminal devices, wearable devices, and the like.
- the user input unit 107 is configured to receive a first input from the user when the first communication window is displayed on the screen of the terminal device; the display unit 106 is configured to display at least one first control on the screen in response to the first input received by the user input unit 107; wherein the at least one first control is associated with the display content of the first communication window, each first control in the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
- the terminal device provided by the embodiment of the present disclosure can receive a first input from the user when the first communication window is displayed on the screen of the terminal device, and, in response to the first input, display on the screen at least one first control associated with the display content of the first communication window.
- each first control in the at least one first control corresponds to at least one object
- the display content of the first communication window includes at least one of a communication title, a communication content, and a communication object.
- the terminal device can display a small number of first controls associated with the current display content, instead of directly displaying a large and disordered number of emoticon packages and other objects, so that the user can quickly and conveniently trigger the terminal device to obtain at least one object corresponding to the first target control, such as an emoticon package that meets the user's needs. This avoids repeatedly searching among a large and disordered number of emoticon packages; that is, it simplifies the step of searching for an emoticon package.
- the radio frequency unit 101 can be used for receiving and sending signals in the process of sending and receiving information or during a call. Specifically, downlink data from the base station is received and then processed by the processor 110; in addition, uplink data is sent to the base station.
- the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio frequency unit 101 can also communicate with the network and other devices through a wireless communication system.
- the terminal device provides users with wireless broadband Internet access through the network module 102, such as helping users to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 103 can convert the audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into audio signals and output them as sounds. Moreover, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound, message reception sound, etc.).
- the audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 104 is used to receive audio or video signals.
- the input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042.
- the graphics processor 1041 is configured to process image data of still pictures or videos obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the processed image frame can be displayed on the display unit 106.
- the image frame processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or sent via the radio frequency unit 101 or the network module 102.
- the microphone 1042 can receive sound, and can process such sound into audio data.
- the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output in the case of a telephone call mode.
- the terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light.
- the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear.
- the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the terminal device (such as horizontal/vertical screen switching, related games, and magnetometer attitude calibration) and vibration-recognition-related functions (such as a pedometer and tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
- the display unit 106 is used to display information input by the user or information provided to the user.
- the display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), etc.
- the user input unit 107 may be used to receive inputted numeric or character information, and generate key signal input related to user settings and function control of the terminal device.
- the user input unit 107 includes a touch panel 1071 and other input devices 1072.
- the touch panel 1071, also called a touch screen, can collect the user's touch operations on or near it (for example, operations performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory).
- the touch panel 1071 may include two parts: a touch detection device and a touch controller.
- the touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110.
- the touch panel 1071 can be realized by various types such as resistive, capacitive, infrared, and surface acoustic wave.
- the user input unit 107 may also include other input devices 1072.
- other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackball, mouse, and joystick, which will not be repeated here.
- the touch panel 1071 can be overlaid on the display panel 1061.
- when the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event.
- although the touch panel 1071 and the display panel 1061 are used as two independent components to realize the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the terminal device, which is not specifically limited here.
- the interface unit 108 is an interface for connecting an external device with the terminal device 100.
- the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) port, video I/O port, headphone port, etc.
- the interface unit 108 can be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the terminal device 100, or can be used to transfer data between the terminal device 100 and an external device.
- the memory 109 can be used to store software programs and various data.
- the memory 109 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone.
- the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
- the processor 110 is the control center of the terminal device. It uses various interfaces and lines to connect the various parts of the entire terminal device, and, by running or executing the software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the various functions of the terminal device and processes data, thereby monitoring the terminal device as a whole.
- the processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 110.
- the terminal device 100 may also include a power source 111 (such as a battery) for supplying power to various components.
- the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
- the terminal device 100 includes some functional modules not shown, which will not be repeated here.
- an embodiment of the present disclosure also provides a terminal device, including a processor 110, a memory 109, and a computer program stored on the memory 109 and executable on the processor 110; when executed by the processor 110, the computer program implements each process of the foregoing method embodiments and can achieve the same technical effect.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored.
- when the computer program is executed by a processor, each process of the foregoing method embodiments is implemented and the same technical effect can be achieved; to avoid repetition, details are not repeated here.
- the computer-readable storage medium such as read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), magnetic disk or optical disk, etc.
- the technical solution of the present disclosure, in essence or in the part that contributes to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in each embodiment of the present disclosure.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- User Interface Of Digital Computer (AREA)
- Information Transfer Between Computers (AREA)
- Telephone Function (AREA)
Abstract
A display control method and a terminal device. The method includes: in a case where a first communication window is displayed on a screen of the terminal device, the terminal device receives a first input from a user (S201); in response to the first input, the terminal device displays at least one first control on the screen (S202); where the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application claims priority to Chinese Patent Application No. 201910774620.3, filed in China on August 21, 2019, the entire contents of which are incorporated herein by reference.
Embodiments of the present disclosure relate to the field of communication technologies, and in particular to a display control method and a terminal device.
When a user chats through a social application on a terminal device, to make the chat more interesting, the user often needs to use fun emoji packs or pictures. Taking emoji packs as an example, the emoji pack the user needs to send differs depending on the chat content being replied to; that is, the user needs different emoji packs in different communication scenarios.
Specifically, the user can operate an emoji-pack control in a communication window to trigger the terminal device to display one or more lists of emoji packs on the screen, such as a list of favorited emoji packs, a list of locally saved emoji packs, and a list of emoji packs recommended by a network-side device. The user can then select a desired emoji pack from these lists and trigger the terminal device to send it in the communication window.
However, since the one or more emoji-pack lists generally include a large number of emoji packs arranged in no particular order, the user may need to browse back and forth through the lists many times to find the desired emoji pack. That is, the process of finding the emoji pack the user needs is cumbersome and time-consuming.
Summary
Embodiments of the present disclosure provide a display control method and a terminal device, to solve the problem that the process of finding an emoji pack the user needs is cumbersome and time-consuming.
To solve the above technical problem, the embodiments of the present disclosure are implemented as follows:
In a first aspect, an embodiment of the present disclosure provides a display control method, including: in a case where a first communication window is displayed on a screen of a terminal device, receiving a first input from a user; and in response to the first input, displaying at least one first control on the screen; where the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
In a second aspect, an embodiment of the present disclosure further provides a terminal device, including a receiving module and a display module; the receiving module is configured to receive a first input from a user in a case where a first communication window is displayed on a screen of the terminal device; the display module is configured to display at least one first control on the screen in response to the first input received by the receiving module; where the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
In a third aspect, an embodiment of the present disclosure provides a terminal device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, where, when executed by the processor, the computer program implements the steps of the display control method according to the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program, where, when executed by a processor, the computer program implements the steps of the display control method according to the first aspect.
In the embodiments of the present disclosure, in a case where a first communication window is displayed on the screen of the terminal device, a first input from a user can be received, and, in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen. Each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object. In other words, when the user needs to find an object such as an emoji pack, the terminal device can display a small number of first controls associated with the currently displayed content instead of directly displaying a large, unordered collection of objects such as emoji packs, so that the user can quickly and conveniently trigger the terminal device to obtain the at least one object corresponding to a first target control, such as an emoji pack matching the user's needs. This avoids the user searching back and forth through a large, unordered collection of emoji packs, that is, it simplifies the steps of finding an emoji pack.
FIG. 1 is a schematic architectural diagram of a possible Android operating system according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a display control method according to an embodiment of the present disclosure;
FIG. 3 is a first schematic diagram of content displayed by a terminal device according to an embodiment of the present disclosure;
FIG. 4 is a second schematic diagram of content displayed by a terminal device according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a possible terminal device according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present disclosure.
The technical solutions in the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings in the embodiments of the present disclosure. Obviously, the described embodiments are some rather than all of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present disclosure without creative effort shall fall within the protection scope of the present disclosure.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the three cases of A alone, both A and B, and B alone. "A plurality of" means two or more than two.
It should be noted that, in the embodiments of the present disclosure, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present disclosure should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a specific manner.
The terms "first" and "second" in the specification and claims of the present disclosure are used to distinguish different objects rather than to describe a specific order of the objects. For example, a first input and a second input are used to distinguish different inputs rather than to describe a specific order of the inputs.
With the display control method provided in the embodiments of the present disclosure, in a case where a first communication window is displayed on the screen of the terminal device, a first input from a user can be received, and, in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen. Each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object. In other words, when the user needs to find an object such as an emoji pack, the terminal device can display a small number of first controls associated with the currently displayed content instead of directly displaying a large, unordered collection of objects such as emoji packs, so that the user can quickly and conveniently trigger the terminal device to obtain the at least one object corresponding to a first target control, such as an emoji pack matching the user's needs. This avoids the user searching back and forth through a large, unordered collection of emoji packs, that is, it simplifies the steps of finding an emoji pack.
The terminal device in the embodiments of the present disclosure may be a mobile terminal device or a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or the like; the non-mobile terminal device may be a personal computer (PC), a television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present disclosure do not specifically limit this.
It should be noted that the execution subject of the display control method provided in the embodiments of the present disclosure may be the terminal device, the central processing unit (CPU) of the terminal device, or a control module in the terminal device for executing the display control method. In the embodiments of the present disclosure, the terminal device executing the display control method is taken as an example to describe the display control method provided in the embodiments of the present disclosure.
The terminal device in the embodiments of the present disclosure may be a terminal device having an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present disclosure.
The following describes, taking the Android operating system as an example, the software environment to which the display control method provided in the embodiments of the present disclosure is applied.
FIG. 1 is a schematic architectural diagram of a possible Android operating system according to an embodiment of the present disclosure. In FIG. 1, the architecture of the Android operating system includes four layers: an application layer, an application framework layer, a system runtime library layer, and a kernel layer (which may specifically be a Linux kernel layer).
The application layer includes the applications in the Android operating system (including system applications and third-party applications).
The application framework layer is the framework of applications. Developers can develop applications based on the application framework layer while complying with its development principles, for example system applications such as a system settings application, a system chat application, and a system camera application, or third-party applications such as a third-party settings application, a third-party camera application, and a third-party chat application.
The system runtime library layer includes libraries (also called system libraries) and the Android operating system runtime environment. The libraries mainly provide the Android operating system with the various resources it needs; the Android runtime environment provides the software environment for the Android operating system.
The kernel layer is the operating system layer of the Android operating system and is the lowest layer of the Android software hierarchy. Based on the Linux kernel, the kernel layer provides core system services and hardware-related drivers for the Android operating system.
Taking the Android operating system as an example, in the embodiments of the present disclosure, developers can, based on the system architecture of the Android operating system shown in FIG. 1, develop a software program implementing the display control method provided in the embodiments of the present disclosure, so that the display control method can run on the Android operating system shown in FIG. 1. That is, a processor or a terminal device can implement the display control method provided in the embodiments of the present disclosure by running the software program in the Android operating system.
The display control method provided in the embodiments of the present disclosure is described in detail below with reference to the flowchart of the display control method shown in FIG. 2. Although a logical order of the display control method is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that described here. For example, the display control method shown in FIG. 2 may include S201 and S202:
S201: In a case where a first communication window is displayed on a screen of the terminal device, the terminal device receives a first input from a user.
A communication application may be installed in the terminal device; the communication application can provide communication windows to support chatting between two or more communication objects (one communication object corresponding to one user). For example, the first communication window may be a communication window of two or more communication objects, where the two or more communication objects include the communication object corresponding to the local user.
In addition, the terminal device can send and receive communication content in a communication window; the types of communication content may include text, pictures (such as emoji packs), links, audio, and video. Specifically, in the following embodiments of the present disclosure, an object that the terminal device sends and displays in a communication window (such as the first communication window) is communication content of that window.
It should be noted that, in the embodiments of the present disclosure, the information exchanged in a communication window is described in different places as communication content, an object, or display content; the different names merely distinguish descriptions in different scenarios and do not affect the essence of the exchanged information.
It can be understood that, in a scenario where the user replies to communication content in the current first communication window (for example the last piece of communication content), the user may perform the above first input on the first communication window.
It should be noted that the terminal provided in the embodiments of the present disclosure may have a touchscreen, which may be used to receive the user's input and to display, in response to the input, content corresponding to the input. The first input may be a touchscreen input, a fingerprint input, a gravity input, a key input, or the like. A touchscreen input is an input such as a press, long-press, slide, tap, or hover input (an input near the touchscreen) on the terminal's touchscreen. A fingerprint input is an input such as a fingerprint slide, fingerprint long-press, single fingerprint tap, or double fingerprint tap on the terminal's fingerprint reader. A gravity input is an input such as shaking the terminal in a specific direction or a specific number of times. A key input corresponds to operations such as a single press, double press, long press, or combined press of keys of the terminal such as the power key, volume key, or Home key. Specifically, the embodiments of the present disclosure do not limit the form of the first input; it may be any realizable form.
For example, the first input may be a specific input on the first communication window, such as a slide input whose track in the first communication window is an arc; or, the first input may be a long-press input on a certain piece of communication content in the first communication window.
S202: In response to the first input, the terminal device displays at least one first control on the screen.
The at least one first control is associated with the display content of the first communication window, and each of the at least one first control corresponds to at least one object.
It should be noted that, in the embodiments of the present disclosure, an object may be a file such as an emoji pack or a picture.
Each of the at least one first control has object information, and the at least one object corresponding to a first control is the object indicated by that control's object information. That is, the object information of a first control can serve as an index to the at least one object corresponding to that control.
It can be understood that the user may need to reply with different emoji packs in different communication scenarios. For example, when the terminal device receives an electronic red packet in the first communication window, that is, when the communication scenario of the first communication window is a red packet scenario, the user may need to reply with a "Thanks, boss" emoji pack.
With the display control method provided in the embodiments of the present disclosure, in a case where a first communication window is displayed on the screen of the terminal device, a first input from a user can be received, and, in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen. Each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object. In other words, when the user needs to find an object such as an emoji pack, the terminal device can display a small number of first controls associated with the currently displayed content instead of directly displaying a large, unordered collection of objects such as emoji packs, so that the user can quickly and conveniently trigger the terminal device to obtain the at least one object corresponding to a first target control, such as an emoji pack matching the user's needs. This avoids the user searching back and forth through a large, unordered collection of emoji packs, that is, it simplifies the steps of finding an emoji pack.
In a possible implementation, the display control method provided in the embodiments of the present disclosure may further include S203 before S202, for example before S201:
S203: The terminal device determines target scene information according to the display content of the first communication window.
The at least one first control corresponds to the target scene information.
It should be noted that, once the terminal device determines the target scene information, it can take the controls corresponding to the target scene information as the at least one first control, that is, obtain the at least one first control associated with the display content of the first communication window.
It can be understood that, in different communication scenarios of the first communication window, the objects (such as emoji packs) the user needs to reply with are usually different. Specifically, the object the user usually needs may be an object corresponding to the communication scenario of the first communication window.
For example, when the terminal device receives an electronic red packet in the first communication window, that is, when the communication scenario of the first communication window is a red packet scenario, the user may need to reply with a "Thanks, boss" emoji pack.
Optionally, the target scene information is determined in real time by the terminal device, or in real time by a server interacting with the terminal device.
For example, in a first application scenario provided in the embodiments of the present disclosure, the display content of the first communication window includes a communication title (denoted as information 1); that is, the terminal device determines the target scene information according to information 1. For example, the title indicated by the title information of the first communication window may be "XX Company", "XX Department", "XX Unit", "XX Software", "XX Development", or the like. Specifically, the terminal device can determine different target scene information according to different communication titles of the communication window.
a. For example, if the communication title of the first communication window is "XX Company", "XX Department", or "XX Unit", the terminal device determines that the communication scenario indicated by the target scene information is a "work scenario". The popup box for this scenario may contain controls such as "Announcement", "Welcome newcomer", "Ask for help", and "Suggestion"; these controls are the at least one first control.
b. For example, if the communication title of the first communication window is a title like "XX Software" or "XX Development", the terminal device determines that the communication scenario indicated by the target scene information is a "technical scenario", and the popup box may contain controls such as "Please mentor me", "666", and "Like"; these controls are the at least one first control.
In a second application scenario provided in the embodiments of the present disclosure, the display content of the first communication window includes communication content (denoted as information 2); that is, the terminal device determines the target scene information according to information 2. The communication content in the first communication window may specifically be a keyword included in the communication content, the communication content itself, or the content type of the communication content (such as text, picture, audio, video, or link).
a. For example, if the communication content selected by the user through the first input is an electronic red packet, the terminal device determines that the communication scenario indicated by the target scene information is a "red packet scenario". The popup box may then contain controls such as "Thanks, boss", "Such a small red packet", and "Even one cent is love"; these controls are the at least one first control. The communication content selected by the user through the first input may be the message the user needs to reply to.
b. For example, if the communication content selected by the user through the first input is a voting link, the terminal device determines that the communication scenario indicated by the target scene information is a "voting scenario". The popup box may then contain controls such as "Please vote", "Hurry up", and "Voted"; these controls are the at least one first control.
For example, the terminal device may determine whether the communication content is a voting link by checking whether the communication content contains the word "vote".
c. For example, if the communication content selected by the user through the first input is a news link, the terminal device determines that the communication scenario indicated by the target scene information is a "news scenario". The popup box may then contain controls such as "You're out of the loop", "True or false?", "Watching", and "Eating melon"; these controls are the at least one first control.
d. For example, if the communication content selected by the user through the first input is an emoji or a picture, the terminal device determines that the communication scenario indicated by the target scene information is an "emoji battle scenario". The popup box may then contain controls such as "Fight back", "Who's afraid of whom", and "Bring it on"; these controls are the at least one first control.
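The title- and content-based scene determination described above amounts to a keyword lookup. The sketch below illustrates that idea in Python; the rule table, scene names, keywords, and control labels are illustrative assumptions for this sketch and are not prescribed by the disclosure.

```python
# Illustrative keyword-based scene detection. The rule table below is an
# assumption for illustration; a real implementation would be preset by the
# system and possibly editable by the user.
SCENE_RULES = [
    ("work scenario", ["company", "department", "unit"],
     ["Announcement", "Welcome newcomer", "Ask for help", "Suggestion"]),
    ("technical scenario", ["software", "development"],
     ["Please mentor me", "666", "Like"]),
    ("red packet scenario", ["red packet"],
     ["Thanks, boss", "Such a small red packet", "Even one cent is love"]),
    ("voting scenario", ["vote"],
     ["Please vote", "Hurry up", "Voted"]),
]

def detect_scene(display_content: str):
    """Return (scene_name, first_controls) for the first matching rule, else None."""
    text = display_content.lower()
    for scene, keywords, controls in SCENE_RULES:
        if any(keyword in text for keyword in keywords):
            return scene, controls
    return None  # caller may fall back to predefined scene information
```

For instance, a window titled "XX Software dev group" would match the "technical scenario" rule, so its popup would offer the "Please mentor me", "666", and "Like" controls.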
In a third application scenario provided in the embodiments of the present disclosure, the display content of the first communication window includes a communication object (denoted as information 3); that is, the terminal device determines the target scene information according to information 3. Specifically, the target information is information on the communication object (denoted as the target chat object) that sent a piece of communication content (denoted as the target communication content) in the first communication window.
Optionally, the target communication content may by default be the last piece of communication content in the first communication window, or one or more pieces of communication content operated by the user through the first input.
The information on the target chat object may indicate at least one of the target chat object's name, the group the target chat object belongs to in the contact list, and the target chat object's tag. Specifically, the terminal device can determine, according to the information on the target chat object, the relationship between the user corresponding to the target chat object (denoted as the target user) and the local user. For example, the terminal device judges the relationship with the local user based on the target chat object's name, organization, or group, and finally determines that the communication scenario indicated by the target scene information is a "family scenario", "friend scenario", "colleague scenario", "classmate scenario", or the like.
a. For example, if the user corresponding to the communication object of the communication content selected by the user through the first input (i.e. the target communication object) is the local user's father, the terminal device determines that the communication scenario indicated by the target scene information is a "family scenario". The popup box may then contain controls such as "Take care of yourself" and "I miss you"; these controls are the at least one first control.
b. For example, if the user corresponding to the communication object of the communication content selected by the user through the first input (i.e. the target communication object) is a friend of the local user, the terminal device determines that the communication scenario indicated by the target scene information is a "friend scenario". The popup box may then contain controls such as "Let's go drinking", "Come out and have fun", and "Team up for a game"; these controls are the at least one first control.
Further, optionally, in a fourth application scenario provided in the embodiments of the present disclosure, the target scene information is predefined; that is, the terminal device determines predefined scene information as the target scene information.
For example, the popup box corresponding to predefined target scene information may include controls such as "Are you there?", "Have you eaten?", and "Hello"; these controls are the at least one first control.
It can be understood that the associations between scene information and the corresponding at least one first control are all preset by the system. The scene information covers communication scenarios commonly encountered in daily life, such as the "work scenario" and "red packet scenario" in the above examples, and the object information of a first control corresponding to a piece of scene information may be a common phrase in the corresponding scenario.
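The preset association between scene information and first controls, including the predefined fallback and user-triggered edits mentioned above, can be sketched as a small lookup table. The class name, entries, and labels below are illustrative assumptions, not the disclosure's actual data model.

```python
# Sketch of a preset scene-information library mapping scene information to the
# labels of its first controls. Entries and names are illustrative assumptions.
class SceneLibrary:
    def __init__(self):
        self._scenes = {
            "red packet scenario": ["Thanks, boss", "Even one cent is love"],
            "technical scenario": ["Please mentor me", "666", "Like"],
        }

    def controls_for(self, scene_info, default=("Are you there?", "Hello")):
        # Unknown scene -> fall back to the controls of the predefined scene.
        return self._scenes.get(scene_info, list(default))

    def update(self, scene_info, controls):
        # User-triggered add/modify of a scene entry (e.g. via the second control).
        self._scenes[scene_info] = list(controls)

    def remove(self, scene_info):
        # User-triggered deletion of a scene entry.
        self._scenes.pop(scene_info, None)
```

A lookup such as `SceneLibrary().controls_for("red packet scenario")` would yield the labels to render as the popup's first controls.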
It should be noted that, in the embodiments of the present disclosure, in a case where the target scene information is determined according to the display content currently shown in the chat window, for descriptions where the display content includes more than one of information 1 to information 3, reference can be made to the descriptions of the first to third application scenarios in the above embodiments, which will not be repeated here.
It should be noted that, with the display control method provided in the embodiments of the present disclosure, the terminal device can automatically determine the target scene information corresponding to the current first communication window, obtain the at least one first control corresponding to the target scene information, and then obtain the at least one object (i.e. at least one emoji pack) corresponding to each first control. In this way, the terminal device can display on the screen the at least one first control associated with the display content of the first communication window, which helps improve the degree to which the emoji packs or other objects provided by the terminal device match the user's needs.
In the display control method provided in the embodiments of the present disclosure, when the user needs to view the at least one object corresponding to each first control, the user can trigger the terminal device to display the corresponding objects on the screen and then perform a selection input on these objects. Specifically, S204 and S205 may follow S202:
S204: The terminal device receives a second input from the user on a first target control, where the first target control is a control among the at least one first control.
Similarly, for the input form of the second input, reference can be made to the above description of the input form of the first input, which will not be repeated here. For example, the second input is a tap input on the first target control.
The object information of the first target control can serve as an index to the at least one object corresponding to the first target control.
It can be understood that the at least one object corresponding to the first target control may be an object that includes the object information of the first target control, or an object whose title (or tag) is the object information of the first target control.
S205: In response to the second input, the terminal device displays on the screen the at least one object corresponding to the first target control.
For example, when the terminal device receives an electronic red packet in the first communication window, that is, when the communication scenario of the first communication window is a red packet scenario, the object information of the first target control may be "Thanks, boss", and the at least one object corresponding to the first control is the emoji packs corresponding to "Thanks, boss".
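Using a control's object information as an index into an emoji library, as described above, can be sketched as a simple title match. The library contents and file names below are illustrative assumptions.

```python
# Sketch: the object information of the first target control indexes emoji
# packs by title. Library entries and file names are illustrative assumptions.
EMOJI_LIBRARY = [
    {"title": "Thanks, boss", "file": "thanks_boss_01.png"},
    {"title": "Thanks, boss", "file": "thanks_boss_02.png"},
    {"title": "Even one cent is love", "file": "one_cent_love.png"},
]

def objects_for_control(object_info: str):
    """Return the emoji-pack files whose title matches the control's object info."""
    return [entry["file"] for entry in EMOJI_LIBRARY
            if entry["title"] == object_info]
```

Tapping a "Thanks, boss" control would then surface every emoji pack titled "Thanks, boss" for the user to pick from.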
The display control method provided in the embodiments of the present disclosure is illustrated below through the examples in FIG. 3 and FIG. 4.
As shown in (a) of FIG. 3, a first communication window 31 displayed on the screen of the terminal device includes the title "Internet Technology Exchange Group", the communication object identifier "A Guru", and the communication content "I published a technical blog post on the forum". After the user performs a long-press input (i.e. the first input) on the communication content "I published a technical blog post on the forum", as shown in (b) of FIG. 3, the screen of the terminal device displays a "Please mentor me" control, a "666" control, and a "Like" control, where the object information of the "Please mentor me" control is "Please mentor me", the object information of the "666" control is "666", and the object information of the "Like" control is "Like".
Specifically, the communication scenario indicated by the target scene information of the first communication window 31 may be a technical scenario, and the at least one control corresponding to the target scene information is the above "Please mentor me" control, "666" control, and "Like" control.
Further, in a case where the "Please mentor me" control is the first target control, after the user's tap input (i.e. the second input) on the "Please mentor me" control, as shown in (c) of FIG. 3, the screen of the terminal device may display emoji pack 311, emoji pack 312, and emoji pack 313 corresponding to the "Please mentor me" control. That is, emoji packs 311, 312, and 313 are the at least one object corresponding to the first target control.
Further, after the user taps emoji pack 311 shown in (c) of FIG. 3, as shown in (d) of FIG. 3, the terminal device can successfully send and display emoji pack 311 in the first communication window 31. Clearly, emoji pack 311 is an emoji pack that fits the technical scenario.
As shown in (a) of FIG. 4, a first communication window 41 displayed on the screen of the terminal device includes the title "Red packet scenario", the communication object identifier "Little v", and an electronic red packet "Wishing you wealth and good fortune" as communication content. After the user performs a long-press input (i.e. the first input) on the electronic red packet "Wishing you wealth and good fortune", as shown in (b) of FIG. 4, the screen of the terminal device displays a "Thanks, boss" control, an "Even one cent is love" control, and a "The red packet is too small" control, where the object information of the "Thanks, boss" control is "Thanks, boss", the object information of the "Even one cent is love" control is "Even one cent is love", and the object information of the "The red packet is too small" control is "The red packet is too small".
Specifically, the target scene information of the first communication window 41 (denoted as scene information 2) indicates a red packet scenario, and the at least one control may be the above "Thanks, boss" control, "Even one cent is love" control, and "The red packet is too small" control.
Further, in a case where the "Thanks, boss" control is the first target control, after the user performs a tap input (i.e. the second input) on the "Thanks, boss" control, as shown in (c) of FIG. 4, the screen of the terminal device may display emoji pack 411, emoji pack 412, and emoji pack 413 corresponding to the "Thanks, boss" control. That is, emoji packs 411, 412, and 413 are the at least one object corresponding to the first target control.
Further, after the user taps emoji pack 413 shown in (c) of FIG. 4, as shown in (d) of FIG. 4, the terminal device can successfully send and display emoji pack 413 in the first communication window 41. Clearly, emoji pack 413 is an emoji pack corresponding to the target scene information indicating the red packet scenario.
It should be noted that, with the display control method provided in the embodiments of the present disclosure, the user's second input on the first target control among the at least one first control can trigger the terminal device to display the at least one object corresponding to the first target control, for example emoji packs or other objects matching the user's needs. The terminal device can thereby quickly display the emoji packs or other objects the user needs, simplifying the steps of finding them.
In a possible implementation, in the display control method provided in the embodiments of the present disclosure, the above S203 may be implemented through S203a to S203c:
S203a: The terminal device searches, according to the display content of the first communication window, a preset scene information library for first scene information corresponding to the display content of the first communication window.
The preset scene information library includes at least one piece of scene information (for example the first scene information), information on the display content of the communication window corresponding to each piece of scene information, and information on the at least one first control corresponding to each piece of scene information (i.e. the object information of each of the at least one first control).
It can be understood that the terminal device obtains the display content of the current communication window (such as the first communication window), can use the display content as an index to find the corresponding scene information (such as the first scene information) in the preset scene information library, and thereby obtains the at least one first control corresponding to that scene information.
Optionally, the preset scene information library may be stored in the terminal device or in a server interacting with the terminal device.
Further, the terminal device may provide the user with an entry for modifying the information in the preset scene information library, to allow the user to trigger deleting, adding, or modifying a piece of scene information in the library, as well as deleting, adding, or modifying certain first controls corresponding to a piece of scene information.
Optionally, the settings application of the terminal device may provide the entry for modifying the information in the preset scene information library.
S203b: In a case where the first scene information is found, the terminal device determines the first scene information as the target scene information.
It can be understood that the first scene information may be determined in advance and saved by the terminal device or the server according to the display content of the first communication window.
S203c: In a case where the first scene information is not found, the terminal device determines predefined scene information as the target scene information.
Optionally, in the embodiments of the present disclosure, the different ways in which the terminal device determines the target scene information have different priorities.
For example, the priorities of the following ways of determining the target scene information decrease in order: mode 1, mode 2, mode 3, and mode 4.
Mode 1 indicates that the terminal device determines the target scene information according to information 1; mode 2 indicates that the terminal device determines the target scene information according to information 2; mode 3 indicates that the terminal device determines the target scene information according to information 3; mode 4 indicates that predefined scene information is determined as the target scene information.
Specifically, the terminal device first determines the target scene information in the highest-priority way; if that fails, it then determines the target scene information in the next-highest-priority way.
It can be understood that, if the terminal device fails to determine the target scene information through modes 1 to 3, this indicates that the predefined scene information library does not store scene information corresponding to the relevant target information. That is, in a case where the terminal device fails to determine the target scene information according to the display content of the first communication window, it determines predefined scene information as the target scene information.
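The prioritized fallback across modes 1 to 4 can be sketched as a short chain: each mode either yields scene information or fails, and the first success wins. The function name and argument shape below are illustrative assumptions.

```python
# Sketch of the mode 1 > mode 2 > mode 3 > mode 4 priority chain. Each argument
# is the scene derived from one source (title, content, contact), or None when
# that mode fails; names are illustrative assumptions.
def determine_target_scene(title_scene, content_scene, contact_scene,
                           predefined="predefined scenario"):
    for candidate in (title_scene, content_scene, contact_scene):
        if candidate is not None:
            return candidate   # highest-priority successful mode wins
    return predefined          # mode 4: fall back to predefined scene information
```

For example, if title-based detection fails but the selected message is a red packet, the chain returns the content-derived "red packet scenario".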
It should be noted that, with the display control method provided in the embodiments of the present disclosure, since the terminal device can determine the target scene information in multiple ways, even if determination fails through a higher-priority way, the target scene information can still be determined through a lower-priority way. This ensures that the terminal device can determine the target scene information of the current first communication window and thereby obtain the at least one first control corresponding to the target scene information.
In a possible implementation, the display control method provided in the embodiments of the present disclosure may further include S206:
S206: The terminal device displays a second control on the screen.
The second control is used to indicate the target scene information.
Optionally, the terminal device may display the second control at a preset position of the screen, such as the upper-left or upper-right corner.
For example, as shown in FIG. 3, the terminal device displays a control 32 in the upper-right corner of the screen, and the control 32 shows the current target scene information "Technical scenario". As shown in FIG. 4, the terminal device displays a control 42 in the upper-right corner of the screen, and the control 42 shows the current target scene information "Red packet scenario".
In the embodiments of the present disclosure, S206 may be performed after S201, and the execution order of S202 and S206 is not specifically limited. For example, the terminal device may perform S202 and S206 at the same time, that is, display the second control and the at least one first control on the screen simultaneously.
Optionally, in the display control method provided in the embodiments of the present disclosure, the second control may be displayed first, and the at least one first control may then be displayed upon the user's trigger, that is, S206 is performed before S202. Specifically, the first input includes a first sub-input and a second sub-input, S206 may be replaced by S206a, and S202 may be replaced by S202a:
S206a: In response to the first sub-input, the terminal device displays the second control on the screen.
S202a: In response to the second sub-input, the terminal device displays the at least one first control on the screen.
The first sub-input is an input on the first communication window, the second sub-input is an input on the second control, and the second control is used to indicate the target scene information.
Similarly, for the input forms of the first sub-input and the second sub-input, reference can be made to the above description of the input form of the first input, which will not be repeated here.
For example, the first sub-input is a long-press input on the last piece of communication content in the first communication window, and the second sub-input is a long-press input on the second control.
In this way, the user can choose, according to the user's own needs, whether to trigger the terminal device to display the second control and/or the at least one first control, which helps improve the human-computer interaction performance in the process of using the terminal to find emoji packs or other objects.
Further, in the display control method provided in the embodiments of the present disclosure, S207 and S208 may follow S206:
S207: The terminal device receives a third input from the user on the second control.
Similarly, for the input form of the third input, reference can be made to the above description of the input form of the first input, which will not be repeated here.
For example, the third input includes the user's input on the control 32 shown in FIG. 3, and a modification input on the at least one control corresponding to the target scene information indicated by the control 32 (i.e. a modification of the objects corresponding to the at least one control).
S208: In response to the third input, the terminal device modifies first information, where the first information includes at least one of the following: the target scene information, and an object corresponding to a second target control, the second target control being a control among the at least one first control.
The second control is an entry for modifying the target scene information and the object corresponding to the second target control.
It can be understood that the user can modify the target scene information and the object corresponding to the second target control through the second control. If the user is dissatisfied with the recommended first controls, the user can tap the second control to edit them, triggering the addition of first controls the user is satisfied with to the scenario, or triggering the deletion of unneeded first controls, for convenient reuse later.
Further, optionally, the user may double-tap the current second control to edit its information, manually specifying the communication scenario indicated by the target scene information and the first controls contained in the corresponding popup box. The manually edited target scene information and the corresponding at least one first control may be saved in the system (such as the system of the terminal device), so that the terminal device can directly use the scene information next time.
It should be noted that, with the display control method provided in the embodiments of the present disclosure, the screen of the terminal device can display the second control, which indicates the target scene information and can trigger modification of the target scene information and of the objects corresponding to the controls among the at least one first control. The user can thereby modify the current target scene information and the at least one first control into information matching the user's needs, so that the target scene information subsequently determined by the terminal device matches the user's needs and objects (i.e. emoji packs) matching the user's needs can be obtained.
In a possible implementation, the display control method provided in the embodiments of the present disclosure may further include step S209:
S209: In a case where the first communication window is switched to a second communication window and a fourth input on the second communication window is received, the terminal device displays the at least one first control on the screen, where first target scene information corresponding to the first communication window differs from second target scene information corresponding to the second communication window.
Similarly, for the input form of the fourth input, reference can be made to the above description of the input form of the first input, which will not be repeated here.
Specifically, the first target scene information is the scene information determined by the terminal device according to the display content of the first communication window, that is, the first target scene information reflects the actual communication scenario of the display content in the first communication window. Similarly, for the second target scene information, reference can be made to the description of the first target scene information, which will not be repeated here.
Optionally, the user can perform a preset input (such as a long-press input) on the second control displayed on the first communication window to pin the communication scenario of the current window (i.e. fix the current target scene information), and can likewise unpin it. The user may also set a globally pinned communication scenario in the system (i.e. set scene information), so that a certain communication scenario is used for all communication windows.
For example, the actual communication scenario of the first communication window is a "technical scenario", that is, the communication scenario indicated by the target scene information is a "technical scenario", while the actual communication scenario of the second communication window is a "news scenario"; in this case the terminal device still determines the scene information of the second communication window to be the target scene information. Thus, when the terminal device displays the second communication window, the terminal device displays controls such as "Please mentor me", "666", and "Like" based on the first input, rather than controls such as "You're out of the loop", "True or false?", "Watching", and "Eating melon".
Optionally, the display control method provided in the embodiments of the present disclosure may further include S210 after S209:
S210: In a case where the first communication window is switched to the second communication window, the terminal device keeps displaying the second control on the screen, where the first target scene information corresponding to the first communication window differs from the second target scene information corresponding to the second communication window.
That the terminal device keeps displaying the second control on the screen means that the terminal device pins the scene information of the communication window to the target scene information and does not change the communication scenario indicated by the target scene information as the display content of the communication window changes.
It should be noted that, with the display control method provided in the embodiments of the present disclosure, when the user needs the at least one first control displayed by the terminal device in communication windows to remain unchanged, the user can trigger the terminal device to pin the scene information of the communication window, so that the same at least one first control is displayed in different communication windows. This benefits the human-computer interaction performance in the process of finding emoji packs.
Further, after the user performs an input (such as a tap input) on one of the at least one object displayed by the terminal device on the screen, the terminal device can be triggered to send that object (such as an emoji pack) in the first communication window.
Specifically, during a conversation, the user long-presses the text, voice, picture, emoji, or forwarded message sent by a certain user (i.e. a communication object) to interact with, or uses a slide gesture; by recognizing the dialog scenario (i.e. the scenario of the communication window), a scenario-based selection box pops up (i.e. a selection box including the at least one first control). After the user selects one of the options (i.e. one first control), the option's name is searched by popularity in an emoji library (the picture library of a browser application or a third-party emoji library may be used) and a set of emoji packs is recommended, from which the user can select one to send.
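The popularity-ranked recommendation step described above can be sketched as follows; the search callback, result shape, and the "heat" field are illustrative assumptions about how an emoji library might expose popularity.

```python
# Sketch: after the user picks a popup option, search an emoji library by the
# option's name and recommend results ranked by popularity ("heat" is an
# assumed popularity field; the search callback is supplied by the caller).
def recommend_emojis(option_name, search_emojis, limit=3):
    results = search_emojis(option_name)  # e.g. query a third-party emoji library
    ranked = sorted(results, key=lambda entry: entry["heat"], reverse=True)
    return ranked[:limit]
```

A caller would pass its own search function, for example one backed by a browser application's picture library, and present the returned list for the user to send.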
Optionally, in the display control method provided in the embodiments of the present disclosure, the object sending step may be triggered through S211 and S212.
S211: The terminal device receives a fifth input from the user on a target object.
The target object is an object among the at least one object corresponding to the first target control.
Similarly, for the input form of the fifth input, reference can be made to the above description of the input form of the first input, which will not be repeated here. For example, the fifth input is a double-tap input on the target object.
S212: In response to the fifth input, the terminal device sends the target object in the first communication window.
In this way, since the target object displayed by the terminal device can be an object matching the user's needs, this helps reduce the user's operations of triggering the terminal device to find emoji packs or other objects, and improves the convenience of sending objects.
FIG. 5 is a schematic structural diagram of a possible terminal device according to an embodiment of the present disclosure. The terminal device 50 shown in FIG. 5 includes a receiving module 51 and a display module 52. The receiving module 51 is configured to receive a first input from a user in a case where a first communication window is displayed on a screen of the terminal device; the display module 52 is configured to display at least one first control on the screen in response to the first input received by the receiving module 51; where the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
Optionally, the at least one first control corresponds to target scene information; the terminal device 50 further includes a determining module 53, configured to determine the target scene information according to the display content of the first communication window before the display module 52 displays the at least one first control on the screen.
Optionally, the receiving module 51 is further configured to receive, after the display module 52 displays the at least one first control on the screen, a second input from the user on a first target control, the first target control being a control among the at least one first control; the display module 52 is further configured to display, on the screen in response to the second input received by the receiving module 51, the at least one object corresponding to the first target control.
Optionally, the determining module 53 is specifically configured to: search, according to the display content of the first communication window, a preset scene information library for first scene information corresponding to the display content of the first communication window; in a case where the first scene information is found, determine the first scene information as the target scene information; and in a case where the first scene information is not found, determine predefined scene information as the target scene information.
Optionally, the display module 52 is further configured to display a second control on the screen, where the second control is used to indicate the target scene information.
Optionally, the terminal device 50 further includes a modifying module 54; the receiving module 51 is further configured to receive, after the display module 52 displays the second control on the screen, a third input from the user on the second control; the modifying module 54 is configured to modify first information in response to the third input received by the receiving module 51, where the first information includes at least one of the following: the target scene information, and an object corresponding to a second target control, the second target control being a control among the at least one first control.
Optionally, the display module 52 is further configured to display the at least one first control on the screen in a case where the first communication window is switched to a second communication window and a fourth input on the second communication window is received, where first target scene information corresponding to the first communication window differs from second target scene information corresponding to the second communication window.
Optionally, the display module 52 is further configured to, after displaying the second control on the screen, keep displaying the second control on the screen in a case where the first communication window is switched to the second communication window, where the first target scene information corresponding to the first communication window differs from the second target scene information corresponding to the second communication window.
Optionally, the first input includes a first sub-input and a second sub-input; the display module 52 is specifically configured to display the second control on the screen in response to the first sub-input, and display the at least one first control on the screen in response to the second sub-input; where the first sub-input is an input on the first communication window, the second sub-input is an input on the second control, and the second control is used to indicate the target scene information.
Optionally, the terminal device 50 further includes a sending module 55; the receiving module 51 is further configured to receive a fifth input from the user on a target object; the sending module 55 is configured to send the target object in the first communication window in response to the fifth input received by the receiving module 51; where the target object is an object among the at least one object corresponding to the first target control.
The terminal device 50 provided in the embodiments of the present disclosure can implement each process implemented by the terminal device in the above method embodiments; to avoid repetition, details are not repeated here.
With the terminal device provided in the embodiments of the present disclosure, in a case where a first communication window is displayed on the screen of the terminal device, a first input from a user can be received, and, in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen. Each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object. In other words, when the user needs to find an object such as an emoji pack, the terminal device can display a small number of first controls associated with the currently displayed content instead of directly displaying a large, unordered collection of objects such as emoji packs, so that the user can quickly and conveniently trigger the terminal device to obtain the at least one object corresponding to a first target control, such as an emoji pack matching the user's needs. This avoids the user searching back and forth through a large, unordered collection of emoji packs, that is, it simplifies the steps of finding an emoji pack.
FIG. 6 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present disclosure. The terminal device 100 includes, but is not limited to, components such as a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, and a power source 111. Those skilled in the art can understand that the terminal device structure shown in FIG. 6 does not constitute a limitation on the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or use a different component arrangement. In the embodiments of the present disclosure, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal device, a wearable device, a pedometer, and the like.
The user input unit 107 is configured to receive a first input from a user in a case where a first communication window is displayed on a screen of the terminal device; the display unit 106 is configured to display at least one first control on the screen in response to the first input received by the user input unit 107; where the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object.
With the terminal device provided in the embodiments of the present disclosure, in a case where a first communication window is displayed on the screen of the terminal device, a first input from a user can be received, and, in response to the first input, at least one first control associated with the display content of the first communication window is displayed on the screen. Each of the at least one first control corresponds to at least one object, and the display content of the first communication window includes at least one of a communication title, communication content, and a communication object. In other words, when the user needs to find an object such as an emoji pack, the terminal device can display a small number of first controls associated with the currently displayed content instead of directly displaying a large, unordered collection of objects such as emoji packs, so that the user can quickly and conveniently trigger the terminal device to obtain the at least one object corresponding to a first target control, such as an emoji pack matching the user's needs. This avoids the user searching back and forth through a large, unordered collection of emoji packs, that is, it simplifies the steps of finding an emoji pack.
It should be understood that, in the embodiments of the present disclosure, the radio frequency unit 101 may be used to receive and send signals during information transmission and reception or during a call; specifically, it receives downlink data from a base station and passes it to the processor 110 for processing, and sends uplink data to the base station. Generally, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with networks and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband Internet access through the network module 102, for example helping the user send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 103 can convert audio data received by the radio frequency unit 101 or the network module 102, or stored in the memory 109, into an audio signal and output it as sound. Moreover, the audio output unit 103 can also provide audio output related to a specific function performed by the terminal device 100 (for example, call signal reception sound, message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042. The graphics processing unit 1041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processing unit 1041 may be stored in the memory 109 (or another storage medium) or sent via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process such sound into audio data; in a phone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 further includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 1061 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As a type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in various directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used to recognize terminal device posture (such as portrait/landscape switching, related games, and magnetometer posture calibration) and vibration-recognition related functions (such as a pedometer and tapping); the sensor 105 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
The display unit 106 is used to display information input by the user or information provided to the user. The display unit 106 may include a display panel 1061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also called a touchscreen, can collect the user's touch operations on or near it (for example, operations on or near the touch panel 1071 performed by the user with a finger, stylus, or any other suitable object or accessory). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1071, the user input unit 107 may also include other input devices 1072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which will not be repeated here.
Further, the touch panel 1071 can be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the operation is transmitted to the processor 110 to determine the type of the touch event, and the processor 110 then provides corresponding visual output on the display panel 1061 according to the type of the touch event. Although in FIG. 6 the touch panel 1071 and the display panel 1061 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to realize the input and output functions of the terminal device, which is not specifically limited here.
The interface unit 108 is an interface for connecting an external device with the terminal device 100. For example, the external device may include a wired or wireless headset port, an external power source (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, a headphone port, and the like. The interface unit 108 may be used to receive input (for example, data information, power, etc.) from an external device and transmit the received input to one or more elements in the terminal device 100, or may be used to transfer data between the terminal device 100 and an external device.
The memory 109 can be used to store software programs and various data. The memory 109 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data (such as audio data, a phone book, etc.) created according to the use of the mobile phone. In addition, the memory 109 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
The processor 110 is the control center of the terminal device. It uses various interfaces and lines to connect the various parts of the entire terminal device, and, by running or executing the software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, performs the various functions of the terminal device and processes data, thereby monitoring the terminal device as a whole. The processor 110 may include one or more processing units; optionally, the processor 110 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 110.
The terminal device 100 may further include a power source 111 (such as a battery) for supplying power to the various components. Optionally, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions such as charging management, discharging management, and power consumption management through the power management system.
In addition, the terminal device 100 includes some functional modules not shown, which will not be repeated here.
Optionally, an embodiment of the present disclosure further provides a terminal device, including a processor 110, a memory 109, and a computer program stored on the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the above method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the above method embodiments and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, herein, the terms "comprise", "include", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes the element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present disclosure, in essence or in the part that contributes to the related art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to execute the method described in each embodiment of the present disclosure.
The embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is not limited to the above specific implementations, which are merely illustrative rather than restrictive. Under the inspiration of the present disclosure, those of ordinary skill in the art can make many other forms without departing from the spirit of the present disclosure and the scope protected by the claims, all of which fall within the protection of the present disclosure.
Claims (20)
- A display control method, comprising: in a case where a first communication window is displayed on a screen of a terminal device, receiving a first input from a user; and in response to the first input, displaying at least one first control on the screen; wherein the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window comprises at least one of a communication title, communication content, and a communication object.
- The method according to claim 1, wherein the at least one first control corresponds to target scene information; and before the displaying at least one first control on the screen, the method further comprises: determining the target scene information according to the display content of the first communication window.
- The method according to claim 1 or 2, wherein, after the displaying at least one first control on the screen, the method further comprises: receiving a second input from the user on a first target control, the first target control being a control among the at least one first control; and in response to the second input, displaying on the screen at least one object corresponding to the first target control.
- The method according to claim 2, wherein the determining the target scene information according to the display content of the first communication window comprises: searching, according to the display content of the first communication window, a preset scene information library for first scene information corresponding to the display content of the first communication window; in a case where the first scene information is found, determining the first scene information as the target scene information; and in a case where the first scene information is not found, determining predefined scene information as the target scene information.
- The method according to claim 2, further comprising: displaying a second control on the screen; wherein the second control is used to indicate the target scene information.
- The method according to claim 5, wherein, after the displaying a second control on the screen, the method further comprises: receiving a third input from the user on the second control; and in response to the third input, modifying first information, the first information comprising at least one of the following: the target scene information, and an object corresponding to a second target control, the second target control being a control among the at least one first control.
- The method according to claim 2, further comprising: in a case where the first communication window is switched to a second communication window and a fourth input on the second communication window is received, displaying the at least one first control on the screen, wherein first target scene information corresponding to the first communication window differs from second target scene information corresponding to the second communication window.
- The method according to claim 5, wherein, after the displaying a second control on the screen, the method further comprises: in a case where the first communication window is switched to a second communication window, keeping displaying the second control on the screen, wherein first target scene information corresponding to the first communication window differs from second target scene information corresponding to the second communication window.
- The method according to claim 2, wherein the first input comprises a first sub-input and a second sub-input; and the displaying at least one first control on the screen in response to the first input comprises: in response to the first sub-input, displaying a second control on the screen; and in response to the second sub-input, displaying the at least one first control on the screen; wherein the first sub-input is an input on the first communication window, the second sub-input is an input on the second control, and the second control is used to indicate the target scene information.
- The method according to claim 3, further comprising: receiving a fifth input from the user on a target object; and in response to the fifth input, sending the target object in the first communication window; wherein the target object is an object among the at least one object corresponding to the first target control.
- A terminal device, comprising a receiving module and a display module; wherein the receiving module is configured to receive a first input from a user in a case where a first communication window is displayed on a screen of the terminal device; and the display module is configured to display at least one first control on the screen in response to the first input received by the receiving module; wherein the at least one first control is associated with display content of the first communication window, each of the at least one first control corresponds to at least one object, and the display content of the first communication window comprises at least one of a communication title, communication content, and a communication object.
- The terminal device according to claim 11, wherein the at least one first control corresponds to target scene information; the terminal device further comprises a determining module; and the determining module is configured to determine the target scene information according to the display content of the first communication window before the display module displays the at least one first control on the screen.
- The terminal device according to claim 11 or 12, wherein the receiving module is further configured to receive, after the display module displays the at least one first control on the screen, a second input from the user on a first target control, the first target control being a control among the at least one first control; and the display module is further configured to display, on the screen in response to the second input received by the receiving module, at least one object corresponding to the first target control.
- The terminal device according to claim 12, wherein the determining module is specifically configured to: search, according to the display content of the first communication window, a preset scene information library for first scene information corresponding to the display content of the first communication window; in a case where the first scene information is found, determine the first scene information as the target scene information; and in a case where the first scene information is not found, determine predefined scene information as the target scene information.
- The terminal device according to claim 12, wherein the display module is further configured to display a second control on the screen; and the second control is used to indicate the target scene information.
- The terminal device according to claim 15, further comprising a modifying module; wherein the receiving module is further configured to receive, after the display module displays the second control on the screen, a third input from the user on the second control; and the modifying module is configured to modify first information in response to the third input received by the receiving module, the first information comprising at least one of the following: the target scene information, and an object corresponding to a second target control, the second target control being a control among the at least one first control.
- The terminal device according to claim 12, wherein the display module is further configured to display the at least one first control on the screen in a case where the first communication window is switched to a second communication window and a fourth input on the second communication window is received, wherein first target scene information corresponding to the first communication window differs from second target scene information corresponding to the second communication window.
- The terminal device according to claim 13, further comprising a sending module; wherein the receiving module is further configured to receive a fifth input from the user on a target object; and the sending module is configured to send the target object in the first communication window in response to the fifth input received by the receiving module; wherein the target object is an object among the at least one object corresponding to the first target control.
- A terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein, when executed by the processor, the computer program implements the steps of the display control method according to any one of claims 1 to 10.
- A computer-readable storage medium, on which a computer program is stored, wherein, when executed by a processor, the computer program implements the steps of the display control method according to any one of claims 1 to 10.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ES20855765T ES3026583T3 (en) | 2019-08-21 | 2020-06-29 | Display control method and terminal device |
| EP20855765.2A EP4020194B1 (en) | 2019-08-21 | 2020-06-29 | Display control method and terminal device |
| US17/672,455 US11989390B2 (en) | 2019-08-21 | 2022-02-15 | Display control method and terminal device |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201910774620.3A CN110609723B (zh) | 2019-08-21 | 2019-08-21 | 一种显示控制方法及终端设备 |
| CN201910774620.3 | 2019-08-21 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/672,455 Continuation US11989390B2 (en) | 2019-08-21 | 2022-02-15 | Display control method and terminal device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2021031701A1 true WO2021031701A1 (zh) | 2021-02-25 |
Family
ID=68890775
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2020/099039 Ceased WO2021031701A1 (zh) | 2019-08-21 | 2020-06-29 | 显示控制方法及终端设备 |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US11989390B2 (zh) |
| EP (1) | EP4020194B1 (zh) |
| CN (1) | CN110609723B (zh) |
| ES (1) | ES3026583T3 (zh) |
| WO (1) | WO2021031701A1 (zh) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114489400A (zh) * | 2022-01-13 | 2022-05-13 | 维沃移动通信有限公司 | 界面控制方法、装置、电子设备及介质 |
| EP4336355A4 (en) * | 2021-06-11 | 2024-08-21 | Beijing Zitiao Network Technology Co., Ltd. | INTERACTION METHOD AND APPARATUS, MEDIUM AND ELECTRONIC DEVICE |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110609723B (zh) * | 2019-08-21 | 2021-08-24 | 维沃移动通信有限公司 | 一种显示控制方法及终端设备 |
| CN113051427A (zh) * | 2019-12-10 | 2021-06-29 | 华为技术有限公司 | 一种表情制作方法和装置 |
| CN111625308B (zh) * | 2020-04-28 | 2022-02-11 | 北京字节跳动网络技术有限公司 | 一种信息展示方法、装置和电子设备 |
| CN121256077A (zh) * | 2020-05-09 | 2026-01-02 | 腾讯科技(深圳)有限公司 | 一种图像推荐方法、装置、客户端及存储介质 |
| US11502983B2 (en) * | 2020-06-08 | 2022-11-15 | Snap Inc. | Reply interface with selectable stickers for messaging system |
| CN112732389B (zh) * | 2021-01-19 | 2025-05-23 | 维沃移动通信有限公司 | 群消息的显示方法及装置 |
| CN112989077B (zh) * | 2021-03-10 | 2023-09-22 | 维沃移动通信有限公司 | 表情对象的管理方法和装置 |
| CN113589956A (zh) * | 2021-07-13 | 2021-11-02 | 北京快乐茄信息技术有限公司 | 常用语处理方法及装置、移动终端及存储介质 |
| CN116440503A (zh) * | 2023-03-28 | 2023-07-18 | 网易(杭州)网络有限公司 | 虚拟角色的互动方法、装置、电子设备及存储介质 |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104394057A (zh) * | 2013-11-04 | 2015-03-04 | 贵阳朗玛信息技术股份有限公司 | 表情推荐方法及装置 |
| CN104836726A (zh) * | 2015-04-01 | 2015-08-12 | 网易(杭州)网络有限公司 | 一种显示聊天表情的方法及装置 |
| US20160357402A1 (en) * | 2015-06-02 | 2016-12-08 | Facebook, Inc. | Methods and Systems for Providing User Feedback Using an Emotion Scale |
| CN106327342A (zh) * | 2016-08-17 | 2017-01-11 | 腾讯科技(深圳)有限公司 | 一种表情包的处理方法及终端 |
| CN109977409A (zh) * | 2019-03-28 | 2019-07-05 | 北京科技大学 | 一种基于用户聊天习惯的智能表情推荐方法和系统 |
| CN110336733A (zh) * | 2019-04-30 | 2019-10-15 | 上海连尚网络科技有限公司 | 一种呈现表情包的方法与设备 |
| CN110609723A (zh) * | 2019-08-21 | 2019-12-24 | 维沃移动通信有限公司 | 一种显示控制方法及终端设备 |
Family Cites Families (28)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR100782081B1 (ko) * | 2006-09-20 | 2007-12-04 | 삼성전자주식회사 | 휴대 단말기의 터치 스크린을 이용한 데이터 통신 방법 |
| CN102289339B (en) * | 2010-06-21 | 2013-10-30 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for displaying emoticon information |
| JP5413448B2 (en) * | 2011-12-23 | 2014-02-12 | Denso Corporation | Display system, display device, and operation device |
| CN103984494A (en) * | 2013-02-07 | 2014-08-13 | Shanghai Bomao Information Technology Co., Ltd. | Intuitive user interaction system and method between multiple devices |
| KR102057629B1 (en) * | 2013-02-19 | 2020-01-22 | LG Electronics Inc. | Mobile terminal and control method of mobile terminal |
| US10712936B2 (en) * | 2013-03-18 | 2020-07-14 | Lenovo (Beijing) Co., Ltd. | First electronic device and information processing method applicable to first or second electronic device comprising a first application |
| CN104780093B (en) * | 2014-01-15 | 2018-05-01 | Alibaba Group Holding Limited | Emoticon information processing method and apparatus in instant messaging |
| CN104063683B (en) * | 2014-06-06 | 2017-05-17 | Beijing Sogou Technology Development Co., Ltd. | Facial-recognition-based emoticon input method and apparatus |
| CN104076944B (en) | 2014-06-06 | 2017-03-01 | Beijing Sogou Technology Development Co., Ltd. | Method and apparatus for inputting chat emoticons |
| CN106201161B (en) * | 2014-09-23 | 2021-09-03 | Beijing Samsung Telecommunication Technology Research Co., Ltd. | Display method and system for electronic device |
| JP6461630B2 (en) * | 2015-02-05 | 2019-01-30 | Nintendo Co., Ltd. | Communication system, communication device, program, and display method |
| US20170118145A1 (en) * | 2015-10-21 | 2017-04-27 | Futurefly Ltd. | Method of using emoji to control and enrich 3d chat environments |
| CN106789543A (en) | 2015-11-20 | 2017-05-31 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for sending emoticon images in a conversation |
| US11112963B2 (en) * | 2016-05-18 | 2021-09-07 | Apple Inc. | Devices, methods, and graphical user interfaces for messaging |
| US20180077096A1 (en) * | 2016-09-13 | 2018-03-15 | Mark A. DeMattei | Messaging environment for mobile device with multitask toolbar, search engine and keyboard control access to apps and centralized functionality |
| CN107038214A (en) * | 2017-03-06 | 2017-08-11 | Beijing Xiaomi Mobile Software Co., Ltd. | Emoticon information processing method and apparatus |
| US10529115B2 (en) * | 2017-03-20 | 2020-01-07 | Google Llc | Generating cartoon images from photos |
| CN107315488A (en) * | 2017-05-31 | 2017-11-03 | Beijing Anyun Shiji Technology Co., Ltd. | Emoticon information search method, apparatus, and mobile terminal |
| US10348659B1 (en) * | 2017-12-21 | 2019-07-09 | International Business Machines Corporation | Chat message processing |
| CN108255316B (en) * | 2018-01-23 | 2021-09-10 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for dynamically adjusting emojis, electronic device, and computer-readable storage medium |
| CN108363536 (en) | 2018-02-27 | 2018-08-03 | Vivo Mobile Communication Co., Ltd. | Emoticon package usage method and terminal device |
| CN108521366 (en) * | 2018-03-27 | 2018-09-11 | Lenovo (Beijing) Co., Ltd. | Emoticon pushing method and electronic device |
| CN108595237 (en) * | 2018-03-30 | 2018-09-28 | Vivo Mobile Communication Co., Ltd. | Content display method and terminal |
| CN108563378B (en) * | 2018-04-25 | 2020-06-16 | Vivo Mobile Communication Co., Ltd. | Message management method and terminal |
| CN108958593B (en) * | 2018-08-02 | 2021-01-08 | Vivo Mobile Communication Co., Ltd. | Method for determining a communication object and mobile terminal |
| CN109361814 (en) * | 2018-09-25 | 2019-02-19 | Lenovo (Beijing) Co., Ltd. | Control method and electronic device |
| CN109828731B (en) * | 2018-12-18 | 2022-04-15 | Vivo Mobile Communication Co., Ltd. | Search method and terminal device |
| CN109710753B (en) | 2018-12-29 | 2021-08-03 | Beijing Kingsoft Security Software Co., Ltd. | Shortcut information generation method, apparatus, and electronic device based on personalized themes |
2019
- 2019-08-21: CN application CN201910774620.3A filed, granted as CN110609723B (active)

2020
- 2020-06-29: PCT application PCT/CN2020/099039 filed, published as WO2021031701A1 (not in force, ceased)
- 2020-06-29: EP application EP20855765.2A filed, granted as EP4020194B1 (active)
- 2020-06-29: ES application ES20855765T filed, granted as ES3026583T3 (active)

2022
- 2022-02-15: US application US17/672,455 filed, granted as US11989390B2 (active)
Patent Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104394057A (en) * | 2013-11-04 | 2015-03-04 | Guiyang Longmaster Information & Technology Co., Ltd. | Emoticon recommendation method and apparatus |
| CN104836726A (en) * | 2015-04-01 | 2015-08-12 | NetEase (Hangzhou) Network Co., Ltd. | Method and apparatus for displaying chat emoticons |
| US20160357402A1 (en) * | 2015-06-02 | 2016-12-08 | Facebook, Inc. | Methods and Systems for Providing User Feedback Using an Emotion Scale |
| CN106327342A (en) * | 2016-08-17 | 2017-01-11 | Tencent Technology (Shenzhen) Co., Ltd. | Emoticon package processing method and terminal |
| CN109977409A (en) * | 2019-03-28 | 2019-07-05 | University of Science and Technology Beijing | Intelligent emoticon recommendation method and system based on user chat habits |
| CN110336733A (en) * | 2019-04-30 | 2019-10-15 | Shanghai Lianshang Network Technology Co., Ltd. | Method and device for presenting emoticon packages |
| CN110609723A (en) * | 2019-08-21 | 2019-12-24 | Vivo Mobile Communication Co., Ltd. | Display control method and terminal device |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP4020194A4 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP4336355A4 (en) * | 2021-06-11 | 2024-08-21 | Beijing Zitiao Network Technology Co., Ltd. | INTERACTION METHOD AND APPARATUS, MEDIUM AND ELECTRONIC DEVICE |
| CN114489400A (en) * | 2022-01-13 | 2022-05-13 | Vivo Mobile Communication Co., Ltd. | Interface control method, apparatus, electronic device, and medium |
Also Published As
| Publication number | Publication date |
|---|---|
| ES3026583T3 (en) | 2025-06-11 |
| CN110609723A (zh) | 2019-12-24 |
| CN110609723B (zh) | 2021-08-24 |
| US11989390B2 (en) | 2024-05-21 |
| US20220171507A1 (en) | 2022-06-02 |
| EP4020194A4 (en) | 2022-10-19 |
| EP4020194A1 (en) | 2022-06-29 |
| EP4020194B1 (en) | 2025-04-09 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2021031701A1 (en) | | Display control method and terminal device |
| WO2020199758A1 (en) | | Message display method and terminal device |
| WO2021197263A1 (en) | | Content sharing method and electronic device |
| CN109857290B (en) | | Unread content display method and terminal device |
| WO2021093429A1 (en) | | Group chat method and electronic device |
| WO2019184666A1 (en) | | Content display method and terminal |
| WO2021169954A1 (en) | | Search method and electronic device |
| CN109471690B (en) | | Message display method and terminal device |
| WO2021136159A1 (en) | | Screenshot method and electronic device |
| US12028476B2 (en) | | Conversation creating method and terminal device |
| WO2021110053A1 (en) | | File sending method and terminal device |
| WO2021104348A1 (en) | | Message processing method and electronic device |
| WO2021057290A1 (en) | | Information control method and electronic device |
| WO2020215932A1 (en) | | Method for displaying unread messages and terminal device |
| WO2021121099A1 (en) | | Message notification method and electronic device |
| WO2020168882A1 (en) | | Interface display method and terminal device |
| WO2020192282A1 (en) | | Notification message display method and terminal device |
| WO2021057301A1 (en) | | File control method and electronic device |
| CN110471711A (en) | | Application preloading method and terminal device |
| CN108600078A (en) | | Communication method and terminal |
| CN110489031A (en) | | Content display method and terminal device |
| WO2021017737A1 (en) | | Message sending method and terminal device |
| WO2020215967A1 (en) | | Content selection method and terminal device |
| CN110196664A (en) | | Control method and terminal device |
| WO2021012955A1 (en) | | Interface switching method and terminal device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 20855765; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2020855765; Country of ref document: EP; Effective date: 2022-03-21 |
| | WWG | WIPO information: grant in national office | Ref document number: 2020855765; Country of ref document: EP |