WO2026005253A1 - Electronic device comprising a projector and corresponding operating method - Google Patents

Electronic device comprising a projector and corresponding operating method

Info

Publication number
WO2026005253A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
electronic device
area
information
projector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/KR2025/005797
Other languages
English (en)
Korean (ko)
Inventor
이보나
강승규
김무정
손동일
엄준훤
황선민
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020240088096A (external priority: KR20260001430A)
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of WO2026005253A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N 21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/12 Picture reproducers
    • H04N 9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]

Definitions

  • Various embodiments according to the present disclosure relate to an electronic device including a projector and a method of operating the same.
  • a display device is a device that outputs content, such as video, images, or text, and can display the content through a display panel.
  • a projector is one such display device.
  • a projector projects a screen containing the content onto the surface of an external object (e.g., a projection screen) by emitting light onto that surface. Users can view the content through the screen projected onto the surface.
  • An artificial intelligence system (or integrated intelligence system) is a computer system that implements intelligence and makes judgments based on the results of machine learning.
  • Artificial intelligence technology can be composed of machine learning (deep learning) technology that uses an algorithm to classify/learn the characteristics of input data on its own, and element technologies that imitate the functions of the human brain, such as cognition and judgment, by utilizing machine learning algorithms.
  • An artificial intelligence model based on generative artificial intelligence can generate content (e.g., text, images, or other media) in response to a prompt containing natural language text requesting the performance of a task.
  • An electronic device may include a memory for storing instructions, a projector for projecting a screen including contents onto a projection surface of an external object, at least one sensor configured to obtain environmental information about a surrounding environment in which the projector projects the screen, and at least one processor. At least one processor may, when the instructions are executed, execute an application for displaying a screen including first contents through the projector. When the instructions are executed, at least one processor may, based on the executed application, determine a projection area including an area where light emitted from the projector is irradiated onto the projection surface. When the instructions are executed, at least one processor may, based on the environmental information, determine a content display area within the projection area.
  • At least one processor when the instructions are executed, can obtain data including second content generated through a machine-learned artificial intelligence model from content information related to the first content based on context information including at least one of the content display area or the environment information. At least one processor, when the instructions are executed, can project a screen including the second content onto the projection surface through the projector.
  • a method of operating an electronic device may include an operation of executing an application that displays a screen including first content through a projector that projects a screen including content onto a projection surface of an external object, and an operation of determining a projection area including an area where light emitted from the projector is irradiated onto the projection surface based on the executed application, and an operation of determining a content display area within the projection area based on environmental information acquired by at least one sensor configured to acquire environmental information about a surrounding environment in which the projector projects a screen, and an operation of acquiring data including second content generated through a machine-learned artificial intelligence model from content information related to the first content based on context information including at least one of the content display area and the environmental information, and an operation of projecting a screen including the second content onto the projection surface through the projector.
  • a computer-readable recording medium may store a computer program that causes an electronic device to execute the above-described method.
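  • Purely for illustration, the operating method above can be summarized as the following Python sketch; every object, method, and field name in it (determine_projection_area, environmental_information, content_display_area, generate, project) is a hypothetical placeholder rather than an interface defined in this disclosure.

```python
def operate(projector, sensors, ai_model, first_content):
    """Hypothetical end-to-end flow; all attribute and method names are assumptions."""
    # Determine the projection area that the projector's light reaches on the surface.
    projection_area = projector.determine_projection_area(sensors)
    # Determine the content display area within the projection area from
    # environmental information obtained by at least one sensor.
    environment = sensors.environmental_information()
    display_area = projection_area.content_display_area(environment)
    # Build context information and request second content, generated by a
    # machine-learned AI model, from content information about the first content.
    context = {"content_display_area": display_area, "environment": environment}
    second_content = ai_model.generate(first_content.content_information(), context)
    # Project a screen including the second content onto the projection surface.
    projector.project(second_content, area=display_area)
    return second_content
```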
  • FIG. 1 is a block diagram of an electronic device within a network environment according to various embodiments of the present disclosure.
  • FIG. 2 is a block diagram illustrating components of an electronic device according to various embodiments of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of an electronic device including a projector projecting a screen according to one embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating a process in which an electronic device outputs content through a projector according to one embodiment of the present disclosure.
  • FIG. 5 is a flowchart illustrating a process of an electronic device including a projector generating content using a prompt according to one embodiment of the present disclosure.
  • FIG. 6 is a flowchart illustrating a process in which an electronic device outputs content based on the result of identifying an obstacle, according to one embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of second content generated from first content based on an area where an obstacle is identified by an electronic device according to one embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of second content generated from first content based on an area where an obstacle is identified by an electronic device according to one embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a process by which an electronic device generates a prompt according to one embodiment of the present disclosure.
  • FIG. 10 is a flowchart illustrating a process for determining whether content is viewable based on a plurality of conditions according to one embodiment of the present disclosure.
  • FIG. 11 is a diagram illustrating an example of displaying content based on whether the content is viewable, according to one embodiment of the present disclosure.
  • FIG. 12 is a diagram illustrating an example of obtaining second content by summarizing text included in first content according to one embodiment of the present disclosure.
  • FIG. 13 is a diagram illustrating an example of obtaining second content by summarizing an image included in first content and changing the font of text according to one embodiment of the present disclosure.
  • FIG. 14 is a flowchart illustrating a process for generating content in a blank space within a projecting area according to one embodiment of the present disclosure.
  • FIG. 15 is a drawing for explaining an example of creating content in a blank space within a projecting area according to one embodiment of the present disclosure.
  • FIG. 16 is a flowchart illustrating a process for generating second content by replacing a portion of first content according to one embodiment of the present disclosure.
  • FIG. 17 is a drawing for explaining an example of generating second content by replacing a portion of first content according to one embodiment of the present disclosure.
  • FIG. 18 is a flowchart illustrating a process for generating second content by taking into account user preference information according to one embodiment of the present disclosure.
  • FIG. 19 is a diagram illustrating an example of generating content expanded to a certain area by considering user preference information according to one embodiment of the present disclosure.
  • FIG. 20 is a drawing showing a cylindrical electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • FIG. 21 is a drawing showing a robotic electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • FIG. 22 is a drawing showing a box-shaped electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • FIG. 23 is a diagram for explaining an example of a method of operating a machine-learned artificial intelligence model in an electronic device according to one embodiment of the present disclosure.
  • FIG. 24 is a flowchart illustrating a process for an electronic device to generate content according to one embodiment of the present disclosure.
  • connection lines or connecting members between components depicted in the drawings are merely exemplary representations of functional connections and/or physical or circuit connections.
  • connections between components may be represented by various functional connections, physical connections, or circuit connections that may be replaced or added.
  • the projection distance (e.g., the projection distance (330) of FIG. 3) may refer to the distance between a projector (e.g., the display module (160) of FIG. 1, the projector (220) of FIG. 2, and the electronic device (310) including the projector of FIG. 3) and a projection area (e.g., the projection area (320) of FIG. 3) on which a screen including content is projected onto a projection surface of an external object.
  • the projection distance may refer to the shortest distance between the projector and the projection area.
  • the projection distance may also refer to the distance between the projector and the center of the projection area.
  • a projection area (e.g., a projection area (320) of FIG. 3) may refer to an area of a projection surface of an external object onto which light emitted by a projector (e.g., a display module (160) of FIG. 1, a projector (220) of FIG. 2, an electronic device (310) including a projector of FIG. 3) of an electronic device (e.g., an electronic device (101) of FIG. 1, an electronic device (200) of FIG. 2, an electronic device including a projector of FIG. 3) may be projected.
  • the projection area may refer to an area onto which a screen including content may be projected by the electronic device.
  • the projection area may be determined based on an angular range and a projection distance at which the projector may emit light to display a screen.
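  • As a simple geometric illustration (not a formula given in this disclosure), a rectangular projection area on a flat, perpendicular surface could be estimated from the projection distance and the projector's emission angles as in the sketch below; the angle values are assumptions.

```python
import math

def projection_area_size(projection_distance_m, h_angle_deg, v_angle_deg):
    """Estimate the width and height of a flat, perpendicular projection area
    from the projection distance and the projector's emission angles.
    This is a simple geometric assumption, not a method defined in the patent."""
    width = 2 * projection_distance_m * math.tan(math.radians(h_angle_deg) / 2)
    height = 2 * projection_distance_m * math.tan(math.radians(v_angle_deg) / 2)
    return width, height

# Example: a projector 2.5 m from the wall with an assumed 40 x 23 degree emission range.
w, h = projection_area_size(2.5, 40.0, 23.0)
print(f"projection area is roughly {w:.2f} m x {h:.2f} m")
```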
  • the content display area may refer to an area used to display content generated by an electronic device (e.g., the electronic device (101) of FIG. 1, the electronic device (200) of FIG. 2, or the electronic device (310) including the projector of FIG. 3) within a projection area (e.g., the projection area (320) of FIG. 3).
  • for example, the content display area may refer to an area onto which second content generated through a machine-learned artificial intelligence model is projected.
  • context information may include information related to a situation in which an electronic device displays content.
  • the context information may include information collected by a sensor (e.g., a sensor module (176) of FIG. 1, a sensor unit (210) of FIG. 2), information stored in a memory (e.g., a memory (130) of FIG. 1, a memory (230) of FIG. 2), and information generated through a machine-learned artificial intelligence model.
  • the context information may include at least one of environmental information, spatial information, or situational information.
  • environmental information may refer to information about the surrounding environment of a location where a projector (e.g., a display module (160) of FIG. 1, a projector (220) of FIG. 2, or an electronic device (310) including a projector of FIG. 3) is placed.
  • the environmental information may include at least one of a distance between a projection area and a user, an area of the projection area, the number of users, or age information of the users.
  • the environmental information may include information collected through a sensor of an electronic device (e.g., a sensor module (176) of FIG. 1, a sensor unit (210) of FIG. 2) or a sensor included in an external device (e.g., an electronic device (102, 104) of FIG. 1).
  • spatial information may refer to information related to a space in which an electronic device is placed.
  • the spatial information may include information detected by an image sensor among sensors (e.g., the sensor module (176) of FIG. 1, the sensor unit (210) of FIG. 2).
  • the spatial information may include information identified through image recognition of an image acquired by the image sensor.
  • the spatial information may include information related to an object located around an electronic device (e.g., the electronic device (101) of FIG. 1, the electronic device (200) of FIG. 2, the electronic device including a projector (310) of FIG. 3) detected by the image sensor.
  • the spatial information may include at least one of projecting distance information, projecting area information, user location information, or obstacle information.
  • situational information may refer to information related to a situation in which a projector (e.g., a display module (160) of FIG. 1, a projector (220) of FIG. 2, or an electronic device (310) including a projector of FIG. 3) projects a screen.
  • the situational information may include information acquired from a memory or received from an external device.
  • the situational information may include information related to an external situation that may be acquired from a sensor other than an image sensor or from an external device.
  • the situational information may include at least one of noise information surrounding the electronic device, information related to the user's age, information related to the user's preference, or information related to the battery of the electronic device.
  • content information may refer to information related to the original content, the first content.
  • content information may include at least one of the following: the original content itself, the format of the original content (text, image, video), the category to which the original content belongs, or the amount of information contained in the original content.
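  • For illustration only, the categories of context information and content information described above could be represented by simple data structures such as the following; all field names are assumptions, not terms defined in this disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EnvironmentalInformation:
    # Illustrative fields only, mirroring the examples listed above.
    user_to_projection_area_m: Optional[float] = None
    projection_area_m2: Optional[float] = None
    number_of_users: int = 0
    user_ages: list = field(default_factory=list)

@dataclass
class SpatialInformation:
    projection_distance_m: Optional[float] = None
    projection_area: Optional[tuple] = None        # e.g. (x, y, width, height) on the surface
    user_locations: list = field(default_factory=list)
    obstacle_regions: list = field(default_factory=list)

@dataclass
class SituationalInformation:
    ambient_noise_db: Optional[float] = None
    user_age: Optional[int] = None
    user_preferences: dict = field(default_factory=dict)
    battery_level_pct: Optional[float] = None

@dataclass
class ContextInformation:
    environment: Optional[EnvironmentalInformation] = None
    space: Optional[SpatialInformation] = None
    situation: Optional[SituationalInformation] = None

@dataclass
class ContentInformation:
    original_content: object = None
    content_format: str = "text"   # "text", "image", or "video"
    category: Optional[str] = None
    information_amount: Optional[int] = None
```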
  • information contained in the content may be difficult to convey to the user through the screen output by the projector, depending on the projector itself or the surrounding environment. For example, if an obstacle exists in the projection area, the screen containing the content may be obscured by the obstacle, preventing proper transmission of information. Alternatively, the distance between the projection area and the user may be too far, preventing proper transmission of information to the user.
  • various embodiments of the present invention are directed to providing a method and device for generating content to be projected from an electronic device including a projector to smoothly convey information to a user.
  • FIG. 1 is a block diagram of an electronic device (101) within a network environment (100) according to various embodiments of the present disclosure.
  • an electronic device (101) may communicate with an electronic device (102) via a first network (198) (e.g., a short-range wireless communication network), or may communicate with at least one of an electronic device (104) or a server (108) via a second network (199) (e.g., a long-range wireless communication network).
  • the electronic device (101) may communicate with the electronic device (104) via the server (108).
  • the electronic device (101) may include a processor (120), a memory (130), an input module (150), an audio output module (155), a display module (160), an audio module (170), a sensor module (176), an interface (177), a connection terminal (178), a haptic module (179), a camera module (180), a power management module (188), a battery (189), a communication module (190), a subscriber identification module (196), or an antenna module (197).
  • the electronic device (101) may omit at least one of these components (e.g., the connection terminal (178)), or may have one or more other components added.
  • some of these components (e.g., the sensor module (176), the camera module (180), or the antenna module (197)) may be integrated into one component (e.g., the display module (160)).
  • the processor (120) may, for example, execute software (e.g., a program (140)) to control at least one other component (e.g., a hardware or software component) of the electronic device (101) connected to the processor (120) and perform various data processing or operations.
  • the processor (120) may store commands or data received from other components (e.g., a sensor module (176) or a communication module (190)) in a volatile memory (132), process the commands or data stored in the volatile memory (132), and store result data in a non-volatile memory (134).
  • the processor (120) may include a main processor (121) (e.g., a central processing unit or an application processor) or an auxiliary processor (123) (e.g., a graphics processing unit, a neural processing unit (NPU), an image signal processor, a sensor hub processor, or a communication processor) that can operate independently or together with the main processor (121).
  • the auxiliary processor (123) may be configured to use less power than the main processor (121) or to be specialized for a given function.
  • the auxiliary processor (123) may be implemented separately from the main processor (121) or as a part thereof.
  • the auxiliary processor (123) may control at least a portion of functions or states associated with at least one component (e.g., a display module (160), a sensor module (176), or a communication module (190)) of the electronic device (101), for example, on behalf of the main processor (121) while the main processor (121) is in an inactive (e.g., sleep) state, or together with the main processor (121) while the main processor (121) is in an active (e.g., application execution) state.
  • the auxiliary processor (123) may include a hardware structure specialized for processing artificial intelligence models.
  • the artificial intelligence models may be generated through machine learning. This learning can be performed, for example, in the electronic device (101) itself where the artificial intelligence model is executed, or can be performed through a separate server (e.g., server (108)).
  • the learning algorithm can include, for example, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, but is not limited to the examples described above.
  • the artificial intelligence model can include a plurality of artificial neural network layers.
  • the artificial neural network can be one of a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a bidirectional recurrent deep neural network (BRDNN), deep Q-networks, or a combination of two or more of the above, but is not limited to the examples described above.
  • the artificial intelligence model can additionally or alternatively include a software structure.
  • the memory (130) can store various data used by at least one component (e.g., processor (120) or sensor module (176)) of the electronic device (101).
  • the data can include, for example, software (e.g., program (140)) and input data or output data for commands related thereto.
  • the memory (130) can include volatile memory (132) or non-volatile memory (134).
  • the program (140) may be stored as software in the memory (130) and may include, for example, an operating system (142), middleware (144), or an application (146).
  • the input module (150) can receive commands or data to be used in a component of the electronic device (101) (e.g., a processor (120)) from an external source (e.g., a user) of the electronic device (101).
  • the input module (150) can include, for example, a microphone, a mouse, a keyboard, a key (e.g., a button), or a digital pen (e.g., a stylus pen).
  • the audio output module (155) can output audio signals to the outside of the electronic device (101).
  • the audio output module (155) can include, for example, a speaker or a receiver.
  • the speaker can be used for general purposes, such as multimedia playback or recording playback.
  • the receiver can be used to receive incoming calls. In one embodiment, the receiver can be implemented separately from the speaker or as part of the speaker.
  • the display module (160) can visually provide information to an external party (e.g., a user) of the electronic device (101).
  • the display module (160) may include, for example, a display, a holographic device, or a projector and a control circuit for controlling the device.
  • the display module (160) may include a touch sensor configured to detect a touch, or a pressure sensor configured to measure the intensity of a force generated by the touch.
  • the audio module (170) can convert sound into an electrical signal, or vice versa, convert an electrical signal into sound. According to one embodiment, the audio module (170) can acquire sound through the input module (150), and can output sound through the audio output module (155) or through an external electronic device (e.g., the electronic device (102), such as a speaker or headphones) directly or wirelessly connected to the electronic device (101).
  • the sensor module (176) can detect the operating status (e.g., power or temperature) of the electronic device (101) or the external environmental status (e.g., user status) and generate an electrical signal or data value corresponding to the detected status.
  • the sensor module (176) can include, for example, a gesture sensor, a gyro sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an IR (infrared) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
  • the interface (177) may support one or more designated protocols that may be used to directly or wirelessly connect the electronic device (101) with an external electronic device (e.g., the electronic device (102)).
  • the interface (177) may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, an SD card interface, or an audio interface.
  • connection terminal (178) may include a connector through which the electronic device (101) may be physically connected to an external electronic device (e.g., electronic device (102)).
  • the connection terminal (178) may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
  • the haptic module (179) can convert electrical signals into mechanical stimuli (e.g., vibration or movement) or electrical stimuli that a user can perceive through tactile or kinesthetic sensations.
  • the haptic module (179) can include, for example, a motor, a piezoelectric element, or an electrical stimulation device.
  • the camera module (180) can capture still images and videos.
  • the camera module (180) may include one or more lenses, image sensors, image signal processors, or flashes.
  • the power management module (188) can manage power supplied to the electronic device (101).
  • the power management module (188) can be implemented as, for example, at least a part of a power management integrated circuit (PMIC).
  • a battery (189) may power at least one component of the electronic device (101).
  • the battery (189) may include, for example, a non-rechargeable primary battery, a rechargeable secondary battery, or a fuel cell.
  • the communication module (190) may support the establishment of a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device (101) and an external electronic device (e.g., electronic device (102), electronic device (104), or server (108)), and the performance of communication through the established communication channel.
  • the communication module (190) may operate independently from the processor (120) (e.g., application processor) and may include one or more communication processors that support direct (e.g., wired) communication or wireless communication.
  • the communication module (190) may include a wireless communication module (192) (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module (194) (e.g., a local area network (LAN) communication module, or a power line communication module).
  • the corresponding communication module can communicate with an external electronic device (104) via a first network (198) (e.g., a short-range communication network such as Bluetooth, wireless fidelity (WiFi) direct, or infrared data association (IrDA)) or a second network (199) (e.g., a long-range communication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or WAN)).
  • the wireless communication module (192) can verify or authenticate the electronic device (101) within a communication network such as the first network (198) or the second network (199) by using subscriber information (e.g., an international mobile subscriber identity (IMSI)) stored in the subscriber identification module (196).
  • the wireless communication module (192) can support a 5G network following a 4G network and next-generation communication technology, for example, new radio (NR) access technology.
  • the NR access technology can support high-speed transmission of high-capacity data (eMBB (enhanced mobile broadband)), minimization of terminal power and connection of multiple terminals (mMTC (massive machine type communications)), or high reliability and low latency (URLLC (ultra-reliable and low-latency communications)).
  • the wireless communication module (192) can support, for example, a high-frequency band (e.g., mmWave band) to achieve a high data transmission rate.
  • the wireless communication module (192) can support a peak data rate (e.g., 20 Gbps or more) for eMBB realization, a loss coverage (e.g., 164 dB or less) for mMTC realization, or a U-plane latency (e.g., 0.5 ms or less for downlink (DL) and uplink (UL), or 1 ms or less for round trip) for URLLC realization.
  • the antenna module (197) may form a mmWave antenna module.
  • the mmWave antenna module may include a printed circuit board, an RFIC disposed on or adjacent to a first side (e.g., a bottom side) of the printed circuit board and capable of supporting a designated high-frequency band (e.g., a mmWave band), and a plurality of antennas (e.g., an array antenna) disposed on or adjacent to a second side (e.g., a top side or a lateral side) of the printed circuit board and capable of transmitting or receiving signals in the designated high-frequency band.
  • At least some of the above components can be interconnected and exchange signals (e.g., commands or data) with each other via a communication method between peripheral devices (e.g., a bus, GPIO (general purpose input and output), SPI (serial peripheral interface), or MIPI (mobile industry processor interface)).
  • commands or data may be transmitted or received between the electronic device (101) and an external electronic device (104) via a server (108) connected to a second network (199).
  • Each of the external electronic devices (102 or 104) may be the same or a different type of device as the electronic device (101).
  • all or part of the operations executed in the electronic device (101) may be executed in one or more of the external electronic devices (102, 104, or 108). For example, when the electronic device (101) is to perform a certain function or service automatically or in response to a request from a user or another device, the electronic device (101) may, instead of or in addition to executing the function or service itself, request one or more external electronic devices to perform the function or at least a part of the service.
  • One or more external electronic devices that receive the request may execute at least a portion of the requested function or service, or an additional function or service related to the request, and transmit the result of the execution to the electronic device (101).
  • the electronic device (101) may process the result as is or additionally and provide it as at least a portion of a response to the request.
  • cloud computing, distributed computing, mobile edge computing (MEC), or client-server computing technology may be used, for example.
  • the electronic device (101) may provide an ultra-low latency service by using distributed computing or mobile edge computing, for example.
  • the external electronic device (104) may include an Internet of Things (IoT) device.
  • the server (108) may be an intelligent server utilizing machine learning and/or a neural network.
  • the external electronic device (104) or the server (108) may be included in the second network (199).
  • the electronic device (101) can be applied to intelligent services (e.g., smart home, smart city, smart car, or healthcare) based on 5G communication technology and IoT-related technology.
  • Electronic devices may take various forms. Electronic devices may include, for example, portable communication devices (e.g., smartphones), computer devices, portable multimedia devices, portable medical devices, cameras, wearable devices, or home appliances. Electronic devices according to the embodiments of this document are not limited to the aforementioned devices.
  • terms such as “first,” “second,” or “1st” or “2nd” may be used merely to distinguish one component from another, and do not limit the components in any other respect (e.g., importance or order).
  • when a component (e.g., a first component) is referred to as being “coupled” or “connected,” with or without the term “functionally” or “communicatively,” to another component (e.g., a second component), it means that the component may be connected to the other component directly, wirelessly, or via a third component.
  • the term “module” used in various embodiments of this document may include a unit implemented in hardware, software, or firmware, and may be used interchangeably with terms such as logic, logic block, component, or circuit.
  • a module may be an integral component, or a minimum unit or part of such a component that performs one or more functions.
  • a module may be implemented in the form of an application-specific integrated circuit (ASIC).
  • Various embodiments of the present document may be implemented as software (e.g., a program (140)) including one or more instructions stored in a storage medium (e.g., an internal memory (136) or an external memory (138)) readable by a machine (e.g., an electronic device (101)).
  • for example, a processor (e.g., the processor (120)) of the machine (e.g., the electronic device (101)) may invoke at least one of the one or more stored instructions from the storage medium and execute it.
  • the one or more instructions may include code generated by a compiler or code executable by an interpreter.
  • the machine-readable storage medium may be provided in the form of a non-transitory storage medium.
  • ‘non-transitory’ simply means that the storage medium is a tangible device and does not contain signals (e.g., electromagnetic waves), and the term does not distinguish between cases where data is stored semi-permanently or temporarily on the storage medium.
  • each component (e.g., a module or a program) of the above-described components may include one or more entities, and some of the entities may be separated and placed in other components.
  • one or more components or operations of the aforementioned components may be omitted, or one or more other components or operations may be added.
  • a plurality of components (e.g., modules or programs) may be integrated into a single component.
  • the integrated component may perform one or more functions of each of the plurality of components identically or similarly to those performed by the corresponding component among the plurality of components prior to the integration.
  • FIG. 2 is a block diagram illustrating components of an electronic device according to various embodiments of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of an electronic device including a projector projecting a screen according to one embodiment of the present disclosure. At least some of the operations related to FIGS. 1 through 24 below may be described with reference to FIG. 3.
  • the sensor unit (210) may obtain environmental information about the surrounding environment of the location where the projector (220) is placed.
  • the environmental information may include information related to objects located around the projector.
  • the environmental information may include information related to at least one of noise or brightness around the projector.
  • the environmental information may include at least one of user age information or user preference information related to a user located around the projector.
  • the environmental information may include battery information related to an electronic device (200) including a projector (e.g., information about the remaining power stored in the battery, information about the battery charge status).
  • the sensor unit (210) may include an image sensor.
  • At least one processor (240) may obtain environmental information including spatial information obtained through the image sensor.
  • the spatial information may include information related to the space where the electronic device is placed.
  • the spatial information may include a projection distance (e.g., a projection distance (330) in FIG. 3), which is the distance between the projector (220) and the surface onto which a screen including content is projected.
  • the spatial information may include information about a projection area (e.g., a projection area (320) in FIG. 3), which is an area onto which a screen including content is projected by the projector (220).
  • the spatial information may include information related to a location of a user (e.g., a user (340) in FIG. 3) located around the projector (220) and recognized through an image sensor.
  • the spatial information may include obstacle information, which is information recognized through an image sensor about an obstacle located between the projector (220) and the projection area.
  • the obstacle information may indicate, for example, an area including an area in the projection area where light emitted from the projector (220) is blocked by an obstacle.
  • the obstacle information may indicate, for example, an area including an area blocked by an obstacle when a user looks at the projection area.
  • the electronic device (200) can sense information about the surroundings of the electronic device through the sensor unit (210).
  • the electronic device can recognize information about the sensing area (360) through sensors such as an image sensor, an ultrasonic sensor, an infrared sensor, a radar sensor, a laser sensor, and/or a lidar sensor.
  • the electronic device can recognize a user (340) within the sensing area (360) and collect location information of the user.
  • the electronic device can determine the distance (350) between the user (340) within the sensing area (360) and the projection area (320).
  • the distance (350) between the user and the projection area may refer to the distance from the projection area (320) to the closest user.
  • the distance (350) between the user and the projection area may refer to the distance from the projection area (320) to the farthest user.
  • the distance (350) between the user and the projection area may refer to the distance from the user to the center of the projection area (320).
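  • A short Python sketch of the interpretations above, assuming 2D user positions and a projection-area center expressed in metres (the coordinate representation is an assumption, not a detail from this disclosure).

```python
import math

def user_projection_distances(user_positions, projection_area_center):
    """Illustrative only: each entry of dists is a user-to-center distance;
    the minimum and maximum correspond to the closest and farthest user."""
    dists = [math.hypot(u[0] - projection_area_center[0],
                        u[1] - projection_area_center[1])
             for u in user_positions]
    return {"closest_user": min(dists), "farthest_user": max(dists)}

# Example: two users at (1.0, 2.0) and (3.5, 0.5), projection area centred at (0, 0).
print(user_projection_distances([(1.0, 2.0), (3.5, 0.5)], (0.0, 0.0)))
```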
  • the projector (220) may project a screen containing content onto a projection surface of an external object.
  • the projector (220) may project a screen containing content by emitting light onto the projection surface of the external object.
  • the projection surface of the external object may include a wall with a white background to ensure that the projected screen is easily recognized.
  • the projection surface of the external object may include a screen on which the projector may project a screen containing content.
  • the projection surface may include a surface onto which light emitted from the projector (220) reaches.
  • the characteristics of the projection surface are not limited to the examples described above.
  • the memory (230) may store instructions.
  • the instructions may be executed by at least one processor (240).
  • the instructions may include code for the electronic device (200) to perform various data processing or calculations.
  • the memory (230) may store instructions for the electronic device (200) to perform data processing or calculations according to at least one of the embodiments described with reference to FIGS. 1 to 24.
  • the at least one processor (240) may execute the instructions stored in the memory (230) to control components of the electronic device (200) or perform calculations.
  • the memory (230) may store data obtained by at least one processor (240) performing a calculation.
  • the memory (230) may store data obtained by the electronic device (200) performing data processing or calculation according to at least one of the embodiments described below with reference to FIGS. 1 to 24.
  • the memory (230) may store at least one of context information and content information.
  • the context information may include environmental information about the surrounding environment of the projector (220).
  • the context information may include information about the content display area.
  • the context information may include spatial information related to an object located within the space surrounding the electronic device (200).
  • the electronic device (200) may identify spatial information through image recognition of an image acquired through an image sensor.
  • the context information may include situational information related to a situation in which the projector (220) projects a screen including content.
  • the content information may include information about content to be displayed on a screen projected and displayed on a projection surface by the projector (220).
  • the content information may include at least one of information about the original content to be displayed, such as the content itself, the format of the content (text, image, video), the category to which the content belongs, or the amount of information included in the content.
  • the operation of the electronic device (200) can be understood as being performed by executing instructions stored in a memory (230) by at least one processor (240).
  • the electronic device (200) may execute an application that displays a screen including first content via the projector (220).
  • the electronic device (200) may execute an application that displays content for presentation purposes.
  • the electronic device (200) may execute an application that displays content linked to at least one of a computer or a mobile electronic device.
  • the electronic device may perform the operation with reference to operation 410 of FIG. 4 .
  • the electronic device (200) may determine a projection area including an area where light emitted from the projector (220) is irradiated onto the projection surface based on the execution of an application that displays a screen including first content.
  • the electronic device (200) may include at least one camera capable of taking pictures in a direction in which light is emitted from the projector (220).
  • the electronic device (200) may determine the projection area based on an image captured by the at least one camera.
  • the electronic device (200) may obtain a depth map based on an image captured by a stereo camera.
  • the electronic device (200) may obtain the depth map based on distance information acquired through a time of flight (TOF) sensor.
  • the electronic device (200) may determine the projection area based on a depth value included in the depth map and a range (e.g., an angular range) in which light is emitted from the projector (220). For example, the electronic device (200) may receive information about at least one of the color, curvature, step, material, or pattern of the projection surface of an external object from the sensor unit (210) or an external device. The electronic device (200) may determine a projection area based on at least one of the color, curvature, step, material, or pattern of the projection surface of the external object. The electronic device may perform an operation with reference to operation 420 of FIG. 4.
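  • A hedged sketch of how such a projection area could be derived from a depth map: the thresholds, the flatness test, and the assumption that the depth map covers exactly the projector's angular range are illustrative choices, not details specified in this disclosure.

```python
import numpy as np

def projection_mask(depth_map, max_depth_m=5.0, plane_tolerance_m=0.05):
    """Illustrative only: assume the depth map covers exactly the projector's
    angular range, and mark pixels that (a) lie within a usable projection
    distance and (b) belong to the dominant flat surface (depth close to the
    median depth). Thresholds are arbitrary assumptions."""
    usable = (depth_map > 0) & (depth_map <= max_depth_m)
    dominant_plane_depth = np.median(depth_map[usable])
    usable &= np.abs(depth_map - dominant_plane_depth) < plane_tolerance_m
    return usable

# Example: a synthetic flat wall at 2.5 m with a box protruding to 1.0 m.
depth = np.full((120, 160), 2.5)
depth[40:80, 60:100] = 1.0
mask = projection_mask(depth)
print("usable fraction of the projector's field of view:", round(float(mask.mean()), 3))
```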
  • the electronic device (200) may determine a content display area within a projection area (e.g., the projection area (320) of FIG. 3) based on the executed application and environmental information.
  • the content display area may include an area where content is displayed.
  • the electronic device may perform the operation with reference to operation 430 of FIG. 4.
  • the electronic device (200) may obtain data including second content generated through a machine-learned artificial intelligence model from content information related to first content based on context information including at least one of a content display area or environment information based on an executed application. For example, the electronic device (200) may request the artificial intelligence model to generate the second content based on information regarding the determined content display area and content information regarding the first content, which is the original content. For example, the electronic device (200) may generate a prompt to generate the second content from the first content based on the context information. The electronic device (200) may request the artificial intelligence model to generate the second content based on the generated prompt. The electronic device may perform the operation with reference to operation 440 of FIG. 4.
  • the electronic device (200) can project a screen including second content onto a projection surface via a projector (220).
  • the electronic device can perform the operation with reference to operation 450 of FIG. 4.
  • FIG. 4 is a flowchart (400) illustrating a process in which an electronic device (200) including a projector (220) outputs content through the projector (220), according to one embodiment of the present disclosure.
  • each operation may be performed sequentially, but is not necessarily performed sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • an electronic device may execute an application to display a screen including first content through a projector (e.g., the display module (160) of FIG. 1, the projector (220) of FIG. 2).
  • the first content may include original content.
  • the first content may include original content that is a target generated through a machine-learned artificial intelligence model.
  • the application may be executed by a user input.
  • the user input may include a touch input to the electronic device (200).
  • the application may be executed by a user input to display a screen including content through the electronic device (200).
  • the electronic device (200) may execute an application to display content for presentation.
  • the electronic device (200) may execute an application to display content linked to at least one of a computer or a mobile electronic device.
  • an electronic device may determine a projection area including an area where light emitted from a projector (e.g., the display module (160) of FIG. 1, the projector (220) of FIG. 2) is irradiated onto a projection surface, based on an executed application.
  • the electronic device may determine the projection area based on information related to the projection surface of an external object. For example, the electronic device (200) may collect information about the projection surface of the external object through a sensor (e.g., the sensor unit (210) of FIG. 2).
  • the electronic device (200) may detect at least one of a curvature, a step, a pattern, a color, or a material of the projection surface to determine whether the projection surface is a surface on which a screen can be displayed.
  • the electronic device (200) may physically emit light based on sensed information, and determine an area where the emitted light is irradiated on the projection surface to determine a projection area (e.g., a projection area (320) of FIG. 3).
  • the electronic device (200) may determine the projection area as a rectangular shape.
  • the electronic device may determine the projection area as a rectangular shape having the same ratio as a display displayed on a mobile electronic device or a computer device.
  • the electronic device may determine the projection area as an area having a width-to-height ratio of 16:9, 16:10, or 4:3.
  • the shape of the projection area is not limited thereto.
  • the electronic device may determine the projection area differently depending on the arrangement of the projection surface, the shape of the projection surface, or the characteristics of the projector (220).
  • an electronic device may determine a content display area within a projecting area based on an executed application and environmental information.
  • the content display area may include an area excluding an area in which light output from the projector (220) is blocked by an obstacle between the projector (220) and the projection area (e.g., the projection area (320) of FIG. 3).
  • the electronic device (200) may identify an area in which a person or object is captured through a camera, and determine an area excluding the part of the projection area corresponding to the identified area as a content display area. For example, if a person or an object is positioned between the projector (220) and the projection area, and light is not irradiated to some areas of the projection area, the electronic device (200) may determine an area within the projection area where light is irradiated, excluding the person or object, as a content display area.
  • the electronic device (200) may determine an area including an area excluding an obstacle between the user and the projection area (e.g., the projection area (320) of FIG. 3) as a content display area. For example, if there is an obstacle between a user looking at the projection surface and the projection area, which is the projection surface, the electronic device (200) may set the area excluding the obstacle as a content display area. For example, if there is no person between a projector (220) hanging from the ceiling and a screen onto which light is irradiated from the projector (220), and a person is standing between the user looking at the screen and the screen, the electronic device (200) may determine an area excluding the person between the user and the screen as a content display area.
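  • One possible, purely illustrative way to exclude obstacle regions is sketched below: the content display area is chosen as the widest vertical strip of the projection area whose columns are not covered by any obstacle rectangle. The rectangle representation and the strip heuristic are assumptions.

```python
def content_display_area(projection_rect, obstacle_rects):
    """Illustrative: choose the widest vertical strip of the projection area
    whose columns are not covered by any obstacle.
    Rectangles are (x, y, width, height) tuples in projection-surface pixels."""
    px, py, pw, ph = projection_rect
    blocked = [False] * pw
    for ox, oy, ow, oh in obstacle_rects:
        for x in range(max(ox, px), min(ox + ow, px + pw)):
            blocked[x - px] = True
    best = (px, py, 0, ph)          # falls back to a zero-width area if fully blocked
    run_start, run_len = 0, 0
    for i, is_blocked in enumerate(blocked + [True]):   # sentinel closes the last run
        if not is_blocked:
            if run_len == 0:
                run_start = i
            run_len += 1
        else:
            if run_len > best[2]:
                best = (px + run_start, py, run_len, ph)
            run_len = 0
    return best

# Example: a 160-wide projection area with a person blocking columns 60..99.
print(content_display_area((0, 0, 160, 90), [(60, 0, 40, 90)]))
```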
  • the environmental information may include information receivable from a sensor (210) or an external device regarding the environment surrounding the electronic device (200).
  • the environmental information may include at least one of obstacle information between a projector (220) and a projection area (e.g., a projection area (320) of FIG. 3) collected through an image sensor (e.g., a sensor module (176) of FIG. 1, a sensor unit (210) of FIG. 2), or obstacle information between a user (e.g., a user (340) of FIG. 3) and a projection area (e.g., a projection area (320) of FIG. 3).
  • an electronic device may obtain data including second content generated through an artificial intelligence model.
  • the electronic device (200) may obtain data including second content generated through a machine-learned artificial intelligence model from content information related to first content based on context information including a content display area.
  • the electronic device (200) may obtain data regarding second content generated to be displayed in the content display area from content information related to the first content, which is the original content.
  • the electronic device (200) may obtain data including second content summarized based on at least one of the size or shape of the content display area through content information related to the first content composed of text.
  • the electronic device (200) may obtain data including second content edited to be displayed within a content display area for first content composed of images through content information.
  • the electronic device (200) may obtain data including second content edited to be displayed within a content display area for first content composed of videos through content information.
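  • For example, a summarization target could be derived from how much text fits legibly in the content display area; the heuristic below (minimum legible character height proportional to viewing distance) is an assumption made purely for illustration.

```python
def summarization_target_chars(display_w_m, display_h_m, viewing_distance_m,
                               legibility_ratio=0.005, aspect=0.6, line_spacing=1.4):
    """Rough, assumed heuristic: the minimum legible character height grows with
    viewing distance (height = legibility_ratio * distance). Returns how many
    characters fit in the content display area at that character size."""
    char_h = legibility_ratio * viewing_distance_m   # metres
    char_w = char_h * aspect
    chars_per_line = int(display_w_m / char_w)
    lines = int(display_h_m / (char_h * line_spacing))
    return max(chars_per_line, 0) * max(lines, 0)

# Example: a 1.2 m x 0.7 m content display area viewed from 4 m away.
print(summarization_target_chars(1.2, 0.7, 4.0))
```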
  • the electronic device (200) may obtain data including second content generated through a machine-learned artificial intelligence model from content information related to the first content based on context information including environmental information.
  • the electronic device (200) may obtain data including second content generated through a machine-learned artificial intelligence model from content information based on visibility information.
  • the electronic device (200) may obtain data regarding second content generated by summarizing text included in original content based on visibility information indicating that it is difficult for users to ensure visibility.
  • the electronic device (200) may obtain data regarding second content generated by enlarging original content based on visibility information indicating that it is difficult for users to ensure visibility.
  • the electronic device (200) can obtain data regarding second content generated by synthesizing original content based on visibility information indicating that it is difficult for users to ensure visibility.
  • the electronic device (200) can obtain data including second content through a machine-learned artificial intelligence model.
  • the electronic device (200) can obtain data including second content through a deep learning technology that creates new content based on learned content.
  • Deep learning technology may include Gen AI (generative AI (artificial intelligence)) and LLM (large language model).
  • the electronic device can obtain data regarding the second content by using a machine-learned artificial intelligence model that creates new content based on a prompt.
  • the electronic device can determine a prompt requesting the artificial intelligence model to create content based on context information.
  • the electronic device can obtain data regarding the second content through the artificial intelligence model by inputting the determined prompt into the artificial intelligence model.
  • an electronic device may project a screen including second content onto a projection surface.
  • the electronic device (200) may project a screen including second content onto the projection surface through a projector (220).
  • the electronic device (200) may project a screen including second content onto an area corresponding to a projection area.
  • a content display area including second content may be included in the projection area. For example, if the electronic device (200) sets an area excluding obstacles as a content display area, the electronic device (200) may project the second content onto an area corresponding to the content display area.
  • FIG. 5 is a flowchart (500) illustrating a process for generating content using a prompt by an electronic device including a projector, according to one embodiment of the present disclosure.
  • the operations may be performed sequentially, but are not necessarily sequential. For example, the order of the operations may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 5 may be performed subsequent to operation 430 of FIG. 4 .
  • the electronic device (200) may generate a prompt including information related to properties of the second content based on at least one of context information and content information.
  • the electronic device (200) may generate a prompt based on the projection distance corresponding to context information (e.g., the projection distance (330) of FIG. 3) and the distance between the user and the projection area (e.g., 350 of FIG. 3). For example, if the projection distance (e.g., 330 of FIG. 3) or the distance between the user and the projection area (e.g., 350 of FIG. 3) is greater than a threshold distance, the electronic device (200) may determine that visibility cannot be secured and generate a prompt to generate summarized content that reduces the number of characters.
  • the electronic device (200) may generate a prompt to generate content that provides an enlarged version of a portion determined to be important. For example, the electronic device (200) may generate a prompt to reduce the amount of text included in the content and increase the font size of the remaining characters as the distance between the user and the projection area increases. For example, if the projection distance or the distance between the user and the projection area is greater than a threshold distance, the electronic device (200) may generate a prompt to generate summarized or expanded content centered on content reflecting the user's preference information. For example, the electronic device (200) may generate a prompt to crop an image or video to the content related to the user's preference information. For example, the electronic device (200) may generate a prompt to include phrases related to the user's preference information or to reduce the amount of text.
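  • As an illustration of mapping viewing distance to prompt instructions such as summarizing or enlarging, the following minimal Python sketch (hypothetical; the threshold value and the instruction wording are assumptions) shows one possible rule.

```python
# Minimal sketch (illustrative only) of distance-based prompt instructions.
def build_distance_prompt(first_content_text: str,
                          viewer_distance_m: float,
                          threshold_m: float = 3.0) -> str:
    instructions = []
    if viewer_distance_m >= threshold_m:
        instructions.append("Summarize the text so that it contains fewer characters.")
        instructions.append("Increase the font size of the remaining text.")
        instructions.append("Enlarge the portion judged to be most important.")
    else:
        instructions.append("Keep the content unchanged.")
    return "\n".join(instructions) + "\n---\n" + first_content_text

# Example: a viewer 5 m away triggers the summarize/enlarge instructions.
print(build_distance_prompt("Quarterly results: revenue grew 12% ...", 5.0))
```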
  • the electronic device (200) may generate a prompt based on the projection area corresponding to the context information (e.g., the projection area (320) of FIG. 3). For example, if there is a curve or a step on the projection surface of an external object corresponding to the projection area, the electronic device (200) may generate a prompt to generate content by using at least one of a method of enlarging distant content or a method of providing it by increasing its clarity. For example, if there is a pattern on the projection surface of an external object corresponding to the projection area, the electronic device (200) may generate a prompt to generate content by using at least one of a method of increasing the color contrast intensity of the background or a method of outlining or black and white processing.
  • the electronic device (200) may generate a prompt to generate content by adjusting at least one of brightness, contrast, or color temperature of the non-white portion. For example, the electronic device (200) may compare the projecting area with the area where the first content is displayed, and if there is an area where the first content is not displayed, it may generate a prompt to generate content related to the first content in an area excluding the first content.
  • the electronic device (200) may generate a prompt based on at least one of obstacle information or a content display area corresponding to context information. For example, if at least a portion of the projection area is obscured by an obstacle, the electronic device (200) may generate a prompt to generate content in an area corresponding to an unobstructed area of the projection area. For example, the electronic device (200) may generate a prompt to generate content corresponding to an area corresponding to the content display area.
  • the electronic device (200) may generate a prompt based on user information corresponding to context information. For example, if the user who is farthest among the plurality of users is at a distance greater than a threshold distance, the electronic device (200) may generate a prompt to generate second content by enlarging or summarizing the content. For example, if a user performs a set action, the electronic device (200) may generate a prompt to generate enlarged or summarized content.
  • the set action may be an action related to visibility, such as the user approaching the projection area or leaning their face toward the projection area.
  • the electronic device (200) may generate a prompt based on noise information surrounding the electronic device (200), which corresponds to context information. For example, if the ambient noise exceeds a threshold value, the electronic device (200) may generate a prompt to reduce the number of words included in the voice data or generate content with at least one of the volume or sound changed so that users can hear it even in an environment where ambient noise exists.
  • the electronic device (200) may generate a prompt based on information about the electronic device (200) corresponding to context information. For example, if the electronic device (200) is a movable device, a prompt may be generated to generate content that can be moved from a first location to a second location and then projected onto a projection area at the second location. For example, if the battery information of the electronic device (200) indicates that the battery is below a threshold value, the electronic device (200) may generate a prompt to generate content that reduces the amount of text, reduces the number of images, or reduces at least one of the resolution or brightness to reduce power consumption. For example, if the battery of the electronic device (200) is above a threshold value or is being charged, a prompt may be generated to generate content that increases at least one of the resolution or brightness.
  • the electronic device (200) may generate a prompt including information related to the properties of the second content based on at least one of the content display area or the content information. For example, the electronic device (200) may generate a prompt to generate second content from the first content in accordance with the content display area based on the content display area determined in operation 430 of FIG. 4 and content information related to the first content, which is the original content. For example, if the content display area is smaller than the area where the first content is displayed, the electronic device (200) may generate a prompt to generate second content that is a reduced or summarized version of the first content in accordance with the content display area based on the content display area and the content information.
  • the electronic device (200) may generate a prompt to generate second content in accordance with the content display area based on the content display area and the content information.
  • the second content may include the first content displayed in the first area and content related to the first content.
  • Content associated with the first content may be displayed in a second area different from the first area.
  • the projection area may include the first area and the second area.
  • the electronic device (200) may generate a prompt including information related to the properties of the second content based on the visibility information. For example, if the electronic device (200) determines that visibility is secured, it may output content identical to the first content, which is the original content. For example, if the electronic device (200) determines that visibility is not secured, it may generate a prompt to generate second content from the first content. The electronic device (200) may generate a prompt to summarize the text of the first content. The electronic device (200) may generate a prompt to change the font of the first content. The electronic device (200) may generate a prompt to enlarge an image or video of the first content.
  • the electronic device (200) may generate a prompt to synthesize a plurality of images or videos included in the first content.
  • a process for generating a prompt by determining whether visibility is secured by an electronic device (200) is described in FIGS. 9 and 10.
  • the electronic device (200) may obtain data including second content by inputting a prompt to an artificial intelligence model.
  • the electronic device (200) may obtain data including second content by inputting a prompt to a machine-learned artificial intelligence model.
  • the machine-learned artificial intelligence model may be a generative AI.
  • the electronic device (200) may input the generated prompt, as a single unit of input data, into the machine-learned artificial intelligence model and obtain data including the second content as output data.
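  • As an illustration of passing a single prompt to a generative model and receiving data including the second content, the following minimal Python sketch (hypothetical; GenerativeModel and EchoModel are placeholders, not a real API) shows one possible call flow.

```python
# Minimal sketch (assumption, not the disclosed implementation) of sending a
# prompt to a generative model and wrapping the output as content data.
from typing import Protocol

class GenerativeModel(Protocol):
    def generate(self, prompt: str) -> str: ...

def obtain_second_content(model: GenerativeModel, prompt: str) -> dict:
    """Send the prompt as one input unit and return data including the result."""
    generated = model.generate(prompt)
    return {"type": "second_content", "body": generated, "source_prompt": prompt}

class EchoModel:
    """Stand-in model so the sketch runs without a real LLM."""
    def generate(self, prompt: str) -> str:
        return f"[generated from {len(prompt)} prompt characters]"

print(obtain_second_content(EchoModel(), "Summarize the slide text in 20 words."))
```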
  • FIG. 6 is a flowchart (600) illustrating a process for identifying an obstacle and outputting content when an obstacle is placed between a projection area and a projector, according to an embodiment of the present disclosure.
  • each operation may be performed sequentially, but is not necessarily performed sequentially. For example, the order of each operation may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 6 may be performed subsequent to operation 420 of FIG. 4. At least some operations of FIG. 6 will be described below with reference to FIGS. 7 and 8.
  • FIG. 7 and FIG. 8 are diagrams illustrating an example of second content generated from first content based on an area in which an obstacle is identified by an electronic device, according to an embodiment of the present disclosure.
  • the electronic device (200) can identify an obstacle.
  • the electronic device (200) can identify an obstacle placed between a projection area (e.g., the projection area (320) of FIG. 3) and a projector (e.g., the electronic device (310) including the projector of FIG. 3).
  • if an object is located between the projector (220) and the projection area included in the projection surface of an external object (e.g., the projection area (320) of FIG. 3), the electronic device (200) can identify the object as an obstacle.
  • if a person is located between the projector (220) and the projection area, the electronic device (200) can identify the person as an obstacle.
  • the electronic device (200) can identify the obstacle through the sensor unit (210).
  • the electronic device (200) can identify an obstacle placed between a user (e.g., the user (340) of FIG. 3) and a projection area (e.g., the projection area (320) of FIG. 3). For example, if an obstacle exists in the direction in which the user (e.g., the user (340) of FIG. 3) views the projection surface of an external object, the electronic device (200) can identify the obstacle through environmental information including user information.
  • the electronic device (200) may determine a content display area as an area that excludes an area including an obstacle from the projecting area.
  • the electronic device (200) may determine a content display area by excluding an area including an identified obstacle from the projecting area. For example, in situation 701 of FIG. 7, if no obstacle is identified in the projecting area (711), the electronic device (200) may determine a content display area (713) as the same area as the projecting area (711). In situation 703 of FIG. 7, if an obstacle (735) is identified in the projecting area (731), the electronic device (200) may determine a content display area (733) as an area that excludes the obstacle (735) from the projecting area (731).
  • the electronic device (200) can determine a content display area (813) as an area excluding the obstacle (815) in the projecting area (811).
  • the electronic device (200) can determine a content display area (833) as an area excluding the obstacle (835) in the projecting area (831).
  • the electronic device (200) may obtain data including second content generated through an artificial intelligence model.
  • operation 650 may be the same operation as operation 440 of FIG. 4.
  • the second content generated through the artificial intelligence model may be content that summarizes the text of the first content.
  • the content displayed in the content display area (713) illustrated in situation 701 may be the first content, which is the original content.
  • the content displayed in the content display area (733) illustrated in situation 703 may be the second content generated by summarizing the text of the first content.
  • the second content generated through the artificial intelligence model may be an image or video that summarizes, selects, synthesizes, or enlarges an image or video of the first content.
  • the content displayed in the content display area (713) illustrated in situation 701 may be the first content, which is the original content.
  • the content displayed in the content display area (733) illustrated in situation 703 may be second content generated by selecting and enlarging an image of the first content.
  • the second content generated through the artificial intelligence model may be content that displays the first content in the content display area excluding obstacles.
  • the content displayed in the projecting area (811) illustrated in situation 801 may be the original content, the first content.
  • the content displayed in the content display area (833) illustrated in situation 803 may be second content generated by displaying the first content in the content display area (833).
  • the electronic device (200) may generate the second content by rearranging the first content in the content display area (833) excluding obstacles (835).
  • in operation 660, the electronic device (200) may project a screen including second content onto a projection surface.
  • operation 660 may be the same operation as operation 450 of FIG. 4.
  • the electronic device (200) may determine whether the identified obstacle is moving. According to one embodiment, the electronic device (200) may determine whether the identified obstacle is moving through the sensor unit (210). For example, the electronic device (200) may determine whether the obstacle is moving through image recognition of an image acquired through an image sensor. For example, the electronic device (200) may determine whether the obstacle is moving through information received from an external electronic device (102, 104). According to one embodiment, if the obstacle identified by the electronic device (200) is moving, the electronic device (200) may perform operation 680. According to one embodiment, if the obstacle identified by the electronic device (200) is not moving, the electronic device (200) may terminate the process of FIG. 6.
  • the electronic device (200) may change the content display area.
  • the electronic device (200) may change the content display area when the identified obstacle moves. For example, when the identified obstacle (e.g., 735 of FIG. 7, 815 of FIG. 8, 835 of FIG. 8) moves, the electronic device (200) may change the content display area by excluding the area including the identified obstacle within the projecting area.
  • the content display area changed by the electronic device (200) may include an area different from the content display area determined before the obstacle moves.
  • the electronic device (200) may project a screen including third content, which is different from the second content, onto a projection surface based on a changed content display area.
  • the electronic device (200) may obtain data including third content, which is different from the second content, generated through a machine-learned artificial intelligence model from content information related to the first content, based on the changed content display area.
  • the electronic device (200) may obtain data including third content, which is different from the second content, through a machine-learned artificial intelligence model from content information, based on the changed content display area.
  • the third content may be content displayed in a different area from the second content.
  • the electronic device (200) may project a screen including third content, which is different from the second content, onto a projection surface.
  • the electronic device (200) may project the third content on a content display area newly set to match the changed location of the obstacle.
  • the electronic device (200) may perform operation 670 after projecting a screen including third content onto a projection surface.
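  • As an illustration of re-determining the content display area when an obstacle moves and requesting third content for the changed area, the following minimal Python sketch (hypothetical; the sensor, layout, generator, and projector interfaces are assumptions) shows one possible polling loop.

```python
# Minimal sketch (hypothetical helpers) of the loop of FIG. 6: when the
# identified obstacle moves, re-determine the content display area and project
# content laid out for the new area.
import time

def track_obstacle_and_reproject(sensor, layout, generator, projector, poll_s=0.5):
    """sensor.read_obstacles(), layout.exclude(), generator.make_content() and
    projector.project() are assumed interfaces, not part of the disclosure."""
    last_area = None
    while True:
        obstacles = sensor.read_obstacles()
        area = layout.exclude(obstacles)      # projecting area minus obstacles
        if area != last_area:                 # obstacle moved: area changed
            content = generator.make_content(area)
            projector.project(content, area)
            last_area = area
        if not obstacles:                     # nothing blocking: stop polling
            break
        time.sleep(poll_s)
```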
  • FIG. 9 is a flowchart (900) illustrating a process by which an electronic device generates a prompt, according to an embodiment of the present disclosure.
  • the operations may be performed sequentially, but are not necessarily performed sequentially. For example, the order of the operations may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 9 may be one embodiment of operation 540 of FIG. 5. At least some operations of FIG. 9 will be described below with reference to FIGS. 11, 12, and 13.
  • FIG. 11 is a diagram illustrating an example of displaying content based on whether the content is viewable, according to an embodiment of the present disclosure.
  • FIG. 12 is a diagram illustrating an example of obtaining second content by summarizing text included in first content, according to an embodiment of the present disclosure.
  • FIG. 13 is a diagram illustrating an example of obtaining second content by summarizing an image included in first content and changing the font of the text, according to an embodiment of the present disclosure.
  • the electronic device (200) may compare information related to the visibility of content with a threshold.
  • the information related to the visibility of content may include a distance between a projection area (e.g., projection area (320) of FIG. 3) and a user (e.g., 350 of FIG. 3).
  • the electronic device (200) may determine the distance between the projection area and the user based on user information recognized through the sensor unit (210) (e.g., user (340) of FIG. 3) and information about the projection area located on the projection surface of an external object.
  • the information related to the visibility of content may include the area of the projection area (e.g., projection area (320) of FIG. 3).
  • the electronic device (200) may collect information about the area of the projection area through the sensor unit (210).
  • the information related to the visibility of content may include the number of users.
  • information related to the visibility of content may include the user's age information.
  • the electronic device (200) may collect the user's age information from memory (230) or an external device.
  • the user's age information may be organized as the number of users in each age group.
  • the electronic device (200) may obtain visibility information.
  • the electronic device (200) may obtain visibility information through information related to the visibility of content.
  • the electronic device (200) may obtain visibility information indicating that visibility is secured when users' visibility of the content is secured.
  • the electronic device (200) may obtain visibility information indicating that the user's visibility is secured.
  • the electronic device (200) may obtain visibility information indicating that visibility is not secured when users' visibility of the content is not secured. As illustrated in situation 1103 of FIG. 11, when a large number of users use electronic devices in a large space such as a classroom, if the distance between the users and the projection area is greater than a threshold and the number of users is greater than a threshold, the electronic device (200) can obtain visibility information indicating that the user's visibility is not secured.
  • the electronic device (200) may generate a prompt using the acquired visibility information.
  • the electronic device (200) may generate a prompt to generate content identical to the first content, which is the original content, using the visibility information indicating that the visibility of users is secured. For example, as illustrated in situation 1101 of FIG. 11, if the electronic device (200) acquires visibility information indicating that the visibility is secured, the electronic device (200) may generate a prompt to generate content identical to the first content, which is the original content.
  • the electronic device (200) may generate a prompt to generate modified content from the first content using the visibility information indicating that the visibility of users is not secured, for example, as illustrated in situation 1103 of FIG. 11.
  • the electronic device (200) may generate a prompt to summarize the text of the first content, which is the original content. For example, when the electronic device (200) obtains visibility information indicating that visibility is not secured, the electronic device (200) may generate a prompt to summarize text as illustrated in FIG. 12.
  • the electronic device (200) may generate a prompt to recognize text in the first content (1201), which is the original content, and generate second content (1203) with a reduced amount of text.
  • the electronic device (200) may generate a prompt to change the font of the text in the first content (1201), which is the original content, and generate second content (1203) with an enlarged font size.
  • the electronic device (200) may generate a prompt to summarize or select an image or video and change the font of the text as illustrated in FIG. 13.
  • the electronic device (200) may generate a prompt to select an image in the first content (1301), which is the original content, and generate second content (1303) with a changed font of the text.
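  • As an illustration of turning visibility information into a summarize/enlarge prompt of the kind described for FIGS. 12 and 13, the following minimal Python sketch (hypothetical; the prompt wording is an assumption) shows one possible mapping.

```python
# Minimal sketch (wording is an assumption) of mapping visibility information
# to prompt instructions for text or image content.
def visibility_prompt(visibility_secured: bool, content_kind: str) -> str:
    if visibility_secured:
        return "Return the original content unchanged."
    if content_kind == "text":
        return ("Reduce the number of characters by summarizing the text, "
                "then render it with a larger font.")
    if content_kind == "image":
        return ("Select the most relevant image, enlarge it, and re-render "
                "any caption text in a larger font.")
    return "Simplify the content so it remains legible at a distance."

print(visibility_prompt(False, "text"))
```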
  • FIG. 10 is a flowchart (1000) illustrating a process for determining whether content is viewable based on a plurality of conditions, according to one embodiment of the present disclosure.
  • each operation may be performed sequentially, but is not necessarily performed sequentially.
  • the order of each operation may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 10 may be one embodiment of operation 540 of FIG. 5.
  • At least some operations of FIG. 10 will be described below with reference to FIGS. 11, 12, and 13.
  • FIG. 11 is a diagram illustrating an example of displaying content based on whether the content is viewable, according to one embodiment of the present disclosure.
  • FIG. 12 is a diagram illustrating an example of obtaining second content by summarizing text included in first content, according to one embodiment of the present disclosure.
  • FIG. 13 is a diagram illustrating an example of obtaining second content by summarizing an image included in first content and changing the font of text according to one embodiment of the present disclosure.
  • the electronic device (200) may obtain context information.
  • the context information may include at least one of the distance between the projection area and the user, the area of the projection area, the number of users, or the age information of the users.
  • the electronic device (200) may receive the context information from the sensor unit (210), the memory (230), or an external device.
  • operation 1010 may be a part of operation 910 of FIG. 9 .
  • the electronic device (200) may determine whether a distance among context information is greater than or equal to a first threshold.
  • operation 1020 may be a part of operation 910 of FIG. 9.
  • the electronic device (200) may determine whether a distance between a projecting area and a user is greater than or equal to a first threshold. For example, in situation 1101 of FIG. 11, the electronic device (200) may determine that the distance between the projecting area and the user is less than the first threshold. For example, in situation 1103 of FIG. 11, the electronic device (200) may determine that the distance between the projecting area and the user is greater than or equal to the first threshold.
  • the first threshold may be changed according to characteristics and setting information of the electronic device (200). According to one embodiment, if the electronic device (200) determines that a distance is greater than or equal to a first threshold, the electronic device (200) may perform operation 1070. According to one embodiment, the electronic device (200) may perform operation 1030 when it is determined that the distance is less than the first threshold or there is no distance information.
  • the electronic device (200) may determine whether the area of the projecting area, among the context information, is less than or equal to a second threshold. According to one embodiment, operation 1030 may be a part of operation 910 of FIG. 9. For example, the electronic device (200) may determine the area of the projecting area (e.g., the projecting area (320) of FIG. 3) and compare it with the second threshold. For example, the electronic device (200) may set a second threshold for determining whether the area of the projecting area is so small that the user's visibility cannot be secured.
  • if the electronic device (200) determines that the area is less than or equal to the second threshold, it may perform operation 1070.
  • the electronic device (200) may perform operation 1040 when it is determined that the area exceeds the second threshold or there is no area information.
  • the electronic device (200) may determine whether the number of users among the context information is greater than or equal to a third threshold.
  • operation 1040 may be a part of operation 910 of FIG. 9.
  • the electronic device (200) may determine whether the number of users is greater than or equal to the third threshold. For example, in situation 1101 of FIG. 11, the electronic device (200) may determine that the number of users in a room around the electronic device (200) is less than the third threshold. For example, in situation 1103 of FIG. 11, the electronic device (200) may determine that the number of users in a classroom around the electronic device (200) is greater than or equal to the third threshold.
  • the third threshold may be changed according to the characteristics of the electronic device (200), information surrounding the electronic device (200), user information, and setting information. According to one embodiment, the electronic device (200) may perform operation 1070 when it is determined that the number of users is greater than or equal to the third threshold. According to one embodiment, the electronic device (200) may perform operation 1050 when it is determined that the number of users is less than the third threshold or there is no user information.
  • the electronic device (200) may determine whether the age among the context information is equal to or greater than the fourth threshold.
  • operation 1050 may be a part of operation 910 of FIG. 9.
  • the electronic device (200) may determine whether the age of the users is equal to or greater than the fourth threshold. For example, in situation 1101 of FIG. 11, if a college student user is using the electronic device (200), the electronic device (200) may determine that the age of the user is less than the fourth threshold based on the user information. For example, in situation 1103 of FIG. 11, if adult students of various ages are in the classroom, the electronic device (200) may determine that the ages of the users are equal to or greater than the fourth threshold.
  • the age of the users may be determined based on the oldest user.
  • the fourth threshold may be changed depending on the characteristics of the electronic device (200), information surrounding the electronic device (200), user information, and setting information. According to one embodiment, if the electronic device (200) determines that the user's age is equal to or greater than the fourth threshold, the electronic device (200) may perform operation 1070. According to one embodiment, if the electronic device (200) determines that the user's age is less than the fourth threshold or there is no user age information, the electronic device (200) may perform operation 1060.
  • the electronic device (200) may obtain visibility information indicating that visibility to users is secured.
  • operation 1060 may be a part of operation 920 of FIG. 9.
  • in operation 1070, the electronic device (200) may obtain visibility information indicating that visibility to users is not secured.
  • operation 1070 may be a part of operation 920 of FIG. 9.
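  • As an illustration of the threshold checks of operations 1020 to 1070, the following minimal Python sketch (hypothetical; the concrete threshold values are placeholders that, per the disclosure, may change with device characteristics and settings) returns whether visibility is secured.

```python
# Minimal sketch of the sequential threshold checks in FIG. 10. Missing
# information skips the corresponding check, as described above.
from typing import Optional

def visibility_secured(distance_m: Optional[float],
                       area_m2: Optional[float],
                       num_users: Optional[int],
                       max_age: Optional[int],
                       t_distance=3.0, t_area=1.0, t_users=10, t_age=60) -> bool:
    if distance_m is not None and distance_m >= t_distance:
        return False          # operation 1070: visibility not secured
    if area_m2 is not None and area_m2 <= t_area:
        return False
    if num_users is not None and num_users >= t_users:
        return False
    if max_age is not None and max_age >= t_age:
        return False
    return True               # operation 1060: visibility secured

print(visibility_secured(distance_m=5.0, area_m2=None, num_users=30, max_age=None))  # False
```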
  • in operation 1080, the electronic device (200) may generate a prompt using the visibility information.
  • operation 1080 may be a part of operation 930 of FIG. 9.
  • FIG. 14 is a flowchart (1400) illustrating a process for generating content in a blank space within a projection area, according to one embodiment of the present disclosure.
  • the respective operations may be performed sequentially, but are not necessarily performed sequentially. For example, the order of the respective operations may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 14 may be performed subsequent to operation 420 of FIG. 4. At least some of the operations of FIG. 14 will be described below with reference to FIG. 15.
  • FIG. 15 is a diagram illustrating an example of generating content in a blank space within a projection area, according to one embodiment of the present disclosure.
  • the electronic device (200) may compare the area of the projecting area with the area where the first content is displayed. According to one embodiment, the electronic device (200) may compare the area of the projecting area determined in operation 420 with the area of the area where the first content, which is the original content, is displayed. For example, in situation 1510 of FIG. 15, the electronic device (200) may compare the area of the area (1513) where the first content, which is the original content, is displayed with the area of the projecting area (1511). The electronic device may determine that the area of the projecting area (1511) is larger than the area (1513) where the first content is displayed.
  • if the electronic device (200) determines that the area of the projecting area is larger than the area where the first content is displayed, it may perform operation 1420. According to one embodiment, if the electronic device (200) determines that the area of the projecting area is less than or equal to the area of the area where the first content is displayed, the electronic device (200) may perform operation 430.
  • the electronic device (200) may determine whether the second content can be generated in an area corresponding to the determined content display area. For example, the electronic device (200) may determine whether the second content can be generated based on context information. The electronic device (200) may determine whether the second content can be generated based on at least one of the capacity of the memory (230), battery information of the electronic device (200), the communication speed of the electronic device (200), the number of applications running on the electronic device, or user information. For example, the electronic device (200) may determine that the remaining capacity of the memory (230) is insufficient to generate the second content if it is below a threshold value. The electronic device may determine that the remaining capacity of the memory is sufficient to generate the second content if it exceeds the threshold value.
  • the electronic device may determine that the battery is sufficient to generate the second content if it is above a threshold value based on battery information.
  • the electronic device may determine that the situation is sufficient to generate the second content if the communication speed is above a threshold.
  • the electronic device may determine that the situation is sufficient to generate the second content if the Internet speed is above a threshold.
  • the electronic device may determine that the situation is sufficient to generate the second content if the number of applications running on the electronic device is below a threshold.
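  • As an illustration of the resource checks used to decide whether the second content can be generated, the following minimal Python sketch (hypothetical; all threshold values are assumptions) combines the memory, battery, communication-speed, and running-application conditions described above.

```python
# Minimal sketch (thresholds are assumptions) of the resource checks used to
# decide whether the second content can be generated (operation 1420).
def can_generate_second_content(free_memory_mb: float,
                                battery_pct: float,
                                link_mbps: float,
                                running_apps: int) -> bool:
    return (free_memory_mb > 200        # remaining memory above a threshold
            and battery_pct >= 20       # battery above a threshold (or charging)
            and link_mbps >= 5          # communication/Internet speed sufficient
            and running_apps <= 8)      # not too many applications running

print(can_generate_second_content(512, 80, 50, 3))  # True
```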
  • the electronic device (200) may perform operation 1430. According to one embodiment, if the electronic device (200) does not determine that the situation is sufficient to generate the second content, it may perform operation 1440.
  • the electronic device (200) may generate second content including an outpainting image or an outpainting video.
  • an outpainting image or an outpainting video may refer to an image or video that can be displayed by extending from existing content around an area where existing content (e.g., an original image, an original video) is displayed on the electronic device (e.g., an image or video including an object that is continuously displayed from the boundary of the original image or the original video).
  • the electronic device may output second content including the existing content and the outpainting image, for example, as illustrated in FIG. 15.
  • the electronic device (200) may generate second content (1531) of a situation 1530 including an outpainting image by using first content (1513), which is existing content of a situation 1510.
  • the electronic device (200) can generate content (1535) that is displayed as an extension from existing content (1513), which is an outpainting image, and output second content (1531) that displays the generated outpainting content (1535) and existing content (1533) together.
  • the present invention is not limited thereto.
  • the electronic device (200) can also generate content related to existing content in a manner other than generating an outpainting image, and display the content around the existing content.
  • the electronic device (200) may determine an area corresponding to the projection area as a content display area and generate second content including an outpainting image (1535) or an outpainting video in the area corresponding to the content display area. For example, in a situation where the area of the projection area is larger than the area of the area where the first content is displayed, the electronic device (200) may determine the area corresponding to the projection area as the content display area. For example, in situation 1510 of FIG. 15, the electronic device (200) may determine an area corresponding to the projection area (1511) as the content display area (1511).
  • the electronic device (200) may determine a region including a first region where first content is displayed and a second region different from the first region as a content display region. For example, in situation 1510 of FIG. 15 , the electronic device (200) may determine a first region (1513) where first content is displayed and a second region (1515) different from the first region (1513) and included in the projecting region (1511) as a content display region.
  • the electronic device (200) may generate second content including an outpainting image generated by a machine-learned artificial intelligence model from content information related to the first content based on context information including a content display area.
  • the electronic device (200) may display, based on the context information including the content display area, first content arranged in the first area and including a first image, and second content including a second image generated by the artificial intelligence model and displayed in a second area different from the first area, for example, as illustrated in FIG. 15.
  • the electronic device (200) may display, based on the determined content display area (1531), first content (1533) arranged in the first area (1533) and second content (1535) generated by the artificial intelligence model and displayed in a second area (1535) in a projection area (1531) different from the first area (1533).
  • the first content or the second content may include at least one of a video, text, or image.
  • the electronic device (200) may generate second content (1531) using an artificial intelligence model and perform operation 450 as a subsequent operation.
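  • As an illustration of determining the blank second area to be filled with outpainted content, as in situation 1530 of FIG. 15, the following minimal Python sketch (hypothetical; Region and split_for_outpainting are illustrative names) splits the projecting area around the first content.

```python
# Minimal sketch (illustrative) of splitting the projecting area into a first
# area holding the original content and a blank second area for outpainting.
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

def split_for_outpainting(projecting: Region, first: Region) -> Region:
    """Return the blank strip to the right of the first content.
    A real layout step could also consider strips above, below, or to the left."""
    blank_x = first.x + first.w
    blank_w = (projecting.x + projecting.w) - blank_x
    return Region(blank_x, projecting.y, max(blank_w, 0), projecting.h)

projecting = Region(0, 0, 1920, 1080)
first_area = Region(0, 0, 1280, 1080)
print(split_for_outpainting(projecting, first_area))  # Region(x=1280, y=0, w=640, h=1080)
```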
  • an ambient light image or an ambient light video may refer to an image or video that may be displayed by extending from existing content around an area where existing content (e.g., an original image or an original video) is displayed.
  • the electronic device may output second content including the existing content and an ambient light image or video.
  • the ambient light image may be an image generated using at least one of a color or an outline associated with the existing content.
  • content generated based on an ambient light technique may include simplified content compared to content generated based on an outpainting technique, for example, as illustrated in FIG. 15.
  • the electronic device (200) can generate second content (1551) of situation 1550 including an ambient light image (1555) or an ambient light video using first content (1513), which is existing content of situation 1510.
  • the electronic device (200) can generate an ambient light image or an ambient light video (1555) associated with the existing content (1513) using colors and outlines associated with the existing content (1513) through an ambient light technique, and output second content (1551) that displays the generated content (1555) and the existing content (1553) together.
  • the electronic device may generate an ambient light image or an ambient light video when it is difficult to smoothly generate second content based on at least one of the capacity of the memory (230), battery information of the electronic device (200), communication speed of the electronic device (200), number of applications running on the electronic device, or user information.
  • the present invention is not limited thereto.
  • the electronic device (200) may also generate content related to existing content in a manner other than the ambient light technique and display the content around the existing content.
  • the electronic device (200) may determine an area corresponding to the projection area as a content display area and generate second content including ambient light content in the area corresponding to the content display area. For example, in a situation where the area of the projection area is larger than the area of the area where the first content is displayed, the electronic device (200) may determine the area corresponding to the projection area as the content display area. For example, in situation 1510 of FIG. 15, the electronic device (200) may determine an area corresponding to the projection area (1511) as the content display area (1511).
  • the electronic device (200) may determine a region including a first region where first content is displayed and a second region different from the first region as a content display region. For example, in situation 1510 of FIG. 15 , the electronic device (200) may determine a first region (1513) where first content is displayed and a second region (1515) different from the first region (1513) and included in the projecting region (1511) as a content display region.
  • the electronic device (200) may generate second content including ambient light content through a machine-learned artificial intelligence model from content information related to the first content, based on context information including the content display area.
  • the electronic device (200) may display, based on the context information including the content display area, first content arranged in the first area and including a first image, and second content including a second image generated by the artificial intelligence model and displayed in a second area different from the first area, for example, as illustrated in FIG. 15.
  • the electronic device (200) may display, based on the determined content display area (1551), first content (1553) arranged in the first area (1553) and second content (1555) generated by the artificial intelligence model and displayed in a second area (1555) in a projection area (1551) different from the first area (1553).
  • the first content or the second content may include at least one of a video, text, or image.
  • the electronic device (200) may generate second content (1551) using an artificial intelligence model and perform operation 450 as a subsequent operation.
  • FIG. 16 is a flowchart (1600) illustrating a process for generating second content by replacing a portion of first content, according to one embodiment of the present disclosure.
  • the operations may be performed sequentially, but are not necessarily performed sequentially. For example, the order of the operations may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 16 may be performed subsequent to operation 420 of FIG. 4. At least some of the operations of FIG. 16 will be described below with reference to FIG. 17.
  • FIG. 17 is a diagram illustrating an example of generating second content by replacing a portion of first content, according to one embodiment of the present disclosure.
  • the electronic device (200) may determine a main area and a sub area within the first content. According to one embodiment, the electronic device (200) may determine a main area and a sub area within the content display area. For example, in situation 1710 of FIG. 17, the electronic device (200) may determine the content display area as an area including both the third area (1711) and the fourth area (1713). The electronic device (200) may determine the main area (1711) and the sub area (1713) within the content display area.
  • the third area (1711), which is the main area, may be an area including a location at which the user is gazing.
  • the electronic device (200) may use user information to determine information about the location where the user is gazing, and the electronic device (200) may determine the area including the location where the user is gazing as the main area (1711).
  • the first content may be content that includes both the third area (1711) and the fourth area (1713), as illustrated in situation 1710 of FIG. 17.
  • the electronic device (200) may generate content corresponding to a sub-region based on the content of the main region.
  • the electronic device (200) may generate content corresponding to the sub-region through a machine-learned artificial intelligence model based on the determined content of the main region.
  • the electronic device (200) may generate content (1723) corresponding to the sub-region through a machine-learned artificial intelligence model based on the determined content (1711) of the main region.
  • the content generated in the sub-region (1723) may be content generated through an artificial intelligence model based on the content of the third region (1721), which is the main region (1721).
  • the content (1723) generated in the sub-region may be content that is related to at least one of the background, a person, or a feeling of the content (1721) of the main region.
  • the electronic device (200) may obtain second content in which the sub-region is replaced with the generated content.
  • the electronic device (200) may obtain second content including content placed in the main region (third region) and content generated by an artificial intelligence model and placed in the sub-region (fourth region).
  • the electronic device (200) may obtain second content including content placed in the main region (1721) and content generated by an artificial intelligence model and placed in the sub-region (1723).
  • the content in the third region (1721) may be placed as part of the second content without change.
  • the original content (1713) in the fourth region may be replaced with content (1723) generated for the fourth region by an artificial intelligence model.
  • the electronic device (200) may acquire second content including a third image arranged in a third area (1721) and a fifth image (1723) generated by an artificial intelligence model and displayed in a fourth area (1723).
  • the fifth image (1723) may be an image related to the third image (1711).
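  • As an illustration of replacing the sub region with content generated from the main region, as in FIG. 17, the following minimal Python sketch (hypothetical; the model interface is an assumption) composes the second content.

```python
# Minimal sketch (hypothetical interfaces) of the replacement step of FIG. 17:
# keep the main region the user is gazing at and regenerate the sub region
# with related content from a generative model.
def compose_second_content(main_region_content, sub_region_content, model):
    """`model.generate_related(content)` is an assumed call that returns content
    matching the background, people, or mood of the main region."""
    generated_sub = model.generate_related(main_region_content)
    return {"main": main_region_content,      # third region: unchanged
            "sub": generated_sub}             # fourth region: replaced

class DemoModel:
    def generate_related(self, content):
        return f"scene extending '{content}'"

print(compose_second_content("sunset over the sea", "old advert panel", DemoModel()))
```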
  • the content according to embodiments may include at least one of text, an image, or a video.
  • the electronic device (200) may perform operation 450 of FIG. 4 .
  • FIG. 18 is a flowchart (1800) illustrating a process for generating second content by considering user preference information according to one embodiment of the present disclosure.
  • the respective operations may be performed sequentially, but are not necessarily performed sequentially. For example, the order of the respective operations may be changed, and at least two operations may be performed in parallel.
  • the process illustrated in FIG. 18 may be performed subsequent to operation 420 of FIG. 4. At least some of the operations of FIG. 18 will be described below with reference to FIG. 19.
  • FIG. 19 is a diagram illustrating an example of generating content expanded to a certain area by considering user preference information according to one embodiment of the present disclosure.
  • the electronic device (200) may determine whether user preference information exists.
  • the electronic device (200) may determine whether user preference information included in context information exists in the memory (230) or an external device.
  • the user preference information may include content information associated with the user's gaze information, preference information input by the user, content information consumed by the user, or user behavior information.
  • the user preference information may be information collected by at least one of information collected by the sensor unit (210), information stored in the memory (230), or information collected by an external device.
  • the user preference information may be information indicating a preference for one character among various characters.
  • if the electronic device (200) determines that the user preference information does not exist, it may perform operation 430 of FIG. 4.
  • if the electronic device (200) determines that user preference information exists, it may perform operation 1820.
  • the electronic device (200) may determine whether visibility is difficult to secure.
  • the electronic device (200) may obtain visibility information by performing at least one of operation 1020, operation 1030, operation 1040, operation 1050, operation 1060, or operation 1070 of FIG. 10.
  • the electronic device (200) may obtain visibility information indicating that visibility is secured by comparing information related to distance, area, number of users, or age with a threshold.
  • the electronic device (200) may obtain visibility information indicating that visibility is not secured by comparing information related to distance, area, number of users, or age with a threshold.
  • when the electronic device (200) obtains visibility information indicating that visibility is not secured, it may perform operation 1830.
  • when the electronic device (200) obtains visibility information indicating that visibility is secured, it can perform operation 430 of FIG. 4.
  • the electronic device (200) may generate second content by enlarging a portion of the first content.
  • the electronic device (200) may generate second content by enlarging a portion of the first content based on user preference information through an artificial intelligence model.
  • the electronic device (200) may select a portion of the first content based on user preference information and generate second content by enlarging the selected portion of the first content.
  • the electronic device (200) may generate a prompt to enlarge the selected portion of the first content based on user preference information.
  • the electronic device (200) may generate the second content using the generated prompt through a machine-learned artificial intelligence model, for example, as illustrated in FIG. 19.
  • the electronic device (200) may determine a portion of the first content (1911) as an area where one person is located based on user preference information among various people included in the first content (1911) illustrated in situation 1910.
  • the electronic device (200) may generate second content (1921) enlarged to a determined portion of the area, as illustrated in situation 1920.
  • the electronic device (200) may determine a portion of the area as an area where at least one object is located, based on user preference information, among various objects included in the first content.
  • the electronic device may generate second content enlarged to the determined portion of the area where the at least one object appears.
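  • As an illustration of enlarging the portion of the first content that matches user preference information, as in situations 1910 and 1920, the following minimal Python sketch (hypothetical; the object map and padding are assumptions) computes a crop around the preferred object.

```python
# Minimal sketch (names are assumptions) of enlarging the portion of the first
# content that matches the user's preference.
def crop_to_preferred(frame_objects: dict, preferred: str, frame_size=(1920, 1080)):
    """frame_objects maps an object label to its bounding box (x, y, w, h)."""
    box = frame_objects.get(preferred)
    if box is None:
        return (0, 0, *frame_size)        # preference not found: keep full frame
    x, y, w, h = box
    # Pad the box slightly so the enlarged crop keeps some surrounding context.
    pad = int(0.1 * max(w, h))
    return (max(x - pad, 0), max(y - pad, 0), w + 2 * pad, h + 2 * pad)

objects = {"character_a": (600, 200, 300, 500), "character_b": (100, 250, 280, 480)}
print(crop_to_preferred(objects, "character_a"))  # padded crop around character_a
```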
  • the electronic device (200) may generate second content enlarged to a portion of the first content and then perform operation 450 of FIG. 4 .
  • FIG. 20 is a diagram illustrating a cylindrical electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • identification number 2010 may be a perspective view of a cylindrical electronic device (200).
  • the electronic device (200) may include a projector (2001) (e.g., 160 of FIG. 1 , 220 of FIG. 2 ).
  • the electronic device (200) may include sensors (2012, 2013) (e.g., 210 of FIG. 2 ) on both sides of the projector (2001).
  • the sensors (2012, 2013) may include image sensors.
  • identification number 2020 may be a front view of the cylindrical electronic device (200).
  • the electronic device (200) may include a sensor (2022) on a side of the projector (2001).
  • the electronic device (200) may include a cylindrical projector (2001) and sensors (2012, 2013, 2022, 2032), and the device portion including the cylindrical projector may rotate to change the projected area.
  • identification number 2030 may be a side view of the cylindrical electronic device (200).
  • the electronic device (200) may include a sensor (2032) on the side of the projector (2001).
  • FIG. 21 is a drawing illustrating a robotic electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • identification number 2100 may be a perspective view of a robotic electronic device (200).
  • the electronic device (200) may include a projector (2101).
  • the electronic device (200) may include a sensor (2102) on at least one of the bottom, top, or side of the projector.
  • the electronic device (200) may be a robotic form including wheels (2103).
  • the electronic device (200) may project a screen onto a projection surface while moving using the wheels.
  • the electronic device (200) may move to avoid obstacles and project the screen.
  • the electronic device (200) may move and project the screen when communication is not smooth, the resolution of the screen to be projected is low, or there is a need to project the screen onto another projection surface.
  • FIG. 22 is a drawing illustrating a box-type electronic device as an example of an electronic device according to one embodiment of the present disclosure.
  • reference numeral 2201 may be a top view of a box-type electronic device (200).
  • the electronic device (200) may include a projector (2211) on a side surface.
  • the electronic device (200) may include a sensor (2221) on a top surface.
  • the electronic device (200) may include sensors on a bottom surface and a side surface.
  • reference numeral 2202 may be a perspective view of a box-type electronic device (200).
  • the electronic device (200) may include a projector (2212) on a side surface and a sensor (2222) next to the projector.
  • the electronic device (200) may be used while being placed on a floor surface, as illustrated in reference numeral 2202.
  • the electronic device (200) may be used while being hung from a ceiling.
  • the sensors (2221, 2222) of the electronic device may include an image sensor.
  • FIG. 23 is a diagram for explaining an example of a method of operating a machine-learned artificial intelligence model in an electronic device according to one embodiment of the present disclosure.
  • the electronic device (200) can use context information (2301) in relation to the operation of the artificial intelligence model.
  • the electronic device (200) can use context information (2301) stored in the memory (230).
  • the electronic device (200) can store the context information (2301) illustrated in FIG. 23 in the memory (230).
  • the context information (2301) can include information collected through the sensor unit (210) or information collected through an external device.
  • the context information (2301) can include environmental information about the surrounding environment, information related to a content display area determined by the electronic device, spatial information, or situational information.
  • the artificial intelligence framework (AI framework) (2302) can receive a user input and, based on the user's query, coordinate and control each component necessary to carry out the user's intention.
  • context information (2301) may be transmitted to a prompt generation unit (2312).
  • the prompt generation unit (2312) may be used to generate a prompt suitable for inputting the user input into a large language model (LLM) or a large multimodal model (LMM).
  • the prompt generation unit (2312) may be an AI component that uses a machine learning algorithm or a neural network to develop better prompts over time.
  • the prompt generation unit (2312) may access a prompt library (2303) containing user preference data, a prompt library, and prompt examples based on the user input to generate a prompt, and transmit the generated prompt to the LLM or LMM.
  • the prompt generation unit (2312) may receive information about the external environment of the electronic device from at least one sensor or at least one external device, and generate a prompt based on the received information about the external environment of the electronic device.
  • the prompt generation unit (2312) may transmit the prompt to the content generation unit (2305).
  • the API/plug-in management component (2322) may communicate with external sources when additional information is requested in the course of passing the user input as input to the generative model.
  • the API/plug-in management component (2322) may establish a channel for communicating with the outside of the AI interface through the API, and may enable access to various data sources (e.g., prompt library (2303)) through the established channel.
  • when an application or service needs to perform an action, the API/plug-in management component (2322) may request, through the API, that the application/service component (2304) perform the action that ultimately fulfills the user input, rather than returning an intermediate result.
  • Information obtained from the outside may be used to generate a prompt in the prompt generation unit (2312) together with the user input, or may be passed as input to the generative model.
  • the projection area and content layout management unit (2332) can fine-tune the output from the generative model. For example, the projection area and content layout management unit (2332) can verify that the content generated through the LLM and/or the LMM is not irrelevant, does not contain biased content, and does not contain harmful content. In addition, the projection area and content layout management unit (2332) can determine the degree to which the result matches the user's desired result and, if necessary, can proceed with additional processing. The projection area and content layout management unit (2332) can additionally configure and provide the user with hints to avoid undesired output. The projection area and content layout management unit (2332) can receive information about the projection area from at least one sensor and, if a change in the first content is required, can transmit a signal requesting the generation of second content, together with the information about the projection area, to the content generation unit (2305).
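As a rough illustration of the flow described above, the following is a minimal Python sketch of how context information, the prompt generation unit, and the projection area and content layout management unit might be wired together. All class and function names (ContextInfo, PromptGenerator, LayoutManager, run_pipeline), the size-based check, and the stub model are assumptions for illustration only and are not identifiers or logic taken from the disclosure.

```python
# Hypothetical orchestration of the described pipeline: context information
# feeds prompt generation, the prompt goes to a generative model, and a
# layout manager checks the output against the content display area before
# requesting regeneration. All names are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class ContextInfo:
    environment: dict = field(default_factory=dict)   # sensor-derived environmental info
    display_area: dict = field(default_factory=dict)  # determined content display area
    situation: dict = field(default_factory=dict)     # noise, users, battery, preferences


class PromptGenerator:
    def __init__(self, prompt_library: dict):
        self.prompt_library = prompt_library

    def build(self, user_input: str, context: ContextInfo) -> str:
        # Combine the user input, stored prompt examples, and context into one prompt.
        examples = "; ".join(self.prompt_library.get("examples", []))
        return (f"User request: {user_input}\n"
                f"Environment: {context.environment}\n"
                f"Display area: {context.display_area}\n"
                f"Situation: {context.situation}\n"
                f"Prompt examples: {examples}")


class LayoutManager:
    @staticmethod
    def fits(content: dict, display_area: dict) -> bool:
        # Simplistic check standing in for the verification/fine-tuning step.
        return (content["width"] <= display_area.get("width", 0)
                and content["height"] <= display_area.get("height", 0))


def run_pipeline(user_input, context, prompt_library, generate):
    """`generate` is any callable mapping a prompt string to a content dict."""
    prompt = PromptGenerator(prompt_library).build(user_input, context)
    content = generate(prompt)
    if not LayoutManager.fits(content, context.display_area):
        # Ask the model for second content that fits the current display area.
        content = generate(prompt + "\nRegenerate the content to fit the display area.")
    return content


if __name__ == "__main__":
    # Example usage with a stub generative model.
    stub = lambda p: {"width": 800, "height": 600, "prompt_used": p}
    ctx = ContextInfo(display_area={"width": 1280, "height": 720})
    print(run_pipeline("show a recipe summary", ctx, {"examples": []}, stub)["width"])
```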
  • the content generation unit (2305) may include a generative AI model (2315).
  • the content generation unit (2305) may receive a signal and a prompt requesting the generation of second content and generate the second content through a machine-learned AI model.
  • a generative AI model (2315) may generally refer to an artificial intelligence neural network that creates new types of data based on user input information.
  • the generative AI model (2315) may include a model that generates images and/or a model that generates language.
  • Representative image-generation models include the generative adversarial network (GAN) and the variational autoencoder (VAE); further examples include diffusion-based generative models (e.g., a latent diffusion model such as Stable Diffusion or Illusion Diffusion) that use a VAE together with a transformer structure.
  • a model that generates language is a model that is trained to output the most statistically appropriate output value based on an input value, and representative examples include models such as CHAT-GPT 3 and CHAT-GPT 4.
  • the generative AI model (2315) may also include an LMM that can recognize various types of input data, such as text, images, and voice, and generate new data corresponding to them.
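By way of example only, a latent-diffusion image generator of the kind mentioned above could be invoked through the open-source diffusers library roughly as sketched below. The library choice, the checkpoint name, and the prompt are assumptions made for illustration; the disclosure does not specify any particular library or model checkpoint.

```python
# Illustrative sketch: generating an image with a latent diffusion model
# (Stable Diffusion) via the Hugging Face `diffusers` library. Requires the
# diffusers, transformers, and torch packages and a CUDA-capable GPU.
# The checkpoint name and prompt are placeholder assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "warm ambient light pattern matching a cooking video, soft bokeh"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("second_content.png")
```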
  • FIG. 24 is a flowchart (2400) illustrating a process for generating content by an electronic device according to one embodiment of the present disclosure.
  • the operations may be performed sequentially, but are not necessarily sequential.
  • the order of the operations may be changed, and at least two operations may be performed in parallel.
  • the electronic device (200) may determine whether the projection distance is less than or equal to a threshold distance.
  • the electronic device may determine the projection distance and compare it with the threshold distance. For example, the electronic device may determine the distance between the electronic device and the projection area (e.g., 330 of FIG. 3). The electronic device may determine the distance between the user and the projection area (e.g., 350 of FIG. 3). According to one embodiment, if the projection distance is less than or equal to the threshold distance, the electronic device may perform operation 2430. If the projection distance exceeds the threshold distance, the electronic device may perform operation 2421.
  • the electronic device may obtain visibility information.
  • the electronic device may obtain visibility information by considering distance, area, number of users, and/or age information.
  • the electronic device may obtain visibility information indicating that visibility is not secured when the projecting distance is greater than or equal to a threshold, the projecting area is less than or equal to a threshold, the number of users is greater than or equal to a threshold, and/or the age of the users is greater than or equal to a threshold.
  • operation 2421 may correspond to operations 1020, 1030, 1040, 1050, 1060, and/or 1070 of FIG. 10.
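The threshold comparisons of operation 2421 can be pictured with the small sketch below; the concrete threshold values and the simple boolean combination are illustrative assumptions, since the text only states that each quantity is compared against a threshold.

```python
# Hypothetical visibility check mirroring operation 2421: visibility is
# treated as "not secured" when any threshold condition is met.
# Threshold values are placeholder assumptions.
from dataclasses import dataclass


@dataclass
class VisibilityInfo:
    secured: bool
    reasons: list


def assess_visibility(distance_m: float, area_m2: float,
                      num_users: int, max_user_age: int) -> VisibilityInfo:
    reasons = []
    if distance_m >= 3.0:          # first threshold: projecting distance
        reasons.append("distance too large")
    if area_m2 <= 0.5:             # second threshold: projecting area
        reasons.append("projected area too small")
    if num_users >= 5:             # third threshold: number of users
        reasons.append("too many viewers")
    if max_user_age >= 65:         # fourth threshold: user age
        reasons.append("older viewer present")
    return VisibilityInfo(secured=not reasons, reasons=reasons)


print(assess_visibility(4.2, 0.8, 2, 70))   # not secured (distance, age)
```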
  • the electronic device (200) may determine whether a non-displayable area is included in the projecting area.
  • the non-displayable area may include an area containing an obstacle. For example, if an obstacle exists between the user and the projecting area, the electronic device may determine the area containing the obstacle as a non-displayable area. If an obstacle exists between the electronic device and the projecting area, the electronic device may determine the area containing the obstacle as a non-displayable area. For example, as illustrated in FIG. 7, if there is an obstacle (735) in the projecting area (731), the electronic device may determine the area containing the obstacle (735) as a non-displayable area. For example, as illustrated in FIG. , the electronic device may determine the area containing the obstacle as a non-displayable area.
  • according to one embodiment, the electronic device may perform operation 2422 if the projection area includes a non-displayable area, and may perform operation 2430 if the projection area does not include a non-displayable area.
  • the electronic device (200) may obtain information about a content display area.
  • the content display area may include, within the projecting area, an area excluding the area in which light output from the projector is blocked by an obstacle located between the projector and the projecting area.
  • the content display area may include an area excluding the area occluded by an obstacle located between a user and the projecting area.
  • operation 2422 may be a part of operation 430 of FIG. 4, and the electronic device may determine the content display area according to operation 430.
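One simple way to picture how a content display area can be carved out of the projecting area is the axis-aligned rectangle sketch below. Representing areas as rectangles and keeping the largest remaining strip are illustrative simplifications, not the method actually claimed.

```python
# Hypothetical sketch: remove the bounding box of a detected obstacle from a
# rectangular projecting area and keep the largest remaining rectangle as the
# content display area. Rectangles are (x, y, width, height) in pixels.
def content_display_area(projection, obstacle):
    px, py, pw, ph = projection
    ox, oy, ow, oh = obstacle
    # Candidate strips around the obstacle, clipped to the projecting area.
    candidates = [
        (px, py, max(0, ox - px), ph),                    # left of obstacle
        (ox + ow, py, max(0, px + pw - (ox + ow)), ph),   # right of obstacle
        (px, py, pw, max(0, oy - py)),                    # above obstacle
        (px, oy + oh, pw, max(0, py + ph - (oy + oh))),   # below obstacle
    ]
    return max(candidates, key=lambda r: r[2] * r[3])     # largest area wins


# Example: a 1920x1080 projecting area with an obstacle near its left edge.
print(content_display_area((0, 0, 1920, 1080), (100, 300, 400, 500)))
```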
  • the electronic device (200) may determine whether device status information satisfies a specified condition in operation 2413.
  • the device status information may include movement information of the electronic device, charging information of the electronic device, battery information of the electronic device, and/or noise information.
  • the movement information of the electronic device may be information regarding whether the electronic device is a movable device.
  • the electronic device may be a movable device including wheels (2103) as illustrated in FIG. 21.
  • the charging information of the electronic device may be information indicating whether the electronic device is charging.
  • the battery information of the electronic device may be information indicating the remaining battery of the electronic device.
  • the noise information may be information recognized through the sensor unit (210) of the electronic device (200) and may include a sound signal existing around the electronic device.
  • if the device status information satisfies the specified condition, the electronic device may perform operation 2423; if it does not satisfy the condition, the electronic device may perform operation 2430.
  • an electronic device may determine that a set condition is met if there is movement information indicating that the electronic device is mobile.
  • the electronic device may determine that a set condition is met if the electronic device is charging and has more than 50% remaining battery life.
  • the electronic device may determine that a set condition is met if noise information exceeds a certain decibel level.
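The device-status test of operation 2413 can be sketched as follows. The 50% battery figure follows the example given above, while the 60 dB noise threshold is an assumed placeholder, since the text only refers to "a certain decibel level".

```python
# Hypothetical device-status check for operation 2413. The battery threshold
# follows the 50% example in the text; the noise threshold is an assumption.
def device_status_condition_met(is_mobile: bool, is_charging: bool,
                                battery_pct: int, noise_db: float) -> bool:
    if is_mobile:                          # movable device (e.g., has wheels)
        return True
    if is_charging and battery_pct > 50:   # charging with sufficient battery
        return True
    if noise_db > 60.0:                    # ambient noise above a set decibel level
        return True
    return False


print(device_status_condition_met(False, True, 72, 35.0))   # True -> operation 2423
```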
  • the electronic device (200) may acquire content output setting information.
  • the content output setting information may be information related to device status information.
  • the content output setting information may include volume information, screen resolution information, brightness information, or sound information, as information that must be considered when outputting content.
  • the electronic device (200) may determine whether the light-receiving surface analysis information satisfies a specified condition.
  • the electronic device may collect characteristic information regarding the light-receiving surface through the sensor unit (210). For example, the electronic device may collect color information, curvature information, step information, material information, pattern information, and/or texture information of the external projection surface. If the collected light-receiving surface analysis information satisfies a specified condition by comparing it with a threshold value, the electronic device may perform operation 2424. If the light-receiving surface analysis information does not satisfy the specified condition, the electronic device may perform operation 2430.
  • the electronic device (200) can obtain light-receiving surface characteristic information. For example, the electronic device can determine the color of the external projection surface corresponding to the projection area, and if it is not white, the electronic device can obtain color characteristic information of the light-receiving surface. If the color of a part of the external projection surface is different due to a shadow, the electronic device can obtain related light-receiving surface characteristic information. If the external projection surface has a curve or a step, the electronic device can obtain information related to the distance difference that occurs between the electronic device and the projection area due to the curve or the step. If the external projection surface is made of a flashy material or has a pattern, the electronic device can obtain information regarding the material and pattern of the light-receiving surface.
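As a rough illustration, characteristic information about the light-receiving surface could be derived from a camera frame as in the sketch below. The whiteness and variance heuristics, and their thresholds, are assumptions standing in for whatever analysis the sensor unit actually performs.

```python
# Hypothetical light-receiving surface analysis: estimate whether the surface
# is non-white or strongly patterned from a camera frame. Heuristics and
# thresholds are illustrative assumptions only.
import numpy as np


def analyze_surface(frame: np.ndarray) -> dict:
    """frame: HxWx3 uint8 RGB image of the projection surface."""
    mean_rgb = frame.reshape(-1, 3).mean(axis=0)
    gray = frame.mean(axis=2)
    return {
        "mean_color": mean_rgb.tolist(),
        "non_white": bool(np.any(mean_rgb < 200)),   # far from a white wall/screen
        "patterned": bool(gray.std() > 40.0),        # high variance suggests a pattern
    }


# Example with a synthetic beige, lightly textured surface.
rng = np.random.default_rng(0)
surface = np.clip(rng.normal([205, 190, 160], 10, (480, 640, 3)), 0, 255).astype(np.uint8)
print(analyze_surface(surface))
```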
  • the electronic device (200) can determine whether multiple users exist. According to one embodiment, the electronic device can determine whether multiple users exist through user information collected by the sensor unit (210) or an external device. If multiple users exist, the electronic device can perform operation 2425, and if only one user exists, the electronic device can perform operation 2430.
  • the electronic device may acquire user information.
  • the user information may include information about users gazing at the projection area, information about people speaking around the projection area, user motion information, information about the user located farthest away, or user identification information.
  • information about users gazing at the projection area may include information about the number of users among multiple users gazing at the projection area.
  • Information about people speaking around the projection area may include information about a presenter speaking around the projection area.
  • User motion information may include information about a user bending or approaching the projection area, or information about whether the user performs a set motion.
  • the user motion information may include motions performed by the user due to visibility issues with the projection area.
  • User identification information may include information about some users when content needs to be displayed to some users.
  • the electronic device (200) may determine whether the projection area and the first content area satisfy a specified condition. According to one embodiment, the electronic device may compare the projection area and the first content area to determine whether the projection area has a larger area than the first content area. If the projection area is larger than the first content area, the electronic device may perform operation 2426. If the projection area and the first content area have the same area, the electronic device may perform operation 2430.
  • the electronic device may acquire content information.
  • the content information may include information related to the original content, i.e., first content.
  • the content information may include content information required to generate an outpainting image, an outpainting video, an ambient light image, or an ambient light video.
  • the electronic device may generate second content based on the acquired information.
  • the electronic device may generate the second content through a machine-learned artificial intelligence model based on the acquired information.
  • the electronic device may generate content that summarizes the text by reducing the amount of text for the first content consisting of text.
  • the electronic device may generate content that summarizes the text for the first content consisting of text and enlarges the font so that visibility can be secured.
  • the electronic device may generate content (1203) that summarizes the text for the first content (1201) and enlarges the font.
  • the electronic device may generate content that enlarges a portion of the first content consisting of an image or video so that a portion determined to be important is included.
  • the electronic device may determine the portion determined to be important based on at least one of the area, amount, size, position, or setting within the first content.
  • the electronic device may generate second content that reflects the user's preference based on the visibility information indicating that visibility is not secured.
  • the electronic device (200) can generate content in the form of enlarged or cropped images, videos, or texts centered on relevant content, reflecting the user's preferences.
  • the electronic device can generate content that summarizes the user's preferred content, reflecting the user's preferences for text content. As illustrated in FIG. 19, the electronic device can generate content by enlarging the first content to fit a selected area (1912) based on the user's preference information.
  • according to one embodiment, when the electronic device acquires information about a content display area, it can generate second content based on the content display area. For example, the electronic device can generate second content according to operation 440 of FIG. 4. For example, the electronic device can generate second content suitable for the content display area (733), as shown in FIG. 7.
  • an electronic device can generate second content based on content output setting information. For example, the electronic device can change the location of the electronic device based on information that the electronic device is capable of moving on its own and generate content based on the changed location. If content needs to be generated for a specific person, the electronic device can generate content tailored to the specific person based on the movement of the electronic device. For example, the electronic device can generate content identical to the original content based on information that the electronic device is charging. The electronic device can generate content with the resolution, brightness, or volume adjusted based on information that the electronic device's battery is insufficient. The electronic device can generate content identical to the original content based on information that the electronic device's battery is sufficient. The electronic device can analyze ambient noise signals and generate content with a volume and sound that can be heard by the user even in a noisy environment. The noise-related information can include voice signals collected from the external environment, excluding voice signals included in the content of the electronic device.
  • the electronic device can generate second content based on the light-receiving surface characteristic information. For example, if the color of the external projection surface is not white, the electronic device can generate content in which the brightness, contrast, or color temperature of the content is adjusted so that the color of the original content is expressed. If the color of some areas of the external projection surface is different due to shadows, the electronic device can generate content with reduced contrast by adjusting the brightness, contrast, or color temperature of those areas with different colors. If the external projection surface has a curve or a step, the electronic device can generate content that enlarges content in a distant area in proportion to the distance difference between the electronic device and the projection area due to the curve or step.
  • the electronic device can generate content in which the clarity of content in a distant area is increased in proportion to the distance difference between the electronic device and the projection area due to the curve or step. If the external projection surface is made of a flashy material, the electronic device can generate content with enhanced background color contrast, and can generate content using black and white or outline processing for video or image content. If the external projection surface has a pattern, the electronic device can generate content with enhanced background color contrast and outline or black and white processing.
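Those adjustments could look roughly like the Pillow-based sketch below. Which enhancement is applied and the enhancement factors are assumptions; the disclosure only states that brightness, contrast, color temperature, outline processing, or black-and-white processing may be adjusted for non-white, flashy, or patterned surfaces.

```python
# Hypothetical content adjustment for a non-ideal projection surface using
# Pillow: boost brightness/contrast for a non-white surface, and fall back to
# grayscale plus edge emphasis for flashy or patterned surfaces.
# Enhancement factors are illustrative assumptions.
from PIL import Image, ImageEnhance, ImageFilter, ImageOps


def adapt_content(img: Image.Image, surface: dict) -> Image.Image:
    out = img
    if surface.get("non_white"):
        out = ImageEnhance.Brightness(out).enhance(1.2)
        out = ImageEnhance.Contrast(out).enhance(1.3)
    if surface.get("patterned") or surface.get("flashy"):
        out = ImageOps.grayscale(out).convert("RGB")
        edges = out.filter(ImageFilter.FIND_EDGES)
        out = Image.blend(out, edges, alpha=0.3)     # emphasize outlines
    return out


frame = Image.new("RGB", (640, 360), (40, 90, 160))  # stand-in for first content
adapted = adapt_content(frame, {"non_white": True, "patterned": True})
adapted.save("adapted_content.png")
```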
  • an electronic device can generate second content based on user information. For example, the electronic device can generate content that allows multiple users to view the content. As described with reference to FIG. 11, if there are multiple users (1103), the electronic device can change the font and generate content that summarizes the content so that all users can view the content. If there is only one user, the electronic device can generate content identical to the original content, as in situation 1101 of FIG. 11. For example, if content needs to be displayed to some users based on user identification information, the electronic device can change the location of the content, adjust the size of the content, or generate content so that it is easily visible to some users based on information about the users to whom the content needs to be displayed.
  • the electronic device can generate content that is displayed according to the user's changed location based on user motion information. For example, the electronic device can recognize a user's motion of bending toward the projection area or a user's motion of bending forward due to not recognizing the content, and generate content with an adjusted location and size that can alleviate user discomfort. For example, as illustrated in FIGS. 16 and 17, the electronic device can use information related to a location where the user is looking to generate content in a sub-area (1713) that is replaced with content related to the main area (1711), which is an area that includes the location where the user is looking.
  • an electronic device can generate second content using content information. For example, as illustrated in FIGS. 14 and 15, if there is a blank space (1515) in addition to the original content, the electronic device can generate content for the second area (1515), which is the blank space.
  • the electronic device can generate the second content by generating an outpainting image (or video) or an ambient light image (or video) in the second area using the first content of the first area (1513).
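A crude stand-in for the ambient-light fill of the blank second area is sketched below: the first content is enlarged and heavily blurred to cover the whole projecting area, and the original content is then re-placed in the first area. A real implementation would instead use an outpainting or generative model as described above, so the sizes, blur radius, and compositing here are assumptions made only for illustration.

```python
# Hypothetical ambient-light fill: place the first content in the first area
# and fill the remaining (second) area with a heavily blurred, enlarged copy,
# approximating an ambient-light image.
from PIL import Image, ImageFilter


def fill_second_area(first_content: Image.Image,
                     projection_size=(1920, 1080)) -> Image.Image:
    canvas_w, canvas_h = projection_size
    # Ambient background: stretch the content over the projecting area, then blur.
    ambient = first_content.resize(projection_size).filter(
        ImageFilter.GaussianBlur(radius=40))
    # Re-place the original first content centered in its first area.
    x = (canvas_w - first_content.width) // 2
    y = (canvas_h - first_content.height) // 2
    ambient.paste(first_content, (x, y))
    return ambient


first = Image.new("RGB", (1280, 720), (200, 120, 60))   # stand-in for first content
fill_second_area(first).save("second_content_with_ambient_fill.png")
```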
  • An electronic device (e.g., an electronic device (101) of FIG. 1, an electronic device (200) of FIG. 2) according to various embodiments of the present disclosure may include a memory (e.g., a memory (130) of FIG. 1, a memory (230) of FIG. 2) for storing instructions, a projector (e.g., a display module (160) of FIG. 1, a projector (220) of FIG. 2) for projecting a screen including content onto a projection surface of an external object, at least one sensor (e.g., a sensor module (176) of FIG. 1, a sensor unit (210) of FIG. 2) configured to acquire environmental information about a surrounding environment in which the projector projects a screen, and at least one processor (e.g., a processor (120) of FIG. 1, a processor (240) of FIG. 2).
  • when the instructions are executed, the at least one processor may execute an application that displays a screen including first content through the projector (e.g., the display module (160) of FIG. 1, the projector (220) of FIG. 2). Based on the executed application, a projection area including an area where light emitted from the projector (e.g., the display module (160) of FIG. 1, the projector (220) of FIG. 2) is irradiated onto the projection surface may be determined.
  • a content display area may be determined within the projection area based on the environmental information.
  • data including second content generated from content information related to the first content through a machine-learned artificial intelligence model may be acquired based on context information including at least one of the content display area or the environmental information.
  • a screen including the second content may be projected onto the projection surface through the projector.
  • the at least one processor may, when the instructions are executed, generate a prompt including information related to a property of the second content based on at least one of the context information and the content information, and input the prompt to the machine-learned artificial intelligence model to obtain data including the second content.
  • the context information may further include situation information related to a situation in which the projector projects a screen, and the situation information may include information acquired from the memory or received from an external device.
  • the at least one processor (e.g., the processor (120) of FIG. 1, the processor (240) of FIG. 2) may, when the instructions are executed, acquire data including the second content based on the context information including the situation information.
  • the at least one sensor (e.g., the sensor module (176) of FIG. 1, the sensor unit (210) of FIG. 2) may include an image sensor, and the environmental information may be identified through image recognition of an image acquired through the image sensor and may include spatial information related to an object located around the electronic device.
  • the spatial information is information about the external environment of the electronic device detected through the image sensor, and may include at least one of projecting distance information, projecting area information, location information of the user, or obstacle information.
  • the situation information may include at least one of noise information, user age information, user preference information, or battery information.
  • the second content may include a summary text that reduces the number of characters by summarizing text included in the first content based on the context information.
  • the second content may include at least one of an enlarged image obtained by enlarging an image included in the first content based on the context information, a reduced image obtained by reducing an image included in the first content, a composite image obtained by synthesizing two or more images included in the first content, and a predicted image generated based on a prediction result of the artificial intelligence model from the first content.
  • the at least one processor (e.g., the processor (120) of FIG. 1, the processor (240) of FIG. 2) can identify an obstacle disposed between the projection area and the projector, determine the content display area by excluding an area including the identified obstacle from the projection area, change the content display area when the obstacle moves after projecting a screen including the second content onto the projection surface, and obtain data including third content different from the second content from content information related to the first content based on the changed content display area and project a screen including the third content onto the projection surface.
  • the context information may include at least one of a distance between the projecting area and a user, an area of the projecting area, the number of users, or age information of the users.
  • the at least one processor (e.g., the processor (120) of FIG. 1, the processor (240) of FIG. 2) may, when the instructions are executed, obtain visibility information indicating whether the user can view the first content based on at least one of the distance, the area, the number of users, or the age information of the users, and may generate the prompt using the obtained visibility information.
  • when the instructions are executed, the at least one processor may obtain visibility information indicating that the first content is not viewable by the user based on at least one of the following: the distance being equal to or greater than a first threshold, the area being equal to or less than a second threshold, the number of users being equal to or greater than a third threshold, or the age information of the users being equal to or greater than a fourth threshold.
  • the first content may include a first image displayed in a first area within the projecting area
  • the content display area may include the first area and a second area different from the first area
  • the second content may include the first image arranged in the first area and a second image generated by the artificial intelligence model and displayed in the second area.
  • the context information may include information about a location at which a user gazes within the projecting area
  • the first content may include a third image displayed in a third area within the projecting area and a fourth image displayed in a fourth area different from the third area
  • the content display area may include the third area including the location at which the user gazes and the fourth area
  • the second content may include the third image arranged within the third area and a fifth image displayed in the fourth area
  • the fifth image may be an image related to the third image.
  • the context information may include situation information including user preference information.
  • the at least one processor (e.g., the processor (120) of FIG. 1, the processor (240) of FIG. 2) may, when the instructions are executed, generate the prompt based on the user preference information, and, using the prompt, acquire second content generated by expanding a portion of the first content selected based on the user preference information through the machine-learned artificial intelligence model.
  • the machine-learned artificial intelligence model may have learned data to generate content displayed in the content display area by changing content placed in the projection area.
  • a method for operating an electronic device may include an operation of executing an application that displays a screen including first content through a projector that projects a screen including content onto a projection surface of an external object, an operation of determining a projection area including an area where light emitted from the projector is irradiated onto the projection surface based on the executed application, an operation of determining a content display area within the projection area based on environmental information acquired by at least one sensor configured to acquire environmental information about a surrounding environment in which the projector projects a screen, an operation of acquiring data including second content generated through a machine-learned artificial intelligence model from content information related to the first content based on context information including at least one of the content display area or the environmental information, and an operation of projecting a screen including the second content onto the projection surface through the projector.
  • the method may further include an operation of generating a prompt including information related to a property of the second content based on at least one of the context information and the content information, and inputting the prompt into the machine-learned artificial intelligence model to obtain data including the second content.
  • the method may include an operation of identifying an obstacle placed between the projection area and the projector, an operation of determining the content display area by excluding an area including the identified obstacle from the projection area, an operation of changing the content display area when the obstacle moves after projecting a screen including the second content on the projection surface, and an operation of acquiring data including third content different from the second content from content information related to the first content based on the changed content display area and projecting a screen including the third content on the projection surface.
  • the context information may include at least one of a distance between the projecting area and the user, an area of the projecting area, the number of the users, or age information of the users, and may include an operation of obtaining visibility information indicating that the first content is not viewable by the user based on at least one of a case in which the distance is greater than or equal to a first threshold, a case in which the area is less than or equal to a second threshold, a case in which the number of the users is greater than or equal to a third threshold, or a case in which the age information of the users is greater than or equal to a fourth threshold, and may include an operation of generating the prompt using the obtained visibility information.
  • the first content may include a first image displayed in a first area within the projecting area
  • the content display area may include the first area and a second area different from the first area
  • the second content may include the first image disposed in the first area and the second image generated by the artificial intelligence model and displayed in the second area.
  • text, images, or videos included in content to be projected through an electronic device are generated and projected according to the surrounding environment of the electronic device, thereby enabling all users viewing the projected content to consume the content without inconvenience. Furthermore, even in situations where the electronic device cannot ensure visibility of the text content, the text can be summarized and projected, thereby maintaining the information transmission effect.
  • the electronic device can generate content according to an area excluding obstacles, thereby enabling efficient information transmission and screen utilization even in environments with obstacles.
  • the electronic device can generate content according to the projection area, thereby increasing screen immersion and enabling efficient screen utilization.
  • a computer-readable storage medium storing one or more programs (software modules) may be provided.
  • the one or more programs stored in the computer-readable storage medium are configured for execution by one or more processors within an electronic device.
  • the one or more programs include instructions that cause the electronic device to execute methods according to embodiments described in the claims or specification of the present disclosure.
  • a function or operation performed by an electronic device may be performed by one or more processors executing one or more instructions stored in a memory.
  • the function or operation of the electronic device mentioned in the present disclosure may be performed by one processor executing one or more instructions, or may be performed by a combination of multiple processors executing one or more instructions.
  • the processor mentioned in the present disclosure may be understood to include a circuit for performing an operation or controlling other components of the electronic device.
  • the one or more processors may include at least one of a central processing unit (CPU), a microprocessor unit (MPU), an application processor (AP), a communication processor (CP), a neural processing unit (NPU), a system on chip (SoC), an application-specific integrated circuit (ASIC), or an integrated circuit (IC) configured to execute one or more instructions.
  • the one or more processors may be configured to perform the operations of the electronic device described above.
  • a program (software module, software) may be stored in a non-volatile memory including a random access memory (RAM), a flash memory, a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a magnetic disc storage device, a compact disc ROM (CD-ROM), digital versatile discs (DVDs) or other forms of optical storage devices, or a magnetic cassette. Alternatively, it may be stored in a memory formed by a combination of some or all of these.
  • the memory may be formed by a single storage medium, or may be formed by a combination of a plurality of storage media.
  • the one or more commands may be stored in a single storage medium, or may be distributed and stored in a plurality of storage media.
  • terms such as “part”, “module”, etc. may refer to a hardware component such as a processor or circuit, and/or a software component executed by a hardware component such as a processor.
  • a “component” or “module” may be implemented by a program stored in an addressable storage medium and executed by a processor.
  • a “component” or “module” may be implemented by components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
  • “comprising at least one of a, b, or c” may mean “comprising only a, including only b, including only c, or including a combination of two or more (including a and b, including b and c, including a and c, or including all of a, b, and c)”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An electronic device according to various embodiments comprises: a memory storing instructions; a projector that projects a screen including content onto a projection surface of an external object; at least one sensor configured to acquire environmental information about a surrounding environment in which the projector projects the screen; and one or more processors. The instructions may be executed by the one or more processors to cause the electronic device to: execute an application for displaying a screen including first content through the projector; determine, based on the executed application, a projection area including an area in which light emitted from the projector is irradiated onto the projection surface; determine a content display area within the projection area based on the environmental information; acquire data including second content generated by a machine-learned artificial intelligence model from content information related to the first content, based on context information including at least one of the content display area or the environmental information; and project a screen including the second content onto the projection surface through the projector.
PCT/KR2025/005797 2024-06-27 2025-04-29 Dispositif électronique comprenant un projecteur et procédé de fonctionnement correspondant Pending WO2026005253A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR20240084221 2024-06-27
KR10-2024-0084221 2024-06-27
KR1020240088096A KR20260001430A (ko) 2024-06-27 2024-07-04 프로젝터를 포함하는 전자 장치 및 그 동작 방법
KR10-2024-0088096 2024-07-04

Publications (1)

Publication Number Publication Date
WO2026005253A1 true WO2026005253A1 (fr) 2026-01-02

Family

ID=98222187

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2025/005797 Pending WO2026005253A1 (fr) 2024-06-27 2025-04-29 Dispositif électronique comprenant un projecteur et procédé de fonctionnement correspondant

Country Status (1)

Country Link
WO (1) WO2026005253A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180060972A (ko) * 2016-11-29 2018-06-07 삼성전자주식회사 전자 장치 및 이의 컨텐츠 요약 방법
KR20220144346A (ko) * 2018-02-01 2022-10-26 삼성전자주식회사 컨텍스트에 따라 이벤트의 출력 정보를 제공하는 전자 장치 및 이의 제어 방법
JP2023068493A (ja) * 2021-11-02 2023-05-17 Necプラットフォームズ株式会社 移動式投射装置、移動式投射システム、移動式投射装置の制御方法、及び、制御プログラム
KR20240000330A (ko) * 2022-06-23 2024-01-02 삼성전자주식회사 영상을 투사하는 외부 전자 장치와 연동하여 영상 컨텐츠를 제공하는 전자 장치 및 이의 제어 방법
JP2024032039A (ja) * 2022-08-29 2024-03-12 カシオ計算機株式会社 制御装置、投影システム、投影領域決定方法及びプログラム

Similar Documents

Publication Publication Date Title
WO2020130691A1 (fr) Dispositif électronique et procédé pour fournir des informations sur celui-ci
WO2022215910A1 (fr) Procédé de partage d'écran et dispositif électronique associé
WO2022177166A1 (fr) Procédé de commande de fréquence de rafraîchissement, et dispositif électronique prenant en charge celui-ci
WO2026005253A1 (fr) Dispositif électronique comprenant un projecteur et procédé de fonctionnement correspondant
WO2023282458A1 (fr) Dispositif électronique et son procédé de fonctionnement
WO2023008854A1 (fr) Dispositif électronique comprenant un capteur optique intégré dans une unité d'affichage
WO2022114809A1 (fr) Dispositif électronique de fourniture de visioconférence et procédé associé
WO2025263768A1 (fr) Dispositif électronique comprenant un écran transparent et procédé de fonctionnement associé
WO2025155065A1 (fr) Dispositif électronique à porter sur soi pour afficher une image de réalité étendue, procédé de fonctionnement et support de stockage
WO2026038686A1 (fr) Appareil et procédé d'affichage d'autocollant comprenant au moins une invite
WO2025193001A1 (fr) Procédé de fourniture de vidéo, dispositif électronique pour sa prise en charge et support de stockage
WO2025230110A1 (fr) Dispositif électronique permettant d'afficher une interface utilisateur et son procédé de fonctionnement
WO2026089202A1 (fr) Dispositif électronique, procédé et support de stockage non transitoire lisible par ordinateur pour changer la posture d'un objet dans une image
WO2026049313A1 (fr) Procédé d'affichage d'un écran d'accueil sur la base d'une catégorie d'applications d'un dispositif électronique et dispositif électronique exécutant ce procédé
WO2025263798A1 (fr) Dispositif électronique pour générer un contenu par identification d'un schéma d'utilisateur et son procédé de fonctionnement
WO2024029740A1 (fr) Procédé et dispositif de production de données de dessin en utilisant un dispositif d'entrée
WO2026049449A1 (fr) Dispositif électronique et procédé d'amélioration d'image
WO2025105807A1 (fr) Dispositif électronique comprenant un projecteur pour émettre un faisceau, et son procédé de fonctionnement
WO2025041978A1 (fr) Procédé et dispositif de commande d'appareil photo selon un changement d'espace
WO2026043090A1 (fr) Dispositif électronique, procédé, et support de stockage non transitoire lisible par ordinateur pour traiter une image
WO2025264081A1 (fr) Dispositif électronique pour afficher un contenu, son procédé de fonctionnement et support d'enregistrement
WO2025018665A1 (fr) Dispositif électronique portable pour ajuster la luminance d'une lumière, son procédé de fonctionnement et support d'enregistrement
WO2026084388A1 (fr) Dispositif électronique pour régler le volume audio d'une vidéo, son procédé de fonctionnement et support de stockage non transitoire lisible par ordinateur
WO2026043034A1 (fr) Dispositif électronique, procédé et support de stockage non transitoire lisible par ordinateur pour éditer un écran d'accueil
WO2024043681A1 (fr) Dispositif électronique monté sur casque pour convertir un écran d'un dispositif électronique en un environnement de réalité étendue, et dispositif électronique connecté à celui-ci

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25827262

Country of ref document: EP

Kind code of ref document: A1