CN116830211A - Machine learning image recognition for classifying conditions based on ventilation data - Google Patents
- Publication number
- CN116830211A (application number CN202180083015.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- ventilation
- image
- human
- ventilation data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present technology relates to methods and systems for identifying conditions from ventilation data. The method may include: acquiring ventilation data of a patient ventilated during a period of time; generating an image based on the acquired ventilation data; providing the generated image as input to a trained machine learning model, wherein the trained machine learning model is trained on images of the same type as the generated image; and generating a predicted condition of the patient based on output from the trained machine learning model. The image may be generated by storing the ventilation data as pixel channel values to produce a human-non-interpretable image.
Description
Technical Field
Medical ventilator systems have long been used to provide ventilation and supplemental oxygen support to patients. These ventilators typically include a connection for pressurized gas (air, oxygen) delivered to the patient through a conduit or tube. Because each patient may require a different ventilation strategy, modern ventilators can be tailored to the specific needs of the individual patient. Determining the specific needs of an individual patient may be based on the clinical condition of the patient.
It is with respect to these and other general considerations that the various aspects of the disclosure have been made. Furthermore, while relatively specific problems may be discussed, it should be understood that the examples should not be limited to solving the specific problems identified in the background or elsewhere in this disclosure.
Disclosure of Invention
The present technology relates to predicting the clinical condition of ventilated patients by using an image-classification machine learning model. In one aspect, the present technology relates to a computer-implemented method for identifying a condition from ventilation data. The method comprises the following steps: acquiring ventilation data of a patient ventilated during a period of time; converting the ventilation data into a human-non-interpretable image having a plurality of pixels, wherein each of the pixels of the human-non-interpretable image is defined by at least a first channel, and a first channel value of a first pixel represents at least a portion of a value of the ventilation data; providing the human-non-interpretable image as an input to a trained machine learning model; and generating a predicted condition of the patient based on output from the trained machine learning model.
In an example, the ventilation data includes at least one of pressure data, flow data, or volume data. In another example, the first pixel is further defined by a second channel value and a third channel value. In a further example, the second channel value represents another portion of the value of the ventilation data and the third channel value represents a sign of the ventilation data. In yet another example, the value of the ventilation data represented by the first channel is a first value of ventilation data at a first point in time; the second channel value represents a second value of ventilation data at a second point in time; and the third channel value represents a third value of ventilation data at a third point in time. In yet another example, the first pixel is further defined by a fourth channel value, and the fourth channel value represents a fourth value of ventilation data at a fourth point in time. In yet another example, the method further includes obtaining control parameters during the time period, wherein the control parameters are also converted into the human-non-interpretable image.
In another example, the human-non-interpretable image has a size of less than 4000 pixels. In a further example, the human-non-interpretable image has a size of less than 1600 pixels. In yet another example, the ventilation data includes pressure data sampled for at least 1000 points in time and flow data sampled for at least 1000 points in time. In yet another example, the machine learning model is a convolutional neural network. In yet another example, the method further includes displaying the predicted condition on a display of the ventilator. In another example, the predicted condition is at least one of asthma, acute respiratory distress syndrome (ARDS), emphysema, or chronic obstructive pulmonary disease (COPD).
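The channel-packing scheme described above can be sketched as follows. This is a minimal illustration under assumed conventions (two magnitude bytes plus a sign channel per sample; the names `encode_sample` and `samples_to_image` are hypothetical), not the exact encoding of the disclosure:

```python
import numpy as np

def encode_sample(value, scale=100.0):
    """Encode one signed ventilation sample into a 3-channel pixel.

    Channel 0: high byte of the scaled magnitude (one portion of the value),
    channel 1: low byte (another portion of the value),
    channel 2: sign flag (255 = negative, 0 = non-negative).
    """
    mag = int(round(abs(value) * scale))
    mag = min(mag, 0xFFFF)  # clamp magnitude to two bytes
    return (mag >> 8, mag & 0xFF, 255 if value < 0 else 0)

def decode_sample(pixel, scale=100.0):
    """Invert encode_sample to recover the original sample value."""
    hi, lo, sign = pixel
    mag = ((hi << 8) | lo) / scale
    return -mag if sign else mag

def samples_to_image(samples, width=40):
    """Pack a 1-D ventilation waveform into a small RGB array --
    a human-non-interpretable image in the disclosure's terms."""
    pixels = [encode_sample(v) for v in samples]
    pad = (-len(pixels)) % width  # pad to a full rectangle
    pixels += [(0, 0, 0)] * pad
    return np.array(pixels, dtype=np.uint8).reshape(-1, width, 3)
```

A 1600-sample pressure trace packed this way fits in a 40x40 image, consistent with the small image sizes mentioned above.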
In another aspect, the present technology relates to a method for identifying a condition from ventilation data. The method comprises the following steps: acquiring ventilation data and control parameters of ventilation of a patient during a period of time; converting the ventilation data into a human-interpretable image having a defined layout, the defined layout comprising a plurality of layout parts including a first part for a graphical representation of ventilation data and a second part for a graphical representation of control parameters; providing the human-interpretable image as an input to a trained machine learning model, wherein the trained machine learning model is trained based on images having the defined layout; and generating a predicted condition of the patient based on output from the trained machine learning model.
In an example, the ventilation data includes at least one of pressure data, flow data, or volume data. In another example, the first portion includes a graphical representation of the pressure data in a first color and a graphical representation of the flow data in a second color. In yet another example, the graphical representation of the pressure data is a graph of pressure data versus time over the period of time. In a further example, the control parameters are graphically represented as bars, wherein the height of each bar represents the value of a control parameter. In yet another example, the defined layout includes a third portion for a scale of the ventilation data.
In another example, the machine learning model is a neural network. In yet another example, the graphical representation of ventilation data includes at least one of a scatter plot, a line plot, a spiral plot, a heat map, a polar plot, or a bar plot. In a further example, the human-interpretable image is generated in response to activating a predictive mode of a ventilator.
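A defined layout of the kind described above can be sketched with plain array operations: one portion for waveform traces in distinct colors, another for control-parameter bars whose height encodes the value. The colors, proportions, and the name `layout_image` are illustrative assumptions, not the disclosure's exact layout:

```python
import numpy as np

def layout_image(pressure, flow, params, height=64, width=128):
    """Compose a defined-layout, human-interpretable image.

    Top half: waveform traces (pressure in the red channel, flow in
    the blue channel). Bottom half: one gray bar per control
    parameter, bar height proportional to the parameter value.
    `params` is a list of (value, max_value) pairs.
    """
    img = np.zeros((height, width, 3), dtype=np.uint8)
    half = height // 2

    def plot(series, channel):
        # Resample the series to the image width and draw one dot per column.
        series = np.asarray(series, dtype=float)
        cols = np.linspace(0, len(series) - 1, width).astype(int)
        vals = series[cols]
        span = float(vals.max() - vals.min()) or 1.0
        rows = ((1 - (vals - vals.min()) / span) * (half - 1)).astype(int)
        img[rows, np.arange(width), channel] = 255

    plot(pressure, 0)  # red channel: pressure trace
    plot(flow, 2)      # blue channel: flow trace

    bar_w = width // max(len(params), 1)
    for i, (value, vmax) in enumerate(params):
        bar_h = int((half - 1) * min(value / vmax, 1.0))
        img[height - bar_h:height, i * bar_w:(i + 1) * bar_w - 2] = 128
    return img
```

Because every training and inference image shares the same layout, the model can learn the positional meaning of each portion, which is the point of the defined layout.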
In another aspect, the present technology relates to a computer-implemented method for identifying a condition from ventilation data. The method comprises the following steps: obtaining ventilation data of a patient ventilated during a period of time, wherein the ventilation data includes at least pressure data and flow data; and converting the ventilation data into a human-non-interpretable image having a plurality of pixels. The plurality of pixels includes: a first pixel having a first channel value representing at least a portion of a value of the pressure data at a first point in time; and a second pixel having a first channel value representing at least a portion of the value of the flow data at the first point in time. The method further comprises: providing the human-non-interpretable image as an input to a trained machine learning model; and generating a predicted condition of the patient based on output from the trained machine learning model.
In an example, the plurality of pixels includes a third pixel having a first channel value representing a value of a control parameter at the first point in time. In another example, the first pixel includes a second channel value representing another portion of the value of the pressure data at the first point in time. In a further example, the first pixel includes a third channel value representing a sign of the value of the pressure data at the first point in time.
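The per-signal pixel arrangement described above (distinct pixels carrying pressure, flow, and a control parameter for the same point in time) might be sketched as one image row per signal. The scaling ranges (0-60 cmH2O pressure, ±120 L/min flow, 0-100 control units) and the name `interleaved_image` are assumptions for illustration:

```python
import numpy as np

def interleaved_image(pressure, flow, control):
    """One pixel per signal per time point: row 0 carries pressure,
    row 1 carries flow, row 2 carries a control parameter, each
    scaled into the first (red) channel of its pixel."""
    n = len(pressure)
    img = np.zeros((3, n, 3), dtype=np.uint8)
    img[0, :, 0] = np.clip(np.asarray(pressure) / 60.0 * 255, 0, 255)
    img[1, :, 0] = np.clip((np.asarray(flow) + 120.0) / 240.0 * 255, 0, 255)
    img[2, :, 0] = np.clip(np.asarray(control) / 100.0 * 255, 0, 255)
    return img
```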
In another aspect, the present technology relates to a computer-implemented method for identifying a condition from ventilation data. The method comprises the following steps: receiving ventilation data including pressure or flow data of a ventilated patient; generating an image comprising pixels having pixel channels, wherein the ventilation data is contained in the pixel channels; providing the generated image as an input to a trained machine learning model to produce an output; classifying a clinical condition of the patient based on the output; and adjusting a ventilation setting or display according to the clinical condition of the patient.
In an example, the ventilation data includes at least one of pressure data, flow data, or volume data. In a further example, the image is a human-non-interpretable image. In yet another example, the method further comprises displaying a clinical condition of the patient and a prompt confirming or rejecting the clinical condition. In yet another example, the method further comprises: receiving a response to the prompt; and enhancing the trained machine learning model based on the received response to the prompt.
In another aspect, the present technology relates to a medical system comprising: a medical ventilator; a processor; a trained machine learning model; and a memory storing instructions that when executed by the processor cause the system to perform a set of operations. The set of operations includes: receiving ventilation data of a patient from the ventilator; generating an image comprising the ventilation data stored in one or more pixel channels of pixels of the image; providing the generated image as an input to the trained machine learning model; outputting a predicted condition of the patient from the trained machine learning model; and adjusting a ventilation setting or display on the ventilator based on the predicted condition.
In an example, the processor and the memory are housed within the ventilator. In another example, the system further includes a server in communication with the ventilator, wherein the processor and the memory are housed within the server. In further examples, the ventilation data includes at least one of pressure data, flow data, or volume data.
In another aspect, the present technology relates to a computer-implemented method for identifying a condition from ventilation data. The method comprises the following steps: acquiring ventilation data of a patient ventilated during a period of time; generating an image based on the acquired ventilation data; providing the generated image as input to a trained machine learning model, wherein the trained machine learning model is trained based on images having the same type as the generated image; and generating a predicted condition of the patient based on output from the trained machine learning model.
In an example, the ventilation data includes at least one of pressure data, flow data, or volume data. In another example, the image is a human interpretable image. In yet another example, the image is a human-non-interpretable image.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. It is to be understood that both the foregoing general description and the following detailed description are explanatory and are intended to provide further aspects and examples of the disclosure as claimed.
Drawings
The present patent or application contains at least one color drawing. The patent office will provide copies of this patent or patent application publication with color drawing(s) upon request and payment of the necessary fee. The accompanying drawings, which form a part hereof, illustrate aspects of the systems and methods described below, and are not meant to limit the scope of the disclosure in any way, which is to be based on the claims.
Fig. 1 is a diagram illustrating an example of a medical ventilator connected to a human patient.
Fig. 2 is a front view of an example display screen.
FIG. 3A depicts an example image processing and condition prediction system.
Fig. 3B depicts a schematic diagram showing features of a server.
Fig. 4A to 4D depict examples of human-interpretable images.
Fig. 4E depicts an example defined layout of a human-interpretable image.
Fig. 4F depicts another example defined layout of a human-interpretable image.
Fig. 5A-5B depict example human-non-interpretable images.
Fig. 6A-6B depict another type of example human-non-interpretable image.
Fig. 7 depicts an example method for operating a ventilator.
FIG. 8 depicts an example method for training a machine learning model.
While examples of the present disclosure are amenable to various modifications and alternative forms, specific aspects are shown by way of example in the drawings and are described in detail below. The intention is not to limit the scope of the disclosure to the specific aspects described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure and the appended claims.
Detailed Description
As briefly discussed above, medical ventilators are used to provide breathing gas to patients who are unable to breathe adequately. The ventilation mode and/or ventilation settings may be set based on the particular patient and any clinical condition of the patient, such as asthma, acute respiratory distress syndrome (ARDS), emphysema, or chronic obstructive pulmonary disease (COPD). By properly adjusting the ventilator settings based on the patient's clinical condition, the ventilator may better support the patient, and the patient may be more likely to recover faster or be weaned from the ventilator sooner. However, identifying the clinical condition of a patient is difficult and is typically based on physical examination of the patient by a doctor or respiratory therapist. In some cases, doctors also use internal imaging procedures, such as x-ray examinations, to identify the clinical condition of the patient.
The present technology provides methods and systems that can automatically predict a clinical condition of a patient based on ventilation data of the patient, such as pressure data, flow data, and/or volume data measured or generated during ventilation of the patient. The present technique may also generate recommended settings for the ventilator based on the predicted clinical condition. The present technology is capable of predicting clinical conditions using machine learning and image recognition techniques. The systems and methods described herein generate images from ventilation data of a patient. These images may be human interpretable images that include a graph of data versus time. The images may also include human non-interpretable images including ventilation data stored in the pixel channels themselves. For example, a pixel may have three channels (e.g., a red channel, a blue channel, and a green channel) that define its display properties. The ventilation data may be stored directly in one or more channels of pixels forming a human-non-interpretable image. The generated images may then be provided to a trained machine learning model, such as a neural network, that has been trained on a series of previous images corresponding to known clinical conditions. Thus, the trained machine learning model is able to classify newly received images for a particular patient and provide predictions of the clinical condition of the patient. By first converting the ventilation data of the patient into an image, the present technique is able to identify a condition (clinical condition of the patient) using image recognition and image classification techniques, rather than based on an image of the patient, such as an x-ray examination of the patient.
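The classify-from-image step of the pipeline just described can be sketched as follows. The `CONDITIONS` list and the stand-in `model` callable are illustrative assumptions; the real system would use a trained neural network in place of the stub:

```python
import numpy as np

# Hypothetical label set; the disclosure names asthma, ARDS,
# emphysema, and COPD as example conditions.
CONDITIONS = ["asthma", "ARDS", "emphysema", "COPD", "normal"]

def predict_condition(ventilation_image, model):
    """Run the image-classification step: normalize the generated
    image into a feature vector, score it with a trained model, and
    return the highest-probability clinical condition.

    `model` is any callable mapping a feature vector to one raw
    score per condition -- a stand-in for the trained CNN.
    """
    features = ventilation_image.astype(np.float32).ravel() / 255.0
    scores = np.asarray(model(features), dtype=float)
    probs = np.exp(scores - scores.max())  # softmax over condition scores
    probs /= probs.sum()
    return CONDITIONS[int(np.argmax(probs))], probs
```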
Fig. 1 is a diagram showing an example of a medical ventilator 100 connected to a patient 150. Ventilator 100 may provide positive pressure ventilation to patient 150. Ventilator 100 includes a pneumatic system 102 (also referred to as pressure generating system 102) for circulating breathing gas to and from patient 150 via a ventilation tube (also referred to as a breathing circuit) 130. The ventilation tube 130 couples the patient 150 to the pneumatic system via a patient interface 180. Patient interface 180 may be invasive (e.g., an endotracheal tube, as shown) or non-invasive (e.g., a nasal or face mask, or a nasal cannula). The ventilator 100 controls the flow of gas into the ventilation tube 130 by controlling (regulating, opening, or closing) an inhalation flow valve or blower, which may be part of the inhalation module 104. Additionally, a humidifier may be placed along the ventilation tube 130 to humidify the breathing gas delivered to the patient 150. Pressure and flow sensors may be located at or near the inhalation module 104 and/or the exhalation module 108 to measure flow and pressure.
The ventilation tube 130 may be a dual-branch circuit (shown) or a single-branch circuit (also referred to as a single limb, with only an inhalation side). In the dual-branch example, a Y-fitting 170 may be provided to couple the patient interface 180 to the inhalation branch 134 and the exhalation branch 132 of the ventilation tube 130.
The pneumatic system 102 may have various configurations. In this example, system 102 includes an exhalation module 108 coupled with the exhalation branch 132 and an inhalation module 104 coupled with the inhalation branch 134. A compressor 106 or blower or other source of pressurized gas (e.g., air, oxygen, and/or helium) is coupled with the inhalation module 104 to provide breathing gas to the inhalation branch 134. The pneumatic system 102 may include various other components, including mixing modules, valves, sensors, tubing, accumulators, filters, etc., which may be internal or external to the ventilator (and may be communicatively coupled to, or capable of communicating with, the ventilator).
The controller 110 is operatively coupled with the pneumatic system 102, the signal measurement and acquisition system, and the user interface 120. The controller 110 may include hardware memory 112, one or more processors 116, storage 114, and/or other components of the type found in command and control computing devices. In the depicted example, the user interface 120 includes a display 122 that may be touch-sensitive and/or voice-activated such that the display 122 can function as both an input device and an output device to enable a user to interact with the ventilator 100 (e.g., change ventilation settings, select an operational mode, view monitored parameters, etc.).
Memory 112 includes a non-transitory computer readable storage medium that stores software that is executed by processor 116 and controls the operation of ventilator 100. In an example, the memory 112 includes one or more solid state storage devices, such as a flash memory chip. In alternative examples, memory 112 may be a mass storage device coupled to processor 116 through a mass storage controller (not shown) and a communication bus (not shown). Although the description of computer-readable media contained herein refers to solid state storage, it should be appreciated by those skilled in the art that computer-readable storage media can be any available media that can be accessed by the processor 116. That is, computer-readable storage media includes non-transitory, volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. For example, computer-readable storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer.
The ventilator may also include one or more image conversion instructions 118. The one or more image conversion instructions 118 include instructions that cause the ventilator 100 to generate images discussed below, such as human-interpretable images and human-non-interpretable images. Additionally, the ventilator may also include one or more Machine Learning (ML) processing instructions 119. The ML processing instructions 119 may include instructions for processing the generated image using a trained ML model, such as a trained neural network. In some examples, the trained ML model may be stored on the ventilator, and classification of new images and prediction of patient condition may be performed at least partially or entirely on the ventilator itself. In other examples, the ventilator may communicate (such as via a wired or wireless distributed network) with other computing devices that perform at least some of the operations described herein.
Fig. 2 is a front view of a display screen 202 coupled to a ventilator according to an embodiment. The display 202 may be mounted to the ventilator, or may be a separate screen, tablet, or computer in communication with the ventilator. The display 202 presents useful information to the user and receives user input. The display presents a user interface in the form of a Graphical User Interface (GUI) 204. The GUI 204 may be an interactive display, such as a touch screen or other display, and may provide various windows (i.e., visual areas) including elements for receiving user input and interface command operations and for displaying ventilation information (e.g., ventilation data such as pressure, volume, and flow waveforms, inspiration time, PEEP, baseline levels, etc.) as well as control information (e.g., alarms, patient information, control parameters, modes, etc.). These elements may include controls, graphics, charts, toolbars, input fields, icons, and the like. The display 202 may also include physical buttons or input elements such as dials, wheels, switches, buttons, and the like.
In the interface 204 depicted in fig. 2, a plurality of graphs are displayed, including a pressure-time graph 206, a flow-time graph 208, a volume-time graph 210, a volume-pressure loop 212, and a flow-volume loop 214. The pressure-time graph 206 plots measured or determined pressure versus time. The pressure may be measured from a pressure sensor in the ventilator or within the ventilation circuit. The pressure values may include inhalation pressure, Y-piece pressure, and/or exhalation pressure, among other pressure types. The flow-time graph 208 plots measured or determined flow versus time. Flow may be measured from a flow sensor in the ventilator or within the ventilation circuit. The flow values may include inspiratory flow, expiratory flow, and/or net flow, among other flow types. The volume-time graph 210 plots the determined volume versus time. The volume may be determined or calculated based on the measured flow values. The volume-pressure loop 212 plots volume versus pressure during patient inspiration and expiration. The flow-volume loop 214 plots inspiratory and expiratory flow (on the y-axis) versus volume (on the x-axis).
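As a sketch of how volume "may be determined or calculated based on the measured flow value," the flow waveform can be numerically integrated; the trapezoidal rule below and the name `volume_from_flow` are assumptions for illustration:

```python
def volume_from_flow(flow_lpm, dt_s):
    """Integrate a flow waveform (in L/min) into cumulative delivered
    volume (in mL) using the trapezoidal rule; dt_s is the sample
    interval in seconds. Returns one volume value per sample."""
    vol_ml = [0.0]
    for f0, f1 in zip(flow_lpm, flow_lpm[1:]):
        # Convert L/min to mL/s (* 1000 / 60) and accumulate.
        vol_ml.append(vol_ml[-1] + 0.5 * (f0 + f1) * dt_s * 1000.0 / 60.0)
    return vol_ml
```

For example, a constant 60 L/min inspiratory flow held for one second corresponds to 1000 mL of delivered volume.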
The interface 204 may also include additional representations of data about the patient, ventilation settings, and/or control parameters. For example, the patient data segment 216 may include information about or based on the patient, such as whether the patient is an infant, pediatric, or adult patient. Patient data may also include patient mass or predicted weight (PBW), absolute tidal volume limits based on PBW, and other types of data. In some examples, patient data may be considered control parameters because patient type and patient's PBW may be utilized in setting pressure, flow, and volume settings.
The interface may also include a first display portion 218 and a second display portion 220 for displaying various indicators of control parameters or other ventilation parameters. For example, these portions may display the values of parameters such as: compliance, resistance, end-expiratory flow, end-inspiratory pressure, total circuit pressure, mandatory tidal volume expired, minute volume expired, spontaneous tidal volume expired, expiratory sensitivity, expiration time, apnea interval, total inspiratory and expiratory flow, flow sensitivity, flow trigger, ratio of inspiratory time to expiratory time (I:E), inspiratory pressure, positive end-expiratory pressure (PEEP), average circuit pressure, percentage of oxygen, percentage support settings, peak pressure, peak flow, plateau pressure, plateau time, offset pressure, pressure support level, pressure trigger, tidal volume inhaled, tidal volume expired, rise time percentage, vital capacity, and volume support settings, and the like.
The interface may also include a settings panel 222 with various icons that may be selected to input different settings of the ventilator. For example, ventilation settings and/or control parameters may be entered, for example, by a clinician based on a prescribed treatment regimen for a particular patient, or automatically generated by a ventilator, for example, based on known attributes of the patient (e.g., age, diagnosis, ideal weight, predicted weight, gender, race, etc.) or based on a predicted clinical condition of the patient. The ventilation settings and/or control parameters may include a number of different settings or parameters, such as respiratory rate (f), tidal Volume (VT), PEEP level, etc.
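Automatically generating settings from a predicted clinical condition, as described above, might be sketched as a lookup that scales volume settings by predicted body weight (PBW). The mapping, the numeric values, and the name `suggest_settings` are illustrative assumptions only, not clinical guidance or the disclosure's actual rules:

```python
# Hypothetical condition-to-settings templates; values are examples only.
SUGGESTED = {
    "ARDS": {"tidal_volume_ml": lambda pbw: 6.0 * pbw, "peep_cmh2o": 10.0},
    "COPD": {"tidal_volume_ml": lambda pbw: 8.0 * pbw, "peep_cmh2o": 5.0},
}

def suggest_settings(condition, pbw_kg):
    """Resolve a predicted condition into concrete setting values,
    scaling weight-based entries by the patient's PBW in kg.
    Returns None when no template exists for the condition."""
    template = SUGGESTED.get(condition)
    if template is None:
        return None
    return {k: (v(pbw_kg) if callable(v) else v) for k, v in template.items()}
```

In the system described here, such suggestions would still be presented to the clinician for confirmation rather than applied blindly.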
FIG. 3A depicts an example image processing and condition prediction system 300. The system 300 includes a distributed network 304 having one or more ventilators 302 and local or remote computing devices. Each of the ventilators 302 can include the features described above. In the example system 300, a ventilator 302 communicates with other computing devices over a distributed network 304. Network 304 may include a plurality of servers or computing devices/components that are local or remote and connected via wired or wireless communications. For example, additional computers 306, smart devices such as tablet 308, server 310, and/or database 312 may be in communication with ventilator(s) 302 via network 304.
The devices of system 300 may receive ventilation data and control parameters from ventilator 302 and then perform one or more operations described herein. For example, one or more devices in system 300 may convert the received ventilation data and control parameters into images discussed herein, such as human-interpretable images and/or human-non-interpretable images. Devices in system 300 may also perform the ML processing operations described herein. For example, the device may process the generated images to classify the images and provide a prediction of the clinical condition of the patient for which ventilation data was generated. The predicted clinical condition may then be transmitted to ventilator 302 from one or more devices from which ventilation data is received such that ventilator 302 may display the predicted clinical condition. As an example, one of the servers 310 may be a training server that trains the ML model. The training server may periodically retrain the ML model using additional training data, and the training server may then deploy the retrained or updated ML model. The same or another server 310 may be a deployment server that processes ventilation data from a ventilated patient using a trained ML model to predict clinical conditions.
Fig. 3B depicts a schematic diagram showing features of example devices of system 300, such as computer 306, tablet 308, server 310, and/or database 312. In its most basic configuration, device 314 typically includes at least one processor 371 and memory 373. Depending on the exact configuration and type of computing device, memory 373 (which stores, among other things, instructions for performing the image generation and ML processing operations disclosed herein) may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. Processor 371 may include a Central Processing Unit (CPU) and/or a Graphics Processing Unit (GPU), as well as other possible types of processors. In some examples, ML model calculations may be performed by one or more GPUs. This most basic configuration is illustrated in fig. 3B, indicated by dashed line 375. Further, device 314 may also include storage devices (removable 377 and/or non-removable 379) including, but not limited to, solid state devices, magnetic or optical disks, or tape. Similarly, the device 314 may also have input device(s) 383 (such as a touch screen, keyboard, mouse, pen, voice input, etc.) and/or output device(s) 381 (such as a display, speakers, printer, etc.). One or more communication connections 385 may also be included, such as LAN, WAN, point-to-point, Bluetooth, RF, or the like.
Device 314 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by processor 371 or other component in device 314. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, solid state storage, or any other tangible and non-transitory medium which can be used to store the desired information.
Communication media embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
According to embodiments described below, a system is provided for predicting a clinical condition of a patient, a condition of a ventilator, or an interaction between a patient and a ventilator, based on a machine learning model that utilizes images of ventilator data. The system accepts current ventilation data (such as pressure, flow, volume, and other data) from the ventilated patient, converts the data to an image format, and then passes the image(s) into a trained ML model that classifies clinical conditions according to the image data. By presenting and analyzing ventilator data in an image format, the system can benefit from established image processing and training techniques. Additionally, patient data (vital signs, ventilation data, and other data as described below) may be processed in a manner that maintains privacy, by formatting the data in a non-human-readable format.
Fig. 4A-4C depict examples of ventilation data presented in the form of human-interpretable images 402A-402C. The human interpretable images 402A-402C are generated based on ventilation data acquired during ventilation of the patient. In the depicted example, the ventilation data includes pressure data, flow data, and volume data. The data may be measured directly from sensors within the ventilator and tube, or the data may be derived or calculated from various measurements. Each image includes ventilation data acquired over a period of time. In the depicted example, the period of time is 24 seconds. In other examples, the period of time may be more or less than 24 seconds. For example, the time period may be less than one minute, less than 30 seconds, or less than 10 seconds. Different images or sets of images may be created from the data over different time periods and then used to identify different types of clinical conditions. The time period may be based on the clinical condition or the type of condition being predicted. For example, some clinical conditions may be classified based on ventilation data for a single breath or for a partial breath (such as an inhalation phase or an exhalation phase of a single breath, which may span several seconds, such as 2 seconds to 10 seconds). Other clinical conditions may require ventilation data for multiple breaths (spanning tens of seconds or minutes, such as 10 seconds up to 2 minutes, 10 minutes, 30 minutes, or more) to be properly or accurately classified.
In each of the example human-interpretable images 402A-402C, ventilation data of the patient is displayed in a human-interpretable graphical representation. The graphical representation of the pressure data 404 is represented in a first color (red in the depicted example). The graphical representation of the volumetric data 406 is represented in a second color (green in the depicted example). The graphical representation of the flow data 408 is represented in a third color (blue in the depicted example).
In the example human interpretable images 402A-402C, the pressure graphical representation 404, the volume graphical representation 406, and the flow graphical representation 408 are all provided as graphs over time. For example, the pressure graphical representation 404 is provided as a graph of pressure data versus time over the period of time during which ventilation data is captured. In image 402A in fig. 4A, three different breaths corresponding to the peaks of the pressure data can be seen. Image 402B in fig. 4B depicts a greater number of breaths (corresponding to peaks in the pressure signal) within the same time frame, which indicates a higher respiratory rate. Image 402C also depicts different respiratory times and proportions than images 402A-402B. The control parameters of the patient ventilation from which the images 402A-402C are generated are also different for each of the images.
As discussed above, in this example, the period of capturing ventilation data is 24 seconds. During this period (e.g., the 24 seconds), ventilation data is captured or recorded every 20 milliseconds (ms) (other sampling rates are possible). Accordingly, in the depicted example, there are 1200 points in time and thus 1200 data points for each type of ventilation data (e.g., pressure, flow, and volume). Each pressure data point is represented by a single red pixel in the image, or multiple pressure data points are averaged together and appear as a single red pixel. For example, the leftmost pixel may be used for pressure at a first point in time during the time period. Similarly, the volumetric data is presented by green pixels and the traffic data is presented by blue pixels. These data points are plotted against time in the image. The black areas of human interpretable images 402A-402C are pixels that do not represent any type of ventilator data. Thus, the values of all three channels of the pixel at these locations are set to zero—thus providing their black appearance.
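The mapping from sampled waveforms to colored pixels described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the fixed image height, and the per-waveform min/max scaling are assumptions; only the channel assignment (pressure to red, volume to green, flow to blue, unused pixels black) comes from the text.

```python
import numpy as np

def render_waveform_image(pressure, volume, flow, height=64):
    """Plot three equally long ventilation waveforms as colored pixels in an
    RGB array: pressure -> red channel, volume -> green, flow -> blue.
    Pixels that represent no data keep all three channels at zero (black)."""
    width = len(pressure)
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for channel, series in enumerate((pressure, volume, flow)):
        series = np.asarray(series, dtype=float)
        lo, hi = series.min(), series.max()
        span = (hi - lo) if hi > lo else 1.0
        # Scale each sample to a row index; row 0 is the top of the image.
        rows = ((hi - series) / span * (height - 1)).astype(int)
        img[rows, np.arange(width), channel] = 255
    return img
```

For the 24-second capture at 20 ms discussed above, `width` would be 1200 columns, one per time point.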
While the pressure graphical representation 404, the volume graphical representation 406, and the flow graphical representation 408 are plotted as a scatter plot with respect to time, other data visualizations are possible. For example, ventilation data may also be presented as a line graph, spiral graph, heat map, polar graph, bar graph, or other similar data visualization type. Further, as used herein, a "graphical representation" is a non-textual representation of data. Additionally, different types of data graphs may also be presented. For example, a data loop such as a volume-pressure loop or a flow-pressure loop may be provided as part of the human interpretable images 402A-402C.
The colors used to represent each of the graphical representations 404-408 may be selected based on the colors being primary colors in a color scheme for the pixels of the image. For example, in the example human interpretable images 402A-402C, a red-green-blue (RGB) color scheme is utilized for the images. Accordingly, each pixel of the human interpretable image 402A-402C may be defined by three channels, a red channel, a green channel, and a blue channel. Since the graphical representations 404-408 are each represented by a different primary color of the color scheme, the data point of each graphical representation 404-408 may be determined based on an examination of the channel of the pixels where the overlap occurs for the points in time at which the representations overlap. Color schemes other than RGB (e.g., cyan-magenta-yellow-black (CMYK); hue, saturation, brightness (HSL), etc.) may also be utilized and the corresponding channels may then be used in a similar manner, although the channels may correspond to different display colors.
The human interpretable image 402A-402C may also include a scale indicator 410. The scale indicator 410 represents the relative scale of each of the graphical representations of ventilation data provided in the human interpretable image. The scale indicator 410 includes a segment of each type of ventilation data that is provided with a graphical representation in a human-interpretable image. In the depicted example, the scale indicator 410 includes three segments. The three segments include a first segment corresponding to the pressure graphical representation 404, a second segment corresponding to the volume graphical representation 406, and a third segment corresponding to the flow graphical representation 408. Each segment of the scale indicator 410 has a color corresponding to a corresponding graphical representation of ventilation data. In the depicted example, the first segment has a red color because the pressure graphical representation 404 is represented as a red pixel, the second segment has a green color because the volume graphical representation 406 is represented as a green pixel, and the third segment has a blue color because the flow graphical representation 408 is represented as a blue pixel. The height of each segment (e.g., the number of pixels per segment) represents the relative proportions of each of the graphical representations of the data. The segments of the scale indicator may be generated based on a range of values of the ventilation data. Each proportional segment may be based on a maximum and a minimum of the corresponding ventilation data. For example, the height of the red segment may be based on the maximum and minimum values of the pressure data in the pressure graphical representation 404.
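One way to compute those segment heights is sketched below. The proportional-allocation rule is inferred from the description of the scale indicator 410; the function name and the rounding treatment are illustrative assumptions.

```python
def scale_segment_heights(data_ranges, total_height):
    """Split a column of total_height pixels into one segment per waveform,
    with each segment's height proportional to that waveform's (max - min)
    range.  data_ranges is a sequence of (min, max) pairs, e.g. for the
    pressure, volume, and flow data."""
    spans = [hi - lo for lo, hi in data_ranges]
    total = sum(spans) or 1.0
    heights = [int(round(span / total * total_height)) for span in spans]
    heights[-1] += total_height - sum(heights)  # absorb rounding drift
    return heights
```

For example, pressure spanning 0-10 cmH2O, volume 0-20, and flow 0-10 would split a 40-pixel column into segments of 10, 20, and 10 pixels.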
The human interpretable images 402A-402C also include a control parameter graphical representation 412 of the control parameters. These control parameters include parameters such as: respiratory rate setting (e.g., in units of breaths per minute), maximum flow setting (e.g., in units of liters per minute (L/min)), flow sensitivity setting (e.g., in units of L/min), tidal volume setting (e.g., in units of liters), PEEP setting (e.g., in units of cmH2O), oxygen percentage setting (ranging between 21% and 100%), plateau setting (e.g., in seconds), flow pattern setting (e.g., square wave or ramp), ventilation mode (e.g., assist control (A/C), spontaneous (SPONT), synchronized intermittent mandatory ventilation (SIMV), bi-level, CPAP, etc.), mandatory mode type (e.g., Pressure Control (PC), Volume Control (VC)), spontaneous mode type (e.g., Pressure Support (PS), Tube Compensation (TC), Volume Support (VS), Proportional Assist (PA)), exhalation sensitivity setting (e.g., in percent or L/min), support pressure setting (e.g., in cmH2O), predicted body weight (e.g., in kilograms), PCV inspiratory pressure setting (e.g., in cmH2O), an inspiratory time component of an inspiratory-to-expiratory (I:E) ratio, an I:E ratio, an expiratory time component of an I:E ratio, a rise time percentage setting, a PAV+ percentage support setting, a monitored expiratory tidal volume (e.g., in liters), a monitored peak airway pressure (e.g., in cmH2O), a monitored spontaneous percentage inspiration time, a monitored end-expiratory flow (e.g., in L/min), and other types of settings and parameters. In the depicted example, the control parameter graphical representation 412 is presented as a bar graph. Each bar of the bar graph represents a different control parameter, and the height of each bar (e.g., the number of pixels in the bar) represents the value of the corresponding control parameter. A higher bar means a higher value of the particular control parameter.
As an example, the first bar 413 may represent the inspiration rate and the second bar 414 may represent the maximum flow setting. The bar graphs may alternate in color to help visually distinguish different control parameters (e.g., different bars from one another). Although the control parameter graphical representation 412 is presented as a bar graph, other data visualization types are possible, such as pie charts, scatter charts, line charts, and the like.
In an embodiment, the data in the human interpretable images 402A-402C are arranged in a defined layout. The defined layout includes a plurality of portions or regions, each displaying a graphical representation. For example, in human interpretable images 402A-402C, the scale indicator 410 is located at the same position to the left of each image. Similarly, the ventilation data graphical representations 404-408 are displayed in the same middle portion across each of the human-interpretable images 402A-402C. The control parameter graphical representation 412 is displayed in the same bar graph portion to the right of each of the human interpretable images 402A-402C. By using a defined format for the data types in each portion of the image, a machine learning model can be trained to classify the images according to clinical conditions, as discussed further herein. Additionally, through the use of a defined layout, the image may be platform or ventilator independent. For example, the image may be generated from ventilation data produced by any type of ventilator.
Fig. 4D is a copy of fig. 4C, shown in black and white for clarity only. Pressure data 404 is shown in dashed lines representing red (or another designated color channel), volume data 406 is shown in dotted lines representing green (or another designated color channel), and flow data 408 is shown in dashed lines representing blue (or another designated color channel).
Fig. 4E provides an example layout 402E of a human interpretable image. Layout 402E includes a plurality of portions, including a scale portion 420, a ventilation data portion 422, and a control parameter portion 424. For each of the human-interpretable images 402A-402C generated according to the example defined layout, the scale indicator 410 is displayed in the scale portion 420, the ventilation data graphical representations 404-408 are displayed in the ventilation data portion 422, and the control parameter graphical representation 412 is displayed in the control parameter portion 424. By organizing the data in a consistent layout, the ML model can be trained to classify images provided to it in the same layout.
The size of each of these portions may be based on the amount of data to be represented in the human interpretable image. For example, the width of the ventilation data portion 422 may be based on the time period over which the data is acquired and the sampling frequency within that time period. In the example discussed above, the time period is 24 seconds and the sampling frequency is every 20 ms, resulting in 1200 points in time in the dataset. Thus, to represent the 1200 points in time as ventilation data versus time in a scatter plot, the width of the ventilation data portion 422 needs to be at least 1200 pixels. Fewer pixels may be used if the data is averaged or sampled at a different frequency. Similarly, the width of the control parameter portion 424 may be based on the number of different control parameters to be included in the control parameter graphical representation 412. When using a bar graph format (such as the format utilized in the example human interpretable images 402A-402C above), one column of pixels is utilized for each different control parameter. Accordingly, the width of the control parameter portion 424 may be at least as many pixels as there are different control parameters.
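The width arithmetic above can be captured in a few lines. This is a sketch: the function names are invented here, and ceiling division for the averaged case is an assumption.

```python
def ventilation_portion_width(period_s, sample_ms, samples_per_pixel=1):
    """Minimum pixel width of the ventilation data portion: one column per
    sample, or per averaged group of samples.  24 s at 20 ms -> 1200 columns."""
    n_samples = int(period_s * 1000 / sample_ms)
    return -(-n_samples // samples_per_pixel)  # ceiling division

def control_portion_width(n_control_params):
    """One pixel column per control parameter in the bar graph portion."""
    return n_control_params
```

Averaging four samples per pixel column, for instance, would shrink the ventilation portion from 1200 to 300 columns.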
Fig. 4F depicts another example layout 402F of a human interpretable image. Similar to layout 402E depicted in fig. 4E, layout 402F includes multiple portions. However, layout 402F has multiple portions for displaying ventilation data instead of a single ventilation data portion. In the example shown, layout 402F includes a first ventilation data portion 426, a second ventilation data portion 428, and a third ventilation data portion 430. Each of the ventilation data portions 426-430 may be used to display a particular type of ventilation data. For example, the first ventilation data portion 426 may be used to display pressure data, the second ventilation data portion 428 may be used to display volume data, and the third ventilation data portion 430 may be used to display flow data. Each of the ventilation data portions 426-430 may also have a corresponding scale indicator portion 432-436. For example, the first scale indicator portion 432 is configured to display a scale indicator corresponding to the ventilation data displayed in the first ventilation data portion 426. By utilizing a layout having separate ventilation data portions 426-430, such as layout 402F, the different types of ventilation data (e.g., pressure, volume, flow) need not be displayed as separate colors, because the different types of ventilation data do not overlap each other in the resulting image.
The images discussed above have been referred to herein as "human interpretable" images because humans can generally identify trends and relative values of the depicted graphical representations. For example, pressure, volume, and flow data are displayed as a scatter plot of data versus time that a human can interpret or understand. For example, a human may recognize shapes and patterns such as when pressure increases or decreases. For example, the human interpretable image may include a scatter plot, a bar plot, a pie chart, a line plot, a grid, and/or text. However, the present technology may also utilize images that are not interpretable by humans. The human non-interpretable image does not include a graphical representation that can be interpreted by a human. Instead, the ventilation data and control parameters are incorporated as values into the channels of the pixels themselves, as discussed further below. Human non-interpretable images provide additional advantages such as storing and transmitting data in a manner that can maintain patient privacy. For example, since a human non-interpretable image cannot be interpreted by viewing the image, additional privacy is created by storing ventilation data in this format.
Fig. 5A depicts an example human non-interpretable image 500. The example human non-interpretable image 500 is 61×61 pixels. Fig. 5A depicts the example human non-interpretable image 500 in black and white for clarity purposes, while fig. 5B depicts an example of an actual human non-interpretable image 500 in color. In fig. 5B, the human-non-interpretable image 500 is shown at full size at the top of fig. 5B, and an enlarged version is shown below for illustration and discussion. Pixels in image 500 are arranged in pixel rows 504 and pixel columns 506. Each of the pixels is defined by color channels. In the depicted example, the human-non-interpretable image 500 is an RGB image, and each pixel 502 is defined by a red channel, a green channel, and a blue channel. The pixels may also be defined by a transparency (alpha) channel, which is not utilized in this example. Thus, each pixel may be defined by an array of three channel values such as {Color1, Color2, Color3}. These three values define the color of a single pixel. For the human non-interpretable image 500, ventilation data and control parameters are stored as channel values of the pixels 502.
The location of each pixel may be defined by its row (i) and column (j), and each pixel may be represented as px(i,j). For example, pixel 502 in the upper left-most corner of image 500 may be represented as px(1,1) because the pixel is in the first row and in the first column. Similarly, pixel 508 in the first row and in the second column may be represented as px(1,2), and pixel 510 in the second row and in the first column may be represented as px(2,1).
The location of a pixel defines the type of data represented by the pixel and the point in time at which the data was captured (if applicable). For example, pixel px(1,1) 502 in the first row and first column of human-non-interpretable image 500 may correspond to ventilation data, such as pressure, volume, or flow, recorded at a first point in time (e.g., time = t1) in a time period. The second pixel px(1,2) 508 in the first row 504 may represent ventilation data recorded at a second point in time (e.g., time = t2 = t1 + 20 ms) in the time period. Subsequent pixels may continue to represent ventilation data at subsequent points in time. In an example of sampling data for 1200 points in time, the first 1200 pixels may represent pressure data at each of the 1200 points in time. The first 1200 pixels may span multiple rows of pixels. For example, the first 20 rows may represent pressure data. The next 1200 pixels may represent the volume data at each of the 1200 points in time, and the next 1200 pixels after those may represent the flow data at each of the 1200 points in time. Thus, unlike the human-interpretable image 400, which includes a large number of blank or black pixels, each pixel (or a majority of pixels) in the human-non-interpretable image 500 represents data. Accordingly, the human-non-interpretable image 500 may be significantly smaller (e.g., fewer pixels) than the human-interpretable image 400 while still representing the same amount of data. Thus, the total size of training data required to train the ML model can be significantly smaller, which saves memory and processing resources. The use of smaller images may also allow faster processing and training times for the ML model, so that classification of clinical conditions may be provided more quickly.
Since many image formats only support integer values for the channel values of pixels 502, ventilation data, typically provided in a floating-point format, may need to be converted to an integer format for storage as one or more channel values. Conversion to integer format may be accomplished by a variety of techniques. One technique is to use modulo arithmetic. The modulo operation returns the remainder of a division in integer format. The quotient of the division operation may be stored in a first channel, and the remainder (from the modulo operation) may be stored in a second channel. The sign of the data may then be stored in a third channel.
As an example, the pressure data at the first point in time may be represented as p0. The channel values of the pixel representing p0 may be {Color1, Color2, Color3} = {p0 / 256, p0 mod 256, sign}. In other words, the first channel value (Color1) is equal to the integer quotient of p0 divided by 256. The second channel value (Color2) is equal to the modulus (e.g., the remainder) of p0 divided by 256. The third channel value (Color3) represents the sign of p0. Thus, the three channel values represent the pressure data at the first point in time, and the display color of the pixel 502 thus corresponds to the pressure data at the first point in time. In the above operation, the divisor is set to 256, but other divisor values may be utilized. Additionally, the pressure value (or corresponding ventilation data value) may first be scaled up or down such that the quotient of the division operation is non-zero for at least some data points in the data set. For example, the pressure value may be multiplied by a scalar, such as 10 or 100, before performing the quotient or modulo operation. Thus, the location of the pixel defines the ventilation data type and/or point in time, and the color of the pixel defines the value of the ventilation data.
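The quotient/remainder/sign scheme can be sketched as follows. The scale factor of 100 and the round-trip decoder are illustrative choices (the text only requires scaling so that the quotient is non-zero for at least some data points).

```python
def encode_channels(value, scale=100, divisor=256):
    """Encode one signed floating-point sample into three 8-bit channel
    values: (quotient, remainder, sign).  With scale=100 and divisor=256,
    magnitudes up to about 655 fit in the 0-255 quotient channel."""
    sign = 1 if value < 0 else 0
    magnitude = int(round(abs(value) * scale))
    return (magnitude // divisor, magnitude % divisor, sign)

def decode_channels(channels, scale=100, divisor=256):
    """Recover the original sample from (quotient, remainder, sign)."""
    quotient, remainder, sign = channels
    magnitude = quotient * divisor + remainder
    return (-magnitude if sign else magnitude) / scale
```

For example, a pressure of 12.34 scaled by 100 becomes 1234, which encodes as quotient 4, remainder 210, sign 0; decoding reverses the steps exactly (up to the chosen scale's precision).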
The above-described operations may be performed for each of the pressure data points in the data set to define a pixel for each of the pressure data points. For each of the volume data point and the flow data point, a similar operation may then be performed for the volume data and the flow data defining the pixel.
Another subset of pixels may correspond to the control parameters. For example, the last row (or the last few rows) of pixels may correspond to the control parameters. One pixel may correspond to one control parameter. For example, 25 pixels may be used to represent 25 different control parameters. The location or positioning of a pixel in the human non-interpretable image 500 indicates the control parameter represented by the pixel. For example, pixel px(61,1) may represent the respiratory rate setting, and pixel px(61,2) may represent the maximum flow setting.
Similar to the way ventilation data is stored as channel values, the values of the control parameters may be stored as channel values, as discussed above. For example, a first channel value may represent a sign of a control parameter value, a second channel value may represent a quotient of the control parameter value from a division operation, and a third channel value may represent a modulus/remainder from the division operation. In some examples, depending on the value of a particular control parameter, the value may be first scaled up (or down) before the division operation. Thus, similar to ventilation data, the color of the respective pixel represents the value of the respective control parameter, and the position of the pixel represents the represented control parameter.
In an embodiment, a set of images 500 having the same defined layout or mapping of pixels 502 is generated and used to train and utilize the ML model. For example, the defined layout or mapping may define what data is to be stored in each pixel. As an example, the layout may set a first pixel to represent pressure data recorded at a first time, a second pixel to represent pressure data at a second time, and so on. The number of pixels required to represent each type of ventilation data is based on the time period during which the data is acquired and the rate at which the data is sampled during that time period. For example, 3600 pixels may be utilized for three respiratory parameters (e.g., pressure, flow, and volume) and 1200 points in time. The defined layout of the human non-interpretable image 500 may include an array or matrix mapping each pixel px(i,j) to a corresponding ventilation data type or control parameter. Thus, each individual pixel in each generated human-non-interpretable image 500 may represent the same type of data. Accordingly, the layout need not have any visual significance to humans. The layout may be arbitrary as long as it is consistent for training and use with an ML model that can identify patterns within the image 500 that are not interpretable by humans. Additionally, similar to the human-interpretable image, the human-non-interpretable image 500 may be platform or ventilator independent through the use of a defined layout. For example, the human-non-interpretable image 500 may be generated from ventilation data produced by any type of ventilator.
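A defined layout of this kind can be represented literally as the pixel-to-data matrix the text describes, for example as a dictionary from pixel coordinates to a (data type, time index) pair. The row-major ordering below and the 61-pixel width are assumptions drawn from the 61×61 example above.

```python
def build_pixel_layout(width=61, n_samples=1200,
                       signals=("pressure", "volume", "flow")):
    """Map each pixel (row, col) to the ventilation data point it stores:
    the first n_samples pixels in row-major order hold the first signal,
    the next n_samples the second signal, and so on."""
    layout = {}
    for s, name in enumerate(signals):
        for t in range(n_samples):
            flat = s * n_samples + t
            layout[(flat // width, flat % width)] = (name, t)
    return layout
```

With this mapping, pixel px(1,1) (index (0, 0) here) holds the first pressure sample, and the 1201st pixel begins the volume data, matching the 3600-pixel figure for three signals at 1200 time points.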
The collection of these human-non-interpretable images 500 corresponding to the known clinical condition of the patient can then be used to train the ML model. New ventilation data and control parameters are then received from the ventilated patient and the data is converted into a human non-interpretable image 500 having the same layout. The newly generated human-non-interpretable image 500 may then be provided as an input to a trained ML model that can classify the new human-non-interpretable image 500 as corresponding to a particular clinical condition.
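The train-then-classify loop can be sketched with a toy nearest-neighbor stand-in for the trained model. The patent leaves the actual model architecture open; `NearestImageModel` and the example labels are purely illustrative.

```python
import numpy as np

class NearestImageModel:
    """Toy classifier: labels a new image with the condition of the closest
    training image (pixelwise L2 distance).  A stand-in for the trained ML
    model, not the patent's actual architecture."""
    def fit(self, images, labels):
        self.images = [np.asarray(im, dtype=float) for im in images]
        self.labels = list(labels)
        return self

    def predict(self, image):
        image = np.asarray(image, dtype=float)
        dists = [np.linalg.norm(image - train) for train in self.images]
        return self.labels[int(np.argmin(dists))]

# Training images would be generated from patients with known clinical
# conditions; these two constant images are placeholders.
normal = np.zeros((4, 4, 3))
asynchrony = np.ones((4, 4, 3)) * 255
model = NearestImageModel().fit([normal, asynchrony], ["normal", "asynchrony"])
```

A newly generated human-non-interpretable image in the same layout would then be passed to `model.predict(...)` to obtain a predicted condition.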
Fig. 6A-6B depict another example human non-interpretable image 600. Fig. 6A depicts the example human non-interpretable image 600 in black and white for clarity purposes, while fig. 6B depicts an example of an actual human non-interpretable image 600 in color. In fig. 6B, the human-non-interpretable image 600 is shown at full size at the top of fig. 6B, and an enlarged version is shown below for illustration and discussion. The human-non-interpretable image 600 is similar to the human-non-interpretable image 500 in figs. 5A and 5B, but the human-non-interpretable image 600 has a different image format and/or layout that allows data formatted as floating-point values to be stored as channel values. One example of such an image format is the float4 image format. Since this image format allows decimal values to be stored as channel values, additional information can be stored in each pixel via its channel values. Additionally, the image format may provide more usable channels defining each pixel 602. For example, the image 600 may be saved using an RGBA format with red, green, blue, and alpha channels. The image format may be based on, for example, the saveImageRGBA method of the Nvidia platform. Because of this increased capacity for storing data as channel values, the human-non-interpretable image 600 may be substantially smaller than the human-non-interpretable image 500 in figs. 5A-5B. For example, the example human non-interpretable image 600 depicted in figs. 6A-6B is 31×31 pixels, but stores as much information as the example human-non-interpretable image 500 in figs. 5A-5B (which is 61×61 pixels). Pixels in image 600 may be addressed (e.g., by rows and columns) in a similar manner as image 500. For example, the location of each pixel may be defined by its row (i) and column (j), and each pixel may be represented as px(i,j).
Pixel 602 in the upper left-most corner of image 600 may be represented as px(1,1) because the pixel is in the first row and in the first column. Similarly, pixel 608 in the first row and in the second column may be represented as px(1,2), and pixel 610 in the second row and in the first column may be represented as px(2,1).
For image 600, the data need not be converted to an integer format before being stored. Multiple data points are stored in the same pixel as different channel values. For example, the array of channel values for pixel 602 may be {Color1, Color2, Color3, Color4}, where Color1 may correspond to red, Color2 may correspond to green, Color3 may correspond to blue, and Color4 may correspond to transparency. Together, these four values define the color and/or display of the single pixel. Other color schemes are also possible. Each channel value may represent a different data point, such as ventilation data at a different point in time. Accordingly, a plurality of different types of ventilation data, or a plurality of points in time of one type of ventilation data, may be stored within a single pixel. Thus, in some examples, data from three separate waveforms (pressure, volume, and flow) in the human-interpretable image 402 may be stored in a single pixel in the human-non-interpretable image 600. Similarly, the data stored in four pixels of the human-non-interpretable image 500 may be stored in a single pixel of the human-non-interpretable image 600.
As an example, the pressure data at the first, second, third, and fourth points in time may be represented by p0, p1, p2, and p3. Each of these pressure data points may be stored as a different channel value. For example, the array of channel values for the first pixel may be: {Color1, Color2, Color3, Color4} = {p0, p1, p2, p3}. Thus, the first pixel 602 may represent the first four pressure data points. The pixel 602 of the human non-interpretable image 600 is capable of storing up to four times as many data points as the pixel 502 of the human non-interpretable image 500. Thus, to represent 1200 data points, only 300 pixels may be used. As another example, different types of ventilation data at a single point in time may be stored in a pixel. For example, the array of channel values for the first pixel may be: {Color1, Color2, Color3} = {p0, v0, q0}, where v0 is the volume at the first point in time and q0 is the flow at the first point in time. In such examples, the transparency channel (Color4) may not be utilized, or the transparency channel may be utilized to store yet another type of ventilation data.
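Packing four samples per pixel can be sketched as below. This is a pure-Python sketch of the grouping only; zero-padding the final pixel when the sample count is not a multiple of four is an assumption.

```python
def pack_float4(samples):
    """Group a flat sequence of floating-point samples into 'pixels' of four
    channel values each, as in {Color1..Color4} = {p0, p1, p2, p3}.  The last
    pixel is zero-padded if the sample count is not a multiple of four."""
    pixels = []
    for i in range(0, len(samples), 4):
        chunk = list(samples[i:i + 4])
        chunk += [0.0] * (4 - len(chunk))
        pixels.append(tuple(chunk))
    return pixels
```

Applied to the 1200-sample pressure waveform from the earlier example, this yields exactly the 300 pixels mentioned above.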
Notably, unlike the human non-interpretable image 500 in fig. 5, in the example human non-interpretable image 600 a separate channel may not be utilized to store the sign of the ventilation data. To adjust for the lack of an explicit sign designation, the ventilation data may be modified such that all ventilation data are positive values. The modification may include shifting the ventilation data by a defined amount. For example, the pressure data, volume data, and/or flow data may be increased by a constant amount that results in the corresponding data set being above zero (without any negative values). The ventilation data may also be normalized (e.g., to a maximum value of 1) before being stored as a channel value.
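The shift-and-normalize step might look like the sketch below. Shifting by the series minimum is one possible choice of "defined amount"; the text only requires that the result contain no negative values.

```python
def make_non_negative(series, normalize=False):
    """Shift a waveform so no sample is negative (compensating for the
    missing sign channel); optionally rescale to a maximum value of 1."""
    lo = min(series)
    shifted = [x - lo for x in series] if lo < 0 else list(series)
    if normalize:
        hi = max(shifted) or 1.0
        shifted = [x / hi for x in shifted]
    return shifted
```

A flow waveform spanning -2 to 2 L/min, for example, becomes 0 to 4, or 0 to 1 after normalization.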
The control parameters may be stored similarly, as the values of multiple control parameters may be stored as different channel values of a single pixel. As an example, the values of four different control parameters may be represented as c0, c1, c2, and c3. Each of these values may be stored as a different channel value. For example, the array of channel values for the pixel corresponding to the control parameters may be: {Color1, Color2, Color3, Color4} = {c0, c1, c2, c3}. Thus, a single pixel is able to store the values of four different control parameters. Accordingly, in an example utilizing 20 control parameters, five pixels may be used to represent the 20 control parameters at a particular point in time.
The human-non-interpretable image 600 may also have a defined layout or mapping of pixels such that the human-non-interpretable image may be consistently generated and used to train and utilize the ML model. For example, the defined layout or mapping may define what data to store in each channel of each pixel. As an example, the layout may set the first 300 pixels to store pressure data in order, the second 300 pixels to store volume data in order, and the third 300 pixels to store flow data in order. The number of pixels required to represent each type of ventilation data is based on the time period during which the data is acquired and the rate at which the data is sampled during that time period. The defined layout may also set which pixels store which control parameter values. The format of the layout may be similar to the layout discussed above with respect to the human-non-interpretable image 500. For example, the defined layout of the human non-interpretable image 600 may include an array or matrix mapping each pixel px(i,j) to a corresponding ventilation data type or control parameter. As described above, new ventilation data from the patient is converted to this format and passed to the trained ML model to classify the clinical condition of the patient.
The use of human-non-interpretable images, such as human-non-interpretable image 500 or human-non-interpretable image 600, for the ML model also runs counter to accepted wisdom in machine learning and computing. For example, conventional wisdom may suggest that training an ML model directly on raw ventilation data would be more efficient, because the raw ventilation data may include fewer bytes and have a smaller overall size. Likewise, prior to the present disclosure, converting ventilation data into images that are not visually meaningful to humans before training and using the ML model would have been considered disadvantageous, as no pattern would be discernible to humans.
Fig. 7 depicts an example method 700 for predicting a clinical condition of a ventilated patient. At operation 702, ventilation data and/or control parameters are acquired or received during ventilation of a patient over a period of time. The ventilation data may include, for example, pressure data, flow data, and/or volume data, among other types of ventilation data. Additional patient data may also be received. The additional patient data may include, for example, CO2 measurements of air exhaled from the patient, fraction of inspired oxygen (FiO2), the patient's heart rate, the patient's blood pressure, the patient's blood oxygen saturation (SpO2), and the like. Ventilation data may be acquired by the ventilator, such as from sensors of the ventilator. For example, a pressure sensor may measure pressure values at one or more locations of the ventilator or patient circuit. The pressure data may include inhalation pressure, exhalation pressure, Y-piece pressure, and/or other pressure values. A flow sensor may also be used to measure flow values at one or more locations of the ventilator or patient circuit. The flow data may include inhalation flow, exhalation flow, net flow, or other flow values. The volume data may be calculated from the flow data (e.g., as an integral of flow) rather than measured directly. Ventilation data may be sampled or recorded at a sampling rate (e.g., every 5 ms, 20 ms, or another frequency). The control parameters may be captured by recording them at the same or a different sampling rate.
Ventilation data and/or control parameters may also be obtained from the ventilator by a remote computing device (e.g., a server). The ventilator may transmit ventilation data and/or control parameters to a server, where the data is received by the server. The ventilator may send ventilation data and/or control parameters continuously as they are acquired. Alternatively, the ventilator may send ventilation data and/or control parameters in batches according to the time period. For example, the data may be separated into batches based on the time period. As an example, if the time period is 24 seconds and the sampling rate is once every 20 ms, a data batch of 1200 time points is generated. Accordingly, ventilation data and/or control parameters may be sent or stored as one batch per time period.
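The batch arithmetic in the example above can be sketched as follows (the period and sampling interval are the values from the text; the batching helper is illustrative):

```python
# A 24-second period sampled every 20 ms yields 1200 time points per batch.
period_s = 24
sample_interval_ms = 20
points_per_batch = period_s * 1000 // sample_interval_ms  # 1200

def split_into_batches(samples, batch_size):
    """Split a stream of samples into fixed-size batches; a partial tail is dropped."""
    return [samples[i:i + batch_size]
            for i in range(0, len(samples) - batch_size + 1, batch_size)]
```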
At operation 704, an image is generated based on the ventilation data and/or control parameters acquired in operation 702. Images may be generated for batches of ventilation data and/or control parameters. For example, one image may be generated for all ventilation data and/or all control parameters acquired during the time period. The image may also be generated based on additional patient data, such as CO2 measurements of air exhaled from the patient, fraction of inspired oxygen (FiO2), the patient's heart rate, the patient's blood pressure, the patient's blood oxygen saturation (SpO2), and the like. In examples where the server obtains the ventilation data and/or control parameters in operation 702, either the ventilator itself or the server may generate the image.
The image generated in operation 704 may be a human-interpretable image, such as human-interpretable images 402A-402C depicted in figs. 4A-4C, or a human-non-interpretable image, such as human-non-interpretable image 500 in figs. 5A-5B or human-non-interpretable image 600 in figs. 6A-6B. Generating the image may include converting the ventilation data and/or control parameters into an image format according to a defined layout of the image. For example, generating a human-interpretable image may include generating graphical representations of the ventilation data, such as a pressure graphical representation, a volume graphical representation, and a flow graphical representation. Generating the human-interpretable image may further include generating a scale indicator and a control parameter graphical representation. These graphical representations may then be incorporated into the image in the portions defined by the defined layout. For example, the graphical representation of ventilation data may be incorporated into the ventilation data portion, the scale indicator may be incorporated into the scale portion, and the control parameter graphical representation may be incorporated into the control parameter portion. A graphical representation of the additional patient data may also be generated in the image. For example, where additional patient data (e.g., heart rate) changes over time, the additional patient data may be plotted versus time, similar to the ventilation data. In other examples, an average or single value of the patient data may be utilized and presented as a bar in a bar graph, or in a manner similar to the graphical representation of the control parameters.
Generating the human-non-interpretable image may include storing the ventilation data and/or control parameters as channel values of pixels in the human-non-interpretable image, as discussed above with respect to figs. 5A-5B and figs. 6A-6B. Where the image format of the human-non-interpretable image requires integer channel values, the ventilation data and/or control parameters may be converted into integer format through division and modulo operations. Accordingly, a first channel value of a pixel may represent a first portion of a data point value at a point in time, and a second channel value of the pixel may represent a second portion of the data point value. A third channel of the pixel may represent the sign of the data point value. In other examples, where the human-non-interpretable image has a format that supports floating-point values, multiple data point values may be stored in the channel values of a pixel, as described above with respect to figs. 6A-6B. In some examples, the data point values may be shifted and normalized such that no negative values exist. The ventilation data and/or control parameters may be stored in the channel values of the pixels according to a defined pixel layout of the human-non-interpretable image. In some examples, multiple images are generated for a single data batch at operation 704. For example, a human-interpretable image and one or more human-non-interpretable images may be generated for the data batch.
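The division-and-modulo conversion described above might be sketched as follows, assuming 8-bit integer channels and a hypothetical fixed scale factor of 100 (the disclosure does not specify a scale):

```python
def encode_value(value, scale=100):
    """Encode one signed data point into three 8-bit channel values:
    high-order part, low-order part, and sign flag."""
    sign = 255 if value < 0 else 0
    scaled = int(round(abs(value) * scale))
    high, low = divmod(scaled, 256)  # division and modulo split the integer
    return high, low, sign

def decode_value(high, low, sign, scale=100):
    """Reverse the encoding to recover the original data point."""
    value = (high * 256 + low) / scale
    return -value if sign else value
```

Storing the sign in its own channel avoids negative channel values, consistent with the normalization noted above.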
In other examples, the image generated at operation 704 may be a screenshot of a ventilator display, such as the GUI depicted in fig. 2. For example, the ventilator display may include waveforms of ventilation data versus time and indicators of control parameters. Thus, a screenshot of the ventilator display may provide an image representing the ventilation data and/or control parameters over the period of time. The screenshot may be altered or cropped to remove some information from the screen, such as graphics, text, or other display components other than the waveforms and/or indicators of control parameters. As an example, in the example GUI 204 in fig. 2, the screenshot may be cropped such that only certain portions remain, such as one or more of the graphs 206-210 and/or one or more of the loops 212-214. In some examples, the patient data segment 216 and/or one of the first display portion 218 and the second display portion 220 may also be retained.
In some examples, the ventilator may enter a predictive mode in which ventilation data waveforms and control parameters are displayed on the screen in a defined manner (such as according to a defined layout). The ventilator may enter the prediction mode based on a manual input indicating the prediction mode. In other examples, the ventilator may automatically enter the prediction mode according to a schedule or time frequency.
At operation 706, the image generated at operation 704 is provided to a trained machine learning (ML) model. The ML model is trained based on prior images having the same defined layout as the image generated in operation 704. Training the ML model may include generating or acquiring a large dataset of batches of ventilation data and/or control parameters for patients having known conditions. The ventilation data and/or control parameters may be generated from an actual patient or from a simulated patient or lung (such as the IngMar ASL breathing simulator). When a breathing simulator is utilized, a condition may be programmed into the breathing simulator, and batches of ventilation data and/or control parameters may be recorded while the breathing simulator is in a stable, non-alarm state. Whether the training data batches are generated from a breathing simulator or from ventilation of actual patients, each of the data batches has a known associated condition. Thus, the known data batches can be converted into images according to the defined layout, and these images can be labeled according to the known conditions. The ML model may be trained based on the labeled images through a supervised training approach. Additionally, the trained ML model may be stored on a server and/or the ventilator and used to generate predictions of clinical conditions upon receiving images having the same defined layout as the images used to train the ML model.
The ML model may be a neural network, such as a convolutional neural network, a deep neural network, a recurrent neural network, or another type of neural network. Convolutional neural networks may be preferred in some embodiments due to their strengths in image classification. Other types of ML models, such as Hidden Markov Models (HMMs), Support Vector Machines (SVMs), k-nearest neighbors, etc., may also be utilized. Because the ventilation data and/or control parameters are converted to images, the present technology is able to leverage the significant advances, development, and investment in image recognition and classification techniques embodied in existing ML models. Examples of such models include GoogLeNet, AlexNet, VGG, Inception, ResNet, SqueezeNet, and the like. While many of these models have focused on areas such as autonomous driving, the inventors have recognized that once ventilation data is encoded into an image, these models can be applied to that data, as discussed above.
At operation 708, a predicted condition of the patient is generated based on the output from the trained ML model. The predicted condition may be generated by the ventilator or by the server, depending on where the trained ML model is stored. The output from the ML model may include a classification of the image provided as input. The classification itself may be the predicted condition. In other examples, the classification may be used to determine the condition of the patient. The condition of the patient may include conditions such as asthma, acute respiratory distress syndrome (ARDS), emphysema, chronic obstructive pulmonary disease (COPD), alveolar overdistension, patient attempts to breathe during a mandatory ventilation mode, readiness for weaning, and the like. Ventilator conditions, or conditions based on interactions between the patient and the ventilator, may also be classified. For example, an occurrence of two or more breaths may be identified and classified by the ML model. Such an identification may indicate that the flow sensitivity setting is too low.
The output of the ML model may also include confidence values for the provided classifications. The confidence value may be a percentage of confidence that the classification is correct. Thus, the confidence value may also be associated with a prediction of the condition of the patient. In some examples, if the confidence value is below a particular threshold (e.g., 90%), the classification of the condition may not be displayed, or the format in which the classification is displayed may be adjusted to highlight low confidence in the classification.
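A sketch of the thresholding behavior described above; the 90% threshold comes from the text, while the message format and function name are illustrative:

```python
def display_prediction(label, confidence, threshold=0.90):
    """Return the string to display, or None to suppress a low-confidence classification."""
    if confidence < threshold:
        return None  # alternatively, return a visually de-emphasized variant
    return f"Predicted condition: {label} ({confidence:.0%})"
```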
At operation 710, a predicted condition of the patient may be displayed or provided for display. For example, if the server generates the predicted condition in operation 708, the predicted condition may be transmitted to the ventilator for display on a display screen communicatively coupled to the ventilator. The ventilator may then display the predicted condition on a display screen. The display screen of the server may also or alternatively display the predicted condition. In an example of a ventilator generating a predicted condition, the ventilator may display the predicted condition on a display screen of the ventilator. Displaying the predicted condition may also include displaying a confidence score associated with the predicted condition. In other examples, the predicted condition may alternatively or additionally be displayed on another remote computing device. For example, the predicted condition may be displayed at a nurse station or other central monitoring location within a hospital or medical facility.
Fig. 8 depicts a method 800 for training an ML model for classifying a clinical condition of a patient. At operation 802, a known dataset is generated. The known dataset is a dataset of batches of ventilation data and/or control parameters for patients having known conditions. The ventilation data and/or control parameters may be generated from an actual patient or from a simulated patient or lung (such as the IngMar ASL breathing simulator). When a breathing simulator is utilized, a condition may be programmed into the breathing simulator, and batches of ventilation data and/or control parameters may be recorded while the breathing simulator is in a stable, non-alarm state. Whether the training data batches are generated from a breathing simulator or from ventilation of actual patients, each of the data batches has a known associated condition. Thus, the known data batches can be converted into images according to the defined layout, and these images can be labeled according to the known conditions. Depending on which type of image the ML model is to be used with, the generated images may be human-non-interpretable images or human-interpretable images. Each of the generated images may be stored with, or correlated to, its corresponding known condition.
At operation 804, the known data set is divided into a training set and a testing set. For example, 10% of the images in the known dataset may be used as the test set and the remaining 90% of the images may be used as the training set.
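The 90/10 split at operation 804 could be sketched as follows (the shuffle seed and function name are illustrative details):

```python
import random

def split_dataset(images, test_fraction=0.10, seed=42):
    """Randomly partition labeled images into training and test sets."""
    indices = list(range(len(images)))
    random.Random(seed).shuffle(indices)
    n_test = max(1, int(len(images) * test_fraction))
    test_set = [images[i] for i in indices[:n_test]]
    train_set = [images[i] for i in indices[n_test:]]
    return train_set, test_set
```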
At operation 806, the ML model is trained using the training set. Training may be performed according to a supervised training algorithm. Providing the images in the training set as inputs and the corresponding conditions as known outputs allows training of the ML model to be performed. Training the ML model may include modifying one or more variables or parameters based on the provided inputs (images) and known outputs (corresponding conditions). As an example, coefficients of the neural network may be adjusted to minimize a predefined cost function that evaluates the difference between the output of the neural network and the known condition.
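As a toy illustration of the coefficient adjustment described above (a single gradient step on a linear model with a squared-error cost — far simpler than the neural networks the disclosure contemplates):

```python
import numpy as np

def train_step(w, x, y_true, lr=0.1):
    """One supervised update: move weights w to reduce the cost (w.x - y_true)^2."""
    y_pred = float(np.dot(w, x))
    grad = 2.0 * (y_pred - y_true) * x  # gradient of the squared-error cost
    return w - lr * grad

w0 = np.zeros(3)
x = np.array([1.0, 0.5, -0.5])
w1 = train_step(w0, x, y_true=1.0)  # cost decreases after the step
```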
After training the ML model, the ML model is tested at operation 808. The ML model may be tested using the test set created in operation 804 to determine the accuracy and performance of the ML model. At operation 810, a determination is made as to whether the performance of the ML model is acceptable, based on its performance in test operation 808. This determination may be made based on differences between the output of the ML model and the test dataset. If the performance is acceptable, or within a predetermined tolerance, the trained ML model is stored at operation 812 for later use with live, real-time images generated from ventilation data of ventilated patients. If the performance is unacceptable, or outside of the predetermined tolerance, the method 800 flows back to operation 802, where additional data is generated to further train the ML model. The method 800 continues and repeats until the ML model generates acceptable results within the predetermined tolerance.
In addition to predicting a condition, recommended ventilation settings or patterns may be provided and/or displayed. The recommended ventilation settings or patterns may be based on previous batches of data that were used to train the ML model. The recommended ventilation settings or patterns may also be based on best practices associated with the predicted condition. For example, for patients predicted to have a clinical condition of ARDS, ventilation settings (e.g., increased PEEP or decreased tidal volume settings) and/or specific ventilation patterns may be recommended, as medical literature and studies indicate that specific ventilation patterns are well suited for patients with ARDS conditions. As another example, if a condition of two or more breaths is detected, the recommended setting may include changing the flow sensitivity setting. A prompt to activate the recommended ventilation setting or mode may also be displayed, and after the prompt is selected, the ventilation setting or mode may be activated or applied. In other examples, recommended ventilation settings may be automatically activated or applied after the predicted condition is generated.
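The condition-to-recommendation mapping might be sketched as a simple lookup; the entries below paraphrase the examples in the text and are not clinical guidance:

```python
# Hypothetical mapping from predicted condition to suggested setting changes.
RECOMMENDATIONS = {
    "ARDS": {"PEEP": "increase", "tidal volume": "decrease"},
    "two or more breaths": {"flow sensitivity": "adjust"},
}

def recommend_settings(predicted_condition):
    """Return suggested setting changes for a condition, or an empty dict if none."""
    return RECOMMENDATIONS.get(predicted_condition, {})
```

In a fuller implementation, the returned suggestions would drive the activation prompt described above rather than being applied silently.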
In some examples, a confirmation/rejection prompt may also be displayed to a doctor or medical professional along with the predicted condition. The confirmation/rejection prompt allows the medical professional to confirm (agree with) or reject (disagree with) the predicted condition. The input from the confirmation/rejection prompt may then be used as positive or negative reinforcement to further train the ML model.
Those skilled in the art will recognize that the methods and systems of the present disclosure may be implemented in many ways and as such should not be limited by the foregoing aspects and examples. In other words, functional elements performed by a single component or by multiple components in various combinations of hardware and software or firmware, as well as individual functions, may be distributed among software applications at the client or server level, or both. In this regard, any number of the features of the different aspects described herein may be combined into a single or multiple aspects, and alternative aspects are possible with fewer or more than all of the features described herein.
The functions may also be distributed, in whole or in part, among the various components in a manner now known or yet to be known. Thus, numerous software/hardware/firmware combinations are possible in implementing the functions, features, interfaces, and preferences described herein. Furthermore, the scope of the present disclosure encompasses conventionally known ways of carrying out the described features, functions, and interfaces, as well as such variations and modifications to the hardware, software, or firmware components described herein as will be understood now and later by those skilled in the art. Additionally, aspects of the present disclosure are described above with reference to block diagrams and/or operational illustrations of systems and methods according to aspects of the disclosure. The functions, acts, and/or operations noted in the blocks may occur out of the order noted in the corresponding flowcharts. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality and implementation involved.
Further, as used herein and in the claims, the phrase "at least one of element A, element B, or element C" is intended to convey any one of: element A, element B, element C, elements A and B, elements A and C, elements B and C, and elements A, B, and C. In addition, those skilled in the art will understand the degree of variation conveyed by terms such as "about" or "substantially" in view of the measurement techniques used herein. To the extent such terms may not be clearly defined or understood by those skilled in the art, the term "about" shall mean plus or minus ten percent.
Many other variations are possible, which will readily suggest themselves to those skilled in the art and which are encompassed within the spirit of the present disclosure and defined by the appended claims. Although various aspects have been described for purposes of this disclosure, various changes and modifications may be made that are well within the scope of the disclosure.
Claims (20)
1. A computer-implemented method for identifying a condition from ventilation data, the method comprising:
acquiring ventilation data of a patient ventilated during a period of time;
converting the ventilation data into a human-non-interpretable image having a plurality of pixels, wherein each of the pixels of the human-non-interpretable image is defined by at least a first channel, and a first channel value of a first pixel represents at least a portion of the value of the ventilation data;
providing the human-non-interpretable image as an input to a trained machine learning model; and
based on output from the trained machine learning model, a predicted condition of the patient is generated.
2. The method of claim 1, wherein the ventilation data comprises at least one of pressure data, flow data, or volume data.
3. The method of claim 1, wherein the first pixel is further defined by a second channel value and a third channel value.
4. The method of claim 3, wherein the second channel value represents another portion of the value of the ventilation data, and the third channel value represents a sign of the ventilation data.
5. The method of claim 3, wherein:
the value of the ventilation data represented by the first channel is a first value of ventilation data at a first point in time;
the second channel value represents a second value of ventilation data at a second point in time; and
The third channel value represents a third value of ventilation data at a third point in time.
6. The method of claim 5, wherein the first pixel is further defined by a fourth channel value, and the fourth channel value represents a fourth value of ventilation data at a fourth point in time.
7. The method of claim 1, further comprising acquiring control parameters during the time period, wherein the control parameters are also converted into the human-non-interpretable image.
8. The method of claim 1, wherein the human-non-interpretable image has a size of less than 4000 pixels.
9. The method of claim 1, wherein the human-non-interpretable image has a size of less than 1600 pixels.
10. The method of claim 9, wherein the ventilation data comprises pressure data sampled for at least 1000 points in time and flow data sampled for at least 1000 points in time.
11. A computer-implemented method for identifying a condition from ventilation data, the method comprising:
acquiring ventilation data and control parameters of ventilation of a patient during a period of time;
converting the ventilation data into a human-interpretable image having a defined layout, the defined layout comprising a plurality of portions including a first portion for a graphical representation of the ventilation data and a second portion for a graphical representation of the control parameters;
providing the human-interpretable image as an input to a trained machine learning model, wherein the trained machine learning model is trained based on images having the defined layout; and
based on output from the trained machine learning model, a predicted condition of the patient is generated.
12. The computer-implemented method of claim 11, wherein the ventilation data comprises at least one of pressure data, flow data, or volume data.
13. The computer-implemented method of claim 12, wherein the first portion comprises a graphical representation of the pressure data in a first color and a graphical representation of flow in a second color.
14. The computer-implemented method of claim 13, wherein the graphical representation of pressure data is a graph of pressure data versus time over the period of time.
15. The computer-implemented method of claim 11, wherein the graphical representation of the control parameter is represented as bars, wherein a height of each bar represents a value of the control parameter.
16. The computer-implemented method of claim 11, wherein the defined layout includes a third portion for a scale of the ventilation data.
17. A computer-implemented method for identifying a condition from ventilation data, the method comprising:
acquiring ventilation data of a patient ventilated during a period of time;
generating an image based on the acquired ventilation data;
providing the generated image as input to a trained machine learning model, wherein the trained machine learning model is trained based on images having the same type as the generated image; and
based on output from the trained machine learning model, a predicted condition of the patient is generated.
18. The computer-implemented method of claim 17, wherein the ventilation data comprises at least one of pressure data, flow data, or volume data.
19. The computer-implemented method of claim 17, wherein the image is a human interpretable image.
20. The computer-implemented method of claim 17, wherein the image is a human-non-interpretable image.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US63/127,892 | 2020-12-18 | ||
| US17/518,286 | 2021-11-03 | ||
| US17/518,286 US12061670B2 (en) | 2020-12-18 | 2021-11-03 | Machine-learning image recognition for classifying conditions based on ventilatory data |
| PCT/US2021/062294 WO2022132509A1 (en) | 2020-12-18 | 2021-12-08 | Machine-learning image recognition for classifying conditions based on ventilatory data |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN116830211A (en) | 2023-09-29 |
Family
ID=88143270
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202180083015.7A Pending CN116830211A (en) | 2020-12-18 | 2021-12-08 | Machine learning image recognition for classifying conditions based on ventilation data |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN116830211A (en) |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| RU2712749C2 (en) | Systems and methods for optimizing artificial pulmonary ventilation based on model | |
| ES2243282T3 (en) | FAN MONITORING SYSTEM. | |
| US12106838B2 (en) | Systems and methods for respiratory support recommendations | |
| CN103565418B (en) | Method, equipment for the clinical state of supervision object | |
| JP2007083032A (en) | Apparatus and method for determining and displaying functional residual capacity data and associated parameters for a patient undergoing ventilation | |
| US20140150796A1 (en) | System and method for detecting minimal ventilation support with proportional assist ventilation plus software and remote monitoring | |
| US11559644B2 (en) | Process and adjusting device for adjusting a ventilation parameter as well as medical system | |
| US20140190485A1 (en) | System and method for detecting minimal ventilation support with volume ventilation plus software and remote monitoring | |
| CN116600844A (en) | Respiratory support equipment, monitoring equipment, medical equipment system and parameter processing method | |
| US20140150795A1 (en) | System and method for detecting double triggering with remote monitoring | |
| JP2013543389A (en) | Intuitive indication of ventilation effectiveness | |
| US12061670B2 (en) | Machine-learning image recognition for classifying conditions based on ventilatory data | |
| CN117597083A (en) | Endotracheal tube size selection and insertion depth estimation using statistical shape modeling and virtual fitting | |
| US20240354372A1 (en) | Machine-learning image recognition for classifying conditions based on ventilatory data | |
| JP2026042920A (en) | Image analysis device and program | |
| JP6050765B2 (en) | System and method for diagnosis of central apnea | |
| US20150013674A1 (en) | System and method for monitoring and reporting status of a ventilated patient | |
| CN117642201A (en) | Medical ventilation equipment and ventilation control method | |
| CN116830211A (en) | Machine learning image recognition for classifying conditions based on ventilation data | |
| JP7449065B2 (en) | Biological information processing device, biological information processing method, program and storage medium | |
| US20230245768A1 (en) | Family ventilator dashboard for medical ventilator | |
| de Carvalho et al. | Enhancing mechanical ventilation management with AI: Computer vision for automated detection of ventilatory modes, parameters and asynchrony | |
| US12539380B2 (en) | Digital twin of lung that is calibrated and updated with mechanical ventilator data and bed-side imaging information for safe mechanical ventilation | |
| US20260034323A1 (en) | Methods and systems for ventilation system monitoring | |
| EP4480521A1 (en) | Ventilation device |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||