WO2007129582A1 - Vehicle surrounding image providing apparatus and vehicle surrounding image providing method - Google Patents
Vehicle surrounding image providing apparatus and vehicle surrounding image providing method
- Publication number
- WO2007129582A1 (PCT/JP2007/058961, JP2007058961W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- vehicle
- edge
- coordinate
- dimensional object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/20—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R1/22—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
- B60R1/23—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
- B60R1/27—Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/04—Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/10—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
- B60R2300/102—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using 360 degree surveillance camera system
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/30—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing
- B60R2300/303—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of image processing using joined images, e.g. multiple camera images
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/60—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
- B60R2300/607—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective from a bird's eye viewpoint
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R2300/00—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
- B60R2300/80—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement
- B60R2300/8093—Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the intended use of the viewing arrangement for obstacle warning
Definitions
- The present invention relates to a vehicle surrounding image providing apparatus and a vehicle surrounding image providing method.
- A vehicle periphery image providing device is known that captures images of the vehicle periphery with a plurality of cameras, performs coordinate conversion on the captured images to generate an image as if the vehicle were viewed from a virtual viewpoint, and presents that image to the driver.
- Such a device generally generates the bird's-eye view image using the ground as the reference plane for the coordinate conversion, so that the driver can objectively recognize the positional relationship between white lines on the ground and the vehicle, thereby supporting parking and narrow-road maneuvering (see, for example, Patent Document 1).
- The present invention has been made to solve such a conventional problem, and its object is to provide a vehicle surrounding image providing apparatus and a vehicle surrounding image providing method that eliminate the difficulty of seeing the joint portions of a plurality of images while maintaining driver safety.
- Patent Document 1 Japanese Unexamined Patent Publication No. 2003-169323
- The vehicle periphery image providing device according to the present invention provides the driver with an image of the vehicle periphery.
- This vehicle periphery image providing device includes a plurality of photographing means, image processing means, edge detection means, and determination means.
- The plurality of photographing means capture images of the periphery of the vehicle, each in a different direction.
- The image processing means generates a coordinate-converted image by performing coordinate conversion, with the ground as the reference plane, on each of the vehicle periphery images captured by the plurality of photographing means, and generates an overhead image by combining the generated coordinate-converted images.
- The edge detection means performs edge detection on the bird's-eye view image generated by the image processing means.
- The determination means determines the continuity of those detected edge lines that cross the joints between the coordinate-converted images.
- When the determination means finds such an edge line to be discontinuous, the image processing means aligns the edge line on the side farther from the host vehicle with the edge line on the side nearer to the host vehicle, so that the edge line becomes continuous at the joint.
- That is, when an edge line straddling a joint between the coordinate-converted images is discontinuous, the far-side edge line is matched to the near-side edge line so that they become continuous at the joint. In other words, according to the present invention, the image is corrected by image processing so that the edge line becomes continuous at the joint, and the other edge lines are matched to the edge line nearer to the host vehicle, for which the deviation between the actual position and the position on the image is relatively small as seen from the vehicle. Therefore, according to the present invention, the difficulty of seeing the joint portions of a plurality of images can be eliminated while maintaining driver safety.
- FIG. 1 is a configuration diagram of a vehicle periphery image providing apparatus according to a first embodiment of the present invention.
- FIG. 2 is a diagram showing an example of an overhead image obtained by coordinate conversion of the image conversion unit shown in FIG.
- FIG. 3 is a diagram showing details of the edge detection function of the image detection unit shown in FIG. 1, and shows an example of an overhead image.
- FIG. 4 is a diagram showing the principle of three-dimensional judgment by the three-dimensional judgment function of the image detection unit shown in FIG. 1; (a) shows the principle of judging a three-dimensional object standing on the ground, and (b) shows the principle of judging a three-dimensional object existing in the air.
- FIG. 5 is a flowchart showing a detailed operation of the vehicle periphery image providing apparatus according to the first embodiment.
- FIG. 6 is a diagram showing a three-dimensional object detected by the process of step ST2 shown in FIG.
- FIG. 7 is a flowchart showing details of the first image correction process (ST5) shown in FIG.
- FIG. 8 is a diagram showing details of the process of step ST52 shown in FIG. 7; (a) shows the image before the process of step ST52, (b) shows the image while the process of step ST52 is being executed, and (c) shows the image after the process of step ST52 has been executed.
- FIG. 9 is a diagram showing details of the processing of steps ST53 and ST54 shown in FIG. 7; (a) shows the image before the process of step ST53, (b) shows the image after the process of step ST53 has been executed, and (c) shows the image after the process of step ST54 has been executed.
- FIG. 10 is a flowchart showing details of the second image correction process (ST7) shown in FIG.
- FIG. 11 is a diagram showing details of the flowchart shown in FIG. 10; (a) is an image of the guardrail before the processing shown in FIG. 10 is executed, (b) is an image while the process of step ST71 is being executed, (c) is a first image of the guardrail after the process of step ST71, and (d) is a second image of the guardrail after the process of step ST71.
- FIG. 12 is a diagram showing the state of the third image correction process; (a) shows the image before the third image correction process, and (b) shows the image after the third image correction process.
- FIG. 13 is a flowchart showing details of the third image correction process (ST8) shown in FIG. 5.
- FIG. 14 is a diagram showing details of the flowchart shown in FIG. 13; (a) is an image showing details of step ST81, (b) is an image of another vehicle after the process of step ST82 has been executed, and (c) is an image of the other vehicle after the process of step ST84 has been executed.
- FIG. 15 is a view showing an overhead image after the processing shown in FIG. 5 has been executed; (a) shows an example of a first overhead image, and (b) shows an example of a second overhead image.
- FIG. 16 is a diagram illustrating a modification of the vehicle periphery image providing device according to the first embodiment; (a) shows the principle of obtaining the thickness of a three-dimensional object, and (b) shows an example of a bird's-eye view image subjected to correction processing according to that thickness.
- FIG. 17 is a diagram for explaining the processing of the vehicle periphery image providing apparatus according to the second embodiment, and illustrates processing that is executed in place of the processing of step ST3 shown in FIG. 5.
- FIG. 18 is a first diagram showing an edge detection method of the vehicle periphery image providing device according to the second embodiment.
- FIG. 19 is a second diagram showing an edge detection method of the vehicle periphery image providing device according to the second embodiment.
- FIG. 1 is a configuration diagram of a vehicle periphery image providing apparatus according to the first embodiment of the present invention.
- As shown in FIG. 1, the vehicle periphery image providing device 1 provides the driver with images of the vehicle surroundings, and includes a plurality of camera modules 10, a vehicle speed sensor 20, a steering angle sensor 30, a shift signal sensor 40, an image processing device 50, a monitor 60, and a speaker 70.
- The plurality of camera modules 10 are used to photograph the periphery of the host vehicle; for example, CCD cameras or CMOS cameras are used. As shown in FIG. 1, the camera modules 10 consist of a first camera module 10a and a second camera module 10b, each of which captures an image in a different direction and sends the obtained image data to the image processing device 50. Note that the number of camera modules 10 is not limited to two; it suffices that at least two camera modules 10 are provided.
- The vehicle speed sensor 20 detects the vehicle speed of the host vehicle.
- The steering angle sensor 30 detects the steering angle of the host vehicle.
- The shift signal sensor 40 detects the shift position (gear position) of the host vehicle.
- The image processing device 50 processes the images of the vehicle surroundings captured by the camera modules 10, and includes a first input buffer 51a, a second input buffer 51b, a table storage unit 52, an image conversion unit 53, an image detection unit 54, a CPU 55, and an output buffer 56.
- The first input buffer 51a receives and stores the image data from the first camera module 10a.
- The second input buffer 51b receives and stores the image data from the second camera module 10b.
- The table storage unit 52 stores an address conversion table for performing coordinate conversion, with the ground as the reference plane, on the images of the vehicle surroundings captured by the camera modules 10.
- The image conversion unit 53 uses the address conversion table stored in the table storage unit 52 to generate a coordinate-converted image by performing coordinate conversion on each of the vehicle surrounding images captured by the plurality of camera modules 10.
- The image conversion unit 53 then combines the generated coordinate-converted images to generate an overhead image in which the host vehicle appears to be viewed from above.
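- The table-driven conversion and composition can be sketched as below. The lookup-table layout (one source pixel per overhead pixel), the mask-based stitching, and all array and function names are assumptions made for illustration; the description only specifies that an address conversion table maps the camera images onto the ground reference plane.

```python
import numpy as np

def warp_with_lut(camera_img, lut):
    """Warp one camera image into bird's-eye (ground-plane) coordinates using an
    address conversion table: lut[v, u] holds the integer (row, col) of the camera
    pixel that supplies overhead pixel (v, u); negative entries mean 'no source'."""
    h, w = lut.shape[:2]
    out = np.zeros((h, w, 3), dtype=camera_img.dtype)
    rows, cols = lut[..., 0], lut[..., 1]
    valid = (rows >= 0) & (cols >= 0)
    out[valid] = camera_img[rows[valid], cols[valid]]
    return out

def compose_overhead(warped_views, region_masks):
    """Combine the per-camera coordinate-converted images into one overhead image.
    Each boolean mask selects the part of the overhead image taken from that
    camera; the mask boundaries correspond to the joints between the images."""
    overhead = np.zeros_like(warped_views[0])
    for view, mask in zip(warped_views, region_masks):
        overhead[mask] = view[mask]
    return overhead
```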
- FIG. 2 is a diagram showing an example of an overhead image obtained by the coordinate conversion of the image conversion unit 53 shown in FIG. 1.
- In this example, one camera module 10 is provided at each of the front, the rear, and both sides of the vehicle so that four directions can be photographed.
- The front image 101 is obtained by the camera module 10 on the front side.
- The front image 101 shows the vehicle body 101a of the host vehicle and a parking frame 101b.
- The rear image 102 is obtained by the camera module 10 on the rear side.
- The rear image 102 shows the vehicle body 102a of the host vehicle and a parking frame 102b.
- The right side image 103 is obtained by the right-side camera module 10, and the left side image 104 is obtained by the left-side camera module 10.
- The right side image 103 shows the vehicle body 103a of the host vehicle and a parking frame 103b, and the left side image 104 shows the vehicle body 104a of the host vehicle and a parking frame 104b.
- The image conversion unit 53 performs coordinate conversion on each of the images 101 to 104 based on the address conversion table stored in the table storage unit 52. That is, the image conversion unit 53 generates, from the front image 101, a coordinate-converted image 201 in which the area in front of the host vehicle is viewed from above, and similarly generates, from the rear image 102, a coordinate-converted image 202 in which the area behind the host vehicle is viewed from above. In addition, a coordinate-converted image 203 in which the right side of the vehicle is viewed from above is generated from the right side image 103, and a coordinate-converted image 204 in which the left side of the vehicle is viewed from above is generated from the left side image 104.
- The image conversion unit 53 then synthesizes the generated coordinate-converted images 201 to 204 to generate an overhead image 205 in which the surroundings of the host vehicle are viewed from above. Note that the image conversion unit 53 places an image 206 representing the host vehicle at the center of the overhead image 205.
- The image detection unit 54 has an edge detection function and a three-dimensional judgment function.
- The edge detection function performs edge detection on the overhead image generated by the image conversion unit 53, based on the color information or brightness information of each pixel in the bird's-eye view image.
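- A minimal edge-detection sketch over the brightness channel is shown below; the Sobel kernels and the threshold value are generic image-processing choices assumed here, not values specified in the description.

```python
import numpy as np

def brightness_edges(overhead_rgb, threshold=60.0):
    """Return a boolean edge map computed from the per-pixel brightness of the
    overhead image using 3x3 Sobel gradients."""
    gray = overhead_rgb.astype(np.float32).mean(axis=2)            # brightness
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    h, w = gray.shape
    for dy in range(3):
        for dx in range(3):
            patch = pad[dy:dy + h, dx:dx + w]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy) > threshold
```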
- FIG. 3 is a diagram showing details of the edge detection function of the image detection unit 54 shown in FIG. 1, and shows an example of an overhead image.
- In the coordinate-converted image 201 on the front side, a wall 301, a pole 302, and a curb 303 appear.
- The curbstone 303 also appears in the coordinate-converted image 203 on the right side.
- In the coordinate-converted image 202 on the rear side, a white line 304, a guardrail 305, and another vehicle 306 appear.
- The guardrail 305 also appears in the coordinate-converted image 203 on the right side, and the white line 304 and the other vehicle 306 also appear in the coordinate-converted image 204 on the left side.
- The edge detection function performs edge detection on the overhead image 205 as shown in FIG. 3.
- In the overhead image 205, the edges of the curbstone 303, the guardrail 305, and the other vehicle 306 are misaligned at the joints a to d.
- The three-dimensional judgment function judges the presence and type of a three-dimensional object based on the state of the edges detected by the edge detection function. More specifically, the presence of a three-dimensional object is detected, and its type is determined, from the edge misalignment detected by the edge detection function.
- FIG. 4 is a diagram showing the principle of three-dimensional judgment by the three-dimensional judgment function of the image detection unit 54 shown in FIG. 1; (a) shows the principle of judging a three-dimensional object standing on the ground, and (b) shows the principle of judging a three-dimensional object existing in the air.
- Because the camera modules 10 must be mounted on the vehicle, their installation locations are restricted. This makes it difficult to install the plurality of camera modules 10 at the same height, so the camera modules 10 generally end up at different heights.
- In the example of FIG. 4, the first camera module 10a is installed above the second camera module 10b.
- For a three-dimensional object standing on the ground, for example the curbstone 303, each of the camera modules 10a and 10b recognizes the lower end position of the curb 303 as the point 303-1, but they recognize its upper end position differently.
- The first camera module 10a recognizes the upper end position of the curb 303 as the point 303-2 where the line 1 connecting the camera module 10a and the upper end of the curb 303 intersects the ground.
- The second camera module 10b recognizes the upper end position of the curb 303 as the point 303-3 where the line 2 connecting the camera module 10b and the upper end of the curb 303 intersects the ground.
- For this reason, the curb 303 appears with a misaligned edge at the joint a of the overhead image 205, as shown in FIG. 3. That is, as shown in FIG. 4(a), the curb 303 is represented in one coordinate-converted image as an object extending the distance L1 from the point 303-1 to the point 303-2, and in the other coordinate-converted image as an object extending the distance L2 (> L1) from the point 303-1 to the point 303-3. Since the coordinate-converted images are combined in this state, the edges of the two images are shifted at the joint of the overhead image 205.
- For a three-dimensional object existing in the air, for example the guardrail 305, the camera modules 10a and 10b differ in their recognition of both the lower end position and the upper end position of the guardrail 305.
- The first camera module 10a recognizes the lower end position of the guardrail 305 as the point 305-1 where the line 1 connecting the camera module 10a and the lower end of the guardrail 305 intersects the ground, and recognizes the upper end position as the point 305-2 where the line 2 connecting the camera module 10a and the upper end of the guardrail 305 intersects the ground.
- Similarly, the second camera module 10b recognizes the lower end position of the guardrail 305 as the point 305-3 where the line 3 connecting the camera module 10b and the lower end of the guardrail 305 intersects the ground, and recognizes the upper end position as the point 305-4 where the line 4 connecting the camera module 10b and the upper end of the guardrail 305 intersects the ground.
- For this reason, the guardrail 305 appears with a misaligned edge at the joint b of the overhead image 205, as shown in FIG. 3. That is, as shown in FIG. 4(b), the guardrail 305 is represented in one coordinate-converted image as an object extending the distance L3 from the point 305-1 to the point 305-2, and in the other coordinate-converted image as an object extending the distance L4 (> L3) from the point 305-3 to the point 305-4. Since the images are combined in this state, the edges of the two images are shifted at the joint of the overhead image 205.
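- The misalignment in FIG. 4 follows from simple ground-plane projection: the sight line from a camera to a raised point is extended until it meets the ground, so the same point lands at different ground positions for cameras at different heights. The following sketch only illustrates that geometry; the camera heights, object height, and distances are made-up example values, not values from the patent.

```python
def projected_ground_x(cam_height, point_x, point_height):
    """Where the sight line from a camera at (0, cam_height) through a point at
    (point_x, point_height) meets the ground plane y = 0 (units in metres)."""
    return point_x * cam_height / (cam_height - point_height)

# Two cameras at different heights view the top edge of a 0.15 m curb 2.0 m away.
top_seen_by_high_cam = projected_ground_x(cam_height=1.0, point_x=2.0, point_height=0.15)  # ~2.35 m
top_seen_by_low_cam  = projected_ground_x(cam_height=0.6, point_x=2.0, point_height=0.15)  # ~2.67 m
# The curb base (height 0) projects to 2.0 m for both cameras, so the far-side edge
# lands at different ground positions in the two coordinate-converted images; at the
# joint this appears as the misalignment of FIG. 4(a) (L1 ~ 0.35 m vs L2 ~ 0.67 m).
```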
- The three-dimensional judgment function examines, among the edges detected by the edge detection function, the state of each edge that crosses the joints a to d formed when the coordinate-converted images 201 to 204 are combined into the overhead image 205, and determines from that state whether the edge belongs to a three-dimensional object. Furthermore, when the edge is judged to belong to a three-dimensional object, the function uses the difference shown in FIGS. 4(a) and 4(b) to judge whether the object stands on the ground, exists in the air, or is a mixture of both.
- The CPU 55 controls the entire image processing device 50, including the image conversion unit 53 and the image detection unit 54.
- The image conversion unit 53 and the image detection unit 54 may be configured by, for example, an ASIC (Application Specific Integrated Circuit), an LSI (Large Scale Integrated circuit), an FPGA (Field Programmable Gate Array), or a DSP (Digital Signal Processor).
- The output buffer 56 stores the bird's-eye view image that has undergone coordinate conversion by the image conversion unit 53.
- The output buffer 56 outputs the stored overhead image information to the monitor 60.
- The monitor 60 displays the overhead image obtained by the coordinate conversion.
- The monitor 60 highlights a three-dimensional object when the image detection unit 54 determines that the edges at the joints a to d belong to a three-dimensional object.
- For example, the monitor 60 superimposes an overlay image on the three-dimensional object in the overhead image to display the three-dimensional object with emphasis.
- When the image detection unit 54 determines that the edges at the joints a to d belong to a three-dimensional object, the speaker 70 notifies the driver of the presence of the three-dimensional object with a predetermined sound. Specifically, the speaker 70 notifies the driver with a beep sound such as "beep, beep, beep" or a spoken message such as "There is an obstacle on the left side".
- First, the plurality of camera modules 10 capture images of the surroundings of the host vehicle.
- The image conversion unit 53 then generates a plurality of coordinate-converted images by performing coordinate conversion on the captured images, and creates an overhead image by combining the generated coordinate-converted images.
- Next, the image detection unit 54 performs edge detection on the overhead image 205 using the edge detection function. After performing edge detection, the image detection unit 54 extracts the edges that cross the joints a to d, and then extracts those that show a shift at the joints. In other words, the image detection unit 54 determines whether an edge belongs to a three-dimensional object by extracting, from the edges crossing the joints a to d, those that are misaligned there.
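- One way to realize this check is to trace each detected edge up to the two sides of a joint and compare where it meets the joint line; a gap larger than some tolerance marks the edge as belonging to a three-dimensional object. The edge representation as point lists, the joint parameterization, and the tolerance below are assumptions for illustration, not details given in the patent.

```python
import numpy as np

def edge_offset_at_joint(edge_pts_a, edge_pts_b, joint_axis, joint_coord):
    """edge_pts_a / edge_pts_b: (N, 2) integer arrays of (row, col) edge pixels
    belonging to the same detected edge, taken from the two coordinate-converted
    images that meet at a joint lying on `joint_axis` == `joint_coord`.
    Returns the gap (in pixels), measured along the joint, between the points
    where the edge meets the joint on each side."""
    other = 1 - joint_axis
    p_a = edge_pts_a[np.argmin(np.abs(edge_pts_a[:, joint_axis] - joint_coord))]
    p_b = edge_pts_b[np.argmin(np.abs(edge_pts_b[:, joint_axis] - joint_coord))]
    return float(abs(int(p_a[other]) - int(p_b[other])))

def is_solid_object_edge(gap_px, tol_px=3.0):
    """Edges misaligned at the joint by more than tol_px are treated as belonging
    to a three-dimensional object (corresponding to step ST2)."""
    return gap_px > tol_px
```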
- Next, the image detection unit 54 determines whether the extracted edges, that is, the edges judged to belong to three-dimensional objects, belong to the same object. Since the edges of different three-dimensional objects may happen to be represented in the overhead image 205 so as to cross the joints a to d, the image detection unit 54 checks whether they belong to the same object. To do so, it identifies the coordinate-converted images 201 to 204 that form the joint straddled by the edge, and compares the brightness or color information of the three-dimensional object in those coordinate-converted images to determine whether the three-dimensional objects are the same object.
- When they are determined to be the same object, the image detection unit 54 determines whether the three-dimensional object stands on the ground, exists in the air, or is a mixture of both.
- The image detection unit 54 determines these types as follows.
- When the image detection unit 54 determines that an edge straddling the joints a to d belongs to a three-dimensional object, and the edges of that object on the side nearer to the host vehicle are continuous at the joint in each coordinate-converted image, the image detection unit 54 determines that the three-dimensional object stands on the ground.
- When the image detection unit 54 determines that the edges at the joints a to d belong to a three-dimensional object, the near-side edges of that object in each coordinate-converted image are not continuous at the joint, and each of those near-side edges is a straight line or a curve with no refraction point, the image detection unit 54 determines that the three-dimensional object exists in the air.
- When the image detection unit 54 determines that the edges at the joints a to d belong to a three-dimensional object, the near-side edges of that object in each coordinate-converted image are not continuous at the joint, and at least one of those near-side edges has a refraction point, the image detection unit 54 determines that the object is a mixture of a three-dimensional object standing on the ground and a three-dimensional object existing in the air (hereinafter referred to as a mixture).
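- These three branches can be summarized as a small decision routine. The edge representation, the continuity flag, and the refraction-point test (a bend in the polyline exceeding an angle tolerance) are illustrative assumptions rather than the patent's concrete criteria.

```python
import numpy as np

def has_refraction_point(edge_pts, angle_tol_deg=15.0):
    """True if the polyline through edge_pts (ordered (row, col) points) bends by
    more than angle_tol_deg anywhere, i.e. it is neither a straight line nor a
    gently curving line."""
    if len(edge_pts) < 3:
        return False
    v = np.diff(np.asarray(edge_pts, dtype=float), axis=0)
    v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-9
    cos = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    return bool(np.any(np.degrees(np.arccos(cos)) > angle_tol_deg))

def classify_solid_object(near_edges_continuous, near_edges):
    """Mirror the branching of steps ST4/ST6: check near-side edge continuity
    first, then apply the refraction-point test to each near-side edge."""
    if near_edges_continuous:
        return "on_ground"   # continuous near-side edges -> first correction
    if not any(has_refraction_point(e) for e in near_edges):
        return "in_air"      # straight or smooth near-side edges -> second correction
    return "mixture"         # otherwise -> third correction
```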
- Thereafter, the image conversion unit 53 corrects the overhead image 205 according to the type of the three-dimensional object. At this time, the image conversion unit 53 corrects the image so as to eliminate the edge shift across the joints a to d. Further, the monitor 60 highlights the three-dimensional object, and the speaker 70 notifies the driver of its presence with a predetermined sound.
- FIG. 5 is a flowchart showing a detailed operation of the vehicle periphery image providing apparatus 1 according to the first embodiment.
- As shown in FIG. 5, the image detection unit 54 first performs edge detection on the overhead image 205 (ST1).
- Note that, instead of performing edge detection on the overhead image, it is also possible to perform edge detection on the images captured by each camera module 10 and then perform coordinate conversion on those images.
- Next, the image detection unit 54 detects the edges that are not continuous across the joints a to d (edges where misalignment occurs) (ST2). The image detection unit 54 thereby detects three-dimensional objects.
- FIG. 6 is a diagram showing a three-dimensional object detected by the process of step ST2 shown in FIG.
- By the process of step ST2, the curb 303, the guardrail 305, and the other vehicle 306 are detected.
- On the other hand, the wall 301 and the pole 302 do not straddle the joints a to d and are therefore not detected in the process of step ST2. The white line 304 straddles the joint c, but its edge is not misaligned there, so it is not detected in step ST2 either.
- Next, the image detection unit 54 designates one of the detected three-dimensional objects. That is, in the example of FIG. 6, the image detection unit 54 selects any one of the curb 303, the guardrail 305, and the other vehicle 306, and executes the subsequent processes of steps ST3 to ST9 for it.
- After selecting one of the three-dimensional objects, the image detection unit 54 compares the brightness or color information of that three-dimensional object in the coordinate-converted images 201 to 204 that it straddles, and determines whether the three-dimensional objects in those coordinate-converted images are the same object (ST3). This prevents different three-dimensional objects from being judged to be the same object even when their edges happen to be represented in the overhead image 205 so as to cross the joints a to d.
- If the three-dimensional objects are not the same object (ST3: NO), the process proceeds to step ST9.
- If they are the same object, the image detection unit 54 determines whether the edges of the three-dimensional object on the side nearer to the host vehicle are continuous at the joint in each coordinate-converted image (ST4).
- If they are continuous, the image detection unit 54 determines that the three-dimensional object stands on the ground. That is, for the curbstone 303 shown in FIG. 6, the edges 303a and 303b on the side near the host vehicle are continuous even though the curb is a three-dimensional object, so the curb is determined to be a three-dimensional object standing on the ground.
- In this case, the image conversion unit 53 executes a first image correction process to correct the overhead image (ST5). Then, the process proceeds to step ST9.
- If the near-side edges are not continuous, the image detection unit 54 determines whether each edge of the three-dimensional object on the side near the host vehicle is a straight line or a curve with no refraction point (ST6).
- If so, the image detection unit 54 determines that the three-dimensional object exists in the air. That is, for the guardrail 305 shown in FIG. 6, the edges 305a and 305b on the side near the host vehicle are not continuous even though the guardrail is a three-dimensional object, so it is determined to exist in the air. In particular, since both edges 305a and 305b are straight lines or curves with no refraction point, the object is unlikely to be a mixture, and the image detection unit 54 therefore determines that the three-dimensional object exists in the air.
- In this case, the image conversion unit 53 executes a second image correction process to correct the overhead image (ST7). Then, the process proceeds to step ST9.
- Otherwise, the image detection unit 54 determines that the three-dimensional object is a mixture. That is, for the other vehicle 306 shown in FIG. 6, the edges 306a and 306b on the side near the host vehicle are not continuous even though it is a three-dimensional object, so it could be judged to exist in the air; however, since the edges 306a and 306b have refraction points, there is a high possibility that the object is a mixture of two or more three-dimensional objects. The image detection unit 54 therefore determines that the three-dimensional object is a mixture.
- In this case, the image conversion unit 53 executes a third image correction process to correct the overhead image (ST8). Then, the process proceeds to step ST9.
- In step ST9, the image detection unit 54 determines whether processing has been performed for all of the detected three-dimensional objects (ST9).
- If an unprocessed three-dimensional object remains, the image detection unit 54 selects it, and the process returns to step ST3.
- If processing has been performed for all three-dimensional objects (ST9: YES), the processing shown in FIG. 5 ends.
- FIG. 7 is a flowchart showing details of the first image correction process (ST5) shown in FIG. 5.
- First, the image conversion unit 53 determines whether the three-dimensional object reaches the end of the overhead image 205 in any of the coordinate-converted images (ST51).
- FIG. 8 is a diagram showing details of the process of step ST52 shown in FIG. 7; (a) shows the image before the process of step ST52, (b) shows the image while the process of step ST52 is being executed, and (c) shows the image after the process of step ST52 has been executed. FIG. 8 shows the images obtained when processing the curbstone 303.
- As shown in FIG. 8(a), the curbstone 303 has continuous edges 303a and 303b on the side near the host vehicle, but its far-side edges 303c and 303d are not continuous.
- In this example, the far-side edge 303c reaches the end of the overhead image 205 in one coordinate-converted image 201, but the curb 303 does not reach the image end in the other coordinate-converted image 203.
- In the process of step ST52, the image conversion unit 53 therefore performs an extension process so that the curb 303 of the coordinate-converted image 203 also reaches the end of the overhead image 205.
- Specifically, the image conversion unit 53 plots pixels from the far-side edge 303d of the coordinate-converted image 203 toward the end of the overhead image 205 until the curb reaches the image end. At this time, the image conversion unit 53 plots the pixels in the direction perpendicular to the edge 303b on the side near the host vehicle.
- More specifically, the image conversion unit 53 obtains the region 400 surrounded by the straight line passing through the coordinates (Xb, Yb) and the coordinates (Xc, Yc), the edge 303d, the joint a, and the end of the overhead image 205, and performs processing to fill in this region 400.
- For this purpose, the image conversion unit 53 obtains the ratio of the length of the line segment connecting the coordinates (Xa, Ya) and the coordinates (Xe, Ye) of the joint a to the length of the line segment connecting the coordinates (Xa, Ya) and the coordinates (Xd, Yd) of the joint a. The image conversion unit 53 then enlarges the pixels of the curb 303 of the coordinate-converted image 203 in the direction perpendicular to the near-side edge 303b according to this length ratio.
- As a result, the overhead image 205 is processed so that the region 400 shown in FIG. 8(b) is filled, and the curb 303 reaches the end of the overhead image 205 in both coordinate-converted images 201 and 203, as shown in FIG. 8(c). This makes it easy for the driver to recognize that the object is one and the same object when viewing the overhead image 205 on the monitor 60.
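- A simplified sketch of this extension, assuming the near-side edge runs parallel to an image axis so that "perpendicular to the edge" is a single array axis: the object's pixels are stretched along that axis by the joint-length ratio obtained above. The nearest-neighbour resampling and the axis orientation are assumptions of this sketch, not details from the description.

```python
import numpy as np

def stretch_object_rows(patch, ratio):
    """Stretch `patch` (the pixels of the three-dimensional object cut out of one
    coordinate-converted image) along axis 0, i.e. in the direction perpendicular
    to the near-side edge, using nearest-neighbour sampling. `ratio` corresponds
    to the joint-length ratio obtained in step ST52 (> 1 for an extension)."""
    src_h = patch.shape[0]
    dst_h = max(1, int(round(src_h * ratio)))
    src_rows = np.clip((np.arange(dst_h) / ratio).astype(int), 0, src_h - 1)
    return patch[src_rows]
```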
- Next, the image conversion unit 53 processes the overhead image so that the far-side edges of the three-dimensional object become continuous at the joint (ST53), and then paints with a predetermined color the portion of the overhead image 205 in which pixel information has been lost by the process of step ST53 (ST54). Thereafter, the processing shifts to step ST9 shown in FIG. 5.
- FIG. 9 is a diagram showing details of the processing of steps ST53 and ST54 shown in FIG. 7; (a) shows the image before the process of step ST53, (b) shows the image after the process of step ST53, and (c) shows the image after the process of step ST54. FIG. 9 shows the images obtained when processing the curbstone 303.
- As shown in FIG. 9(a), the edges 303a and 303b of the curbstone 303 on the side near the host vehicle are continuous, but the far-side edges 303c and 303d are not. For this reason, in step ST53 the image conversion unit 53 makes the far-side edges 303c and 303d continuous.
- First, the image conversion unit 53 executes a shift process that moves an edge within the overhead image. That is, the image conversion unit 53 calculates the distance from the coordinates (Xe, Ye) to the coordinates (Xd, Yd) and shifts the far-side edge 303c of the coordinate-converted image 201 by this distance (FIG. 9(b)). The image conversion unit 53 then discards the portion of the three-dimensional object beyond the shifted edge 303c, that is, the pixel data in the discard area shown in FIG. 9(c).
- In addition, the image conversion unit 53 executes at least one of a thinning process that thins out the pixels of the three-dimensional object in the coordinate-converted image and a compression process that compresses the three-dimensional object in the coordinate-converted image. That is, the image conversion unit 53 obtains the ratio of the length of the line segment connecting the coordinates (Xa, Ya) and the coordinates (Xe, Ye) of the joint a to the length of the line segment connecting the coordinates (Xa, Ya) and the coordinates (Xd, Yd) of the joint a, and then thins out the pixels of the curb 303 of the coordinate-converted image 201 according to this length ratio, or compresses the curb 303 of the coordinate-converted image 201.
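- The shift-and-compress variant of step ST53 can be sketched as below, again assuming the edges run parallel to an image axis and that rows increase away from the host vehicle. For simplicity the whole band of rows between the edges is resampled, whereas the description above compresses only the object's pixels; the function and variable names are illustrative.

```python
import numpy as np

def compress_between_edges(view, near_row, far_row, new_far_row):
    """Compress the band of rows between the fixed near-side edge (near_row) and
    the far-side edge (far_row) so that the far edge lands on new_far_row; the
    rows left empty form the discard area that step ST54 paints with a
    predetermined color. new_far_row is assumed to lie between near_row and
    far_row."""
    out = view.copy()
    src_span = far_row - near_row
    dst_span = new_far_row - near_row
    if src_span <= 0 or dst_span <= 0:
        return out
    dst_rows = np.arange(near_row, new_far_row + 1)
    src_rows = near_row + np.round((dst_rows - near_row) * src_span / dst_span).astype(int)
    out[dst_rows] = view[np.clip(src_rows, near_row, far_row)]
    out[new_far_row + 1:far_row + 1] = 0   # vacated rows -> discard area
    return out
```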
- Through the above shift, thinning, or compression, the far-side edges 303c and 303d of both coordinate-converted images 201 and 203 become continuous at the joint a, which makes the curbstone 303 easy to recognize for anyone viewing the overhead image 205 and keeps the driver from losing the sense of distance to the curbstone 303 when viewing the overhead image 205 on the monitor 60.
- Then, after executing the shift process, thinning process, or compression process, the image conversion unit 53 executes a process of painting with a predetermined color the portion of the overhead image 205 where pixel information has been lost (the discard area in FIG. 9(c)).
- Here, by painting this portion the same color as the ground, the image conversion unit 53 can express the overhead image 205 without a sense of incongruity.
- Alternatively, by painting it a color that does not exist on the ground (for example, red or blue), the image conversion unit 53 can notify the user viewing the overhead image 205 that image correction has been performed.
- In the above description, of the far-side edges 303c and 303d of the coordinate-converted images, the edge 303c farther from the host vehicle is processed so as to become continuous with the nearer edge 303d.
- However, the present invention is not limited to this, and the processing may instead make the nearer edge 303d continuous with the farther edge 303c.
- In this case, the curbstone 303 is displayed larger in the overhead image 205, but since the portion that lost pixel information does not need to be colored as in step ST54, the possibility that the overhead image 205 becomes unnatural due to the coloring can be reduced. Furthermore, the process of step ST54 becomes unnecessary, and the processing can be simplified.
- FIG. 10 is a flowchart showing details of the second image correction process (ST7) shown in FIG. 5.
- First, the image conversion unit 53 makes the edges on the side nearer to the host vehicle continuous across the coordinate-converted images (ST71).
- At this time, the image conversion unit 53 compares the near-side edges of the three-dimensional object in each coordinate-converted image.
- Then, the image conversion unit 53 moves the three-dimensional object on the coordinate-converted image so that, of the compared edges, the edge farther from the host vehicle becomes continuous with the nearer edge at the joints a to d.
- FIG. 11 is a diagram showing details of the flowchart shown in FIG. 10; (a) is an image of the guardrail 305 before the processing shown in FIG. 10 is executed, (b) is an image while the process of step ST71 is being executed, (c) is a first image of the guardrail 305 after the process of step ST71 has been executed, and (d) is a second image of the guardrail 305 after the process of step ST71 has been executed.
- As shown in FIG. 11(a), in the process of step ST71 the image conversion unit 53 makes the edges 305a and 305b on the side near the host vehicle continuous.
- Specifically, the image conversion unit 53 calculates the distance from the coordinates (Xb, Yb) to the coordinates (Xa, Ya) shown in FIG. 11(a), and shifts the guardrail 305 of the coordinate-converted image 203 by this distance (FIG. 11(b)).
- As a result, the farther edge 305a is aligned with the nearer edge 305b, and the two edges 305a and 305b become continuous at the joint b.
- Next, the image conversion unit 53 determines whether the three-dimensional object reaches the end of the overhead image 205 in any of the coordinate-converted images (ST72).
- If it reaches the image end in one of them, the image conversion unit 53 performs an extension process so that the three-dimensional object in the coordinate-converted image in which it does not reach the end also reaches the end of the overhead image 205 (ST73).
- After step ST73, the processing shifts to step ST9 shown in FIG. 5.
- The process in step ST73 is the same as the process in step ST52 shown in FIG. 7.
- On the other hand, if the three-dimensional object does not reach the image end in any of the coordinate-converted images, the image conversion unit 53 processes the overhead image so that, while the continuous state of the near-side edges 305a and 305b is maintained, the far-side edges of the three-dimensional object in each coordinate-converted image also become continuous at the joints a to d (ST74). After that, the image conversion unit 53 paints with a predetermined color the portion of the overhead image 205 in which pixel information has been lost by the process of step ST74 (ST75). Thereafter, the processing shifts to step ST9 shown in FIG. 5.
- That is, as shown in FIG. 11(b), even if the guardrail 305 of the coordinate-converted image 203 is moved so that the near-side edges 305a and 305b become continuous, the edges 305c and 305d on the side farther from the host vehicle are not continuous. Specifically, in the coordinate-converted image 203, the contact coordinates between the edge 305c and the joint b after the guardrail 305 has been moved are (Xd, Yd), whereas in the coordinate-converted image 202 the contact coordinates between the far-side edge 305d and the joint b are (Xc, Yc). Thus, even after the process of step ST71, the far-side edges 305c and 305d are not continuous.
- Therefore, in step ST74, the image conversion unit 53 performs a process that makes the far-side edges 305c and 305d continuous. At this time, the image conversion unit 53 makes the far-side edges 305c and 305d continuous at the joint b in the same manner as in step ST53 shown in FIG. 7.
- In step ST75, after executing the shift process, the thinning process, or the compression process, the image conversion unit 53 paints with a predetermined color the hatched portion of the overhead image 205 in which pixel information has been lost (see FIG. 11(c)).
- The processing in step ST75 is the same as the processing in step ST54 shown in FIG. 7.
- Note that the image conversion unit 53 may instead perform processing as shown in FIG. 11(d).
- That is, of the near-side edges 305a and 305b of the coordinate-converted images, the image conversion unit 53 makes the farther edge 305a continuous with the nearer edge 305b, and, of the far-side edges 305c and 305d, makes the nearer edge 305d continuous with the farther edge 305c.
- In this case, the guardrail 305 is displayed larger in the overhead image 205, but since the portion where pixel information is lost does not need to be colored as in step ST75, the possibility that the overhead image 205 becomes unnatural due to the coloring can be reduced. Furthermore, the process of step ST75 becomes unnecessary, and the processing can be simplified.
- FIG. 12 is a diagram showing the state of the third image correction process; (a) shows the image before the third image correction process, and (b) shows the image after the third image correction process.
- As shown in FIG. 12(a), the mixture consists of three-dimensional objects standing on the ground (the tires) and a three-dimensional object existing in the air (the vehicle body).
- For such a mixture, if processing that moves the edge 306a is performed as in the first and second image correction processes, the edge 306a may end up on the host-vehicle side of the contact point between the front wheel and the ground. In other words, for a mixture it is necessary to move the edges only of the parts existing in the air, without moving the edges of the parts standing on the ground. More specifically, the edges 306a and 306b must not come to lie on the host-vehicle side of the tangent line connecting the contact point between the front wheel and the ground and the contact point between the rear wheel and the ground. Therefore, in the third image correction process, only the edge of the vehicle body in the coordinate-converted image 204 shown in FIG. 12(a) is moved, so that the edges 306a and 306b of the coordinate-converted images 202 and 204 become continuous as shown in FIG. 12(b).
- FIG. 13 is a flowchart showing details of the third image correction process (ST8) shown in FIG.
- First, the image conversion unit 53 obtains the tangent line of the mixture (ST81).
- FIG. 14 is a diagram showing details of the flowchart shown in FIG. 13; (a) is an image showing details of step ST81, (b) is an image of the other vehicle 306 after the process of step ST82 has been executed, and (c) is an image of the other vehicle 306 after the process of step ST84 has been executed.
- As shown in FIG. 14(a), the image conversion unit 53 obtains the coordinate values (Xc, Yc) of the contact point between the front wheel and the ground and the coordinate values (Xd, Yd) of the contact point between the rear wheel and the ground, and obtains the tangent line passing through these coordinate values.
- Next, the image conversion unit 53 makes the edges on the side nearer to the host vehicle continuous across the coordinate-converted images (ST82). At this time, the image conversion unit 53 compares the near-side edges in each coordinate-converted image and compresses the three-dimensional object on the coordinate-converted image so that, of the compared edges, the edge farther from the host vehicle becomes continuous with the nearer edge at the joints a to d.
- In the example of FIG. 14, the near-side edges 306a and 306b are not continuous. Specifically, of these edges, the edge 306a farther from the host vehicle is in contact with the joint c at the coordinates (Xa, Ya), and the edge 306b nearer to the host vehicle is in contact with the joint c at the coordinates (Xb, Yb). For this reason, in the process of step ST82, the image conversion unit 53 compresses the other vehicle 306 of the coordinate-converted image 204 so that the edges of the vehicle body become continuous. At this time, the image conversion unit 53 performs the compression so that the other vehicle 306 does not cross the tangent line.
- Specifically, the image conversion unit 53 obtains the distance between the tangent line and the edge of the other vehicle 306.
- Let A be the distance between the tangent line and the edge of the other vehicle 306 in the coordinate-converted image 204, and B (< A) be the distance between the tangent line and the edge of the other vehicle 306 in the coordinate-converted image 202.
- The image conversion unit 53 then compresses the other vehicle 306 of the coordinate-converted image 204 by a factor of B/A.
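- The tangent line and the compression factor can be computed with elementary geometry; the point-to-line distance formula below is standard, and the contact-point and edge coordinates are illustrative values, not the coordinates used in FIG. 14.

```python
import math

def line_through(p, q):
    """Coefficients (a, b, c) of the line a*x + b*y + c = 0 through points p and q,
    e.g. the tangent through the front-wheel and rear-wheel ground contact points."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    return a, b, c

def point_line_distance(pt, line):
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / math.hypot(a, b)

# Illustrative contact points and vehicle-body edge points (not the patent's values):
tangent = line_through((2.0, 0.0), (-1.0, 0.0))
A = point_line_distance((0.5, 1.2), tangent)   # body edge in coordinate-converted image 204
B = point_line_distance((0.5, 0.8), tangent)   # body edge in coordinate-converted image 202
scale = B / A   # compress the other vehicle in image 204 by this factor
```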
- Through this compression, the distance between the vehicle-body edge and the tangent line becomes B in both coordinate-converted images 202 and 204, the edges 306a and 306b are both in contact with the joint c at the coordinates (Xb, Yb), and these edges 306a and 306b become continuous.
- Thereafter, the process of step ST84 is executed, and the other vehicle 306 of the coordinate-converted image 204 is extended to the image end of the overhead image 205, so that it is represented as shown in FIG. 14(c). When the other vehicle 306 does not reach the image end, the process of step ST86 is executed, and the portion where pixel information has been lost is painted with a predetermined color.
- FIG. 15 is a diagram showing the overhead image 205 after the processing shown in FIG. 5 has been executed; (a) shows an example of a first overhead image 205, and (b) shows an example of a second overhead image 205.
- As shown in FIG. 15(a), the edges of the curbstone 303, the guardrail 305, and the other vehicle 306 are continuous, which makes it easier to recognize each three-dimensional object as a single object than in the overhead image 205 shown in FIG. 3.
- As shown in FIG. 15(b), even when the portions where pixel information has been lost are not colored, the edges of the curbstone 303, the guardrail 305, and the other vehicle 306 are continuous, and each three-dimensional object is easier to recognize.
- In this way, the vehicle surrounding image providing device 1 determines the type of each three-dimensional object around the vehicle and performs an appropriate image correction process on the overhead image 205, which makes it easy for the driver to recognize three-dimensional objects around the vehicle and alleviates problems such as the loss of a sense of distance.
- FIG. 16 is a diagram for explaining a modification of the vehicle periphery image providing device 1 according to the first embodiment; (a) shows the principle of obtaining the thickness of a three-dimensional object, and (b) shows an example of the overhead image 205 after correction processing according to that thickness.
- The vehicle surrounding image providing device 1 according to the modification obtains the thickness of a three-dimensional object, performs image correction processing on the overhead image 205 according to that thickness, and displays the result on the monitor 60.
- Here, the curbstone 303 is taken as an example of the three-dimensional object.
- As shown in FIG. 16(a), the first edge of the curb 303 is located at the distance S from the camera module 10, and the second edge at the distance S + l.
- From this relationship, the three-dimensional judgment function of the image detection unit 54 obtains the thickness l of the three-dimensional object.
- Then, the image conversion unit 53 performs correction processing so that the curbstone 303 is expressed with the thickness l.
- Specifically, as shown in FIG. 16(b), the image conversion unit 53 adjusts the interval between the edge on the side near the host vehicle and the far-side edge, with the near-side edge as the reference, so that the curb 303 has the thickness l, and makes each edge continuous at the joint a. In this interval adjustment, the image conversion unit 53 performs shift processing, thinning processing, compression processing, and the like, and the portion where pixel information has been lost is painted with a predetermined color.
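- In this modification, once the thickness l is known, the far-side edge is redrawn at the position of the near-side edge plus l converted into pixels, with the near-side edge kept fixed. The pixels-per-metre scale, the axis orientation, and the example numbers below are assumptions of this sketch.

```python
def far_edge_row_for_thickness(near_edge_row, thickness_m, px_per_metre):
    """Row at which the far-side edge should be drawn so that the object is shown
    with its true thickness, measured from the fixed near-side edge. Rows are
    assumed to increase away from the host vehicle."""
    return near_edge_row + int(round(thickness_m * px_per_metre))

# Example: a 0.15 m thick curb, an overhead-image scale of 50 px per metre,
# and a near-side edge detected at row 320:
far_row = far_edge_row_for_thickness(320, 0.15, 50.0)   # -> 328
```

- The band between the near-side edge and the new far-side edge position can then be adjusted with the shift, thinning, or compression processing described above.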
- In this way, the three-dimensional object in the overhead image 205 can be expressed with an accurate thickness and displayed in an easily understandable manner, preventing the driver from losing the sense of distance.
- As described above, in the present embodiment, whether an edge belongs to a three-dimensional object is determined from the state of the edge crossing the joints a to d of the overhead image 205 obtained by combining the coordinate-converted images 201 to 204. If an edge straddling the joints a to d belongs to a three-dimensional object, the edge is displaced at the joints a to d because of the different installation positions of the plurality of camera modules 10. Thus, from the state of an edge straddling the joints a to d, it can be determined whether that edge belongs to a three-dimensional object.
- If an edge straddling the joints a to d belongs to a three-dimensional object standing on the ground, the near-side edges of the object in each coordinate-converted image are continuous at the joints a to d.
- If it belongs to a three-dimensional object existing in the air, the near-side edges of the coordinate-converted images are misaligned at the joint.
- If it belongs to a mixture of both, the near-side edges of the object in each coordinate-converted image are not continuous at the joints a to d and are not simple straight lines, but include one or more refraction points. In this way, it is possible to determine, from the state of an edge straddling a joint, whether a three-dimensional object stands on the ground, exists in the air, or is a mixture of both.
- Further, in the present embodiment, the brightness or color information of the three-dimensional object in the coordinate-converted images 201 to 204 that constitute the joints a to d is compared, and the type of the three-dimensional object is determined only when the objects are judged to be the same object.
- When an edge crossing the joints a to d belongs to a three-dimensional object and the near-side edges of the object in each coordinate-converted image are continuous at the joints a to d, this indicates that the near-side edge exists on the ground. That is, since the coordinate conversion is performed with the ground as the reference plane, no deviation occurs for anything that lies on the ground. Therefore, in this case, it can be determined that the three-dimensional object stands on the ground.
- In this case, the overhead image is processed so that the far-side edges of the three-dimensional object in each coordinate-converted image become continuous at the joint.
- When a three-dimensional object stands on the ground, as shown in FIG. 4(a), the amount by which the object appears to fall over in the image differs depending on the installation positions of the plurality of camera modules 10.
- Consequently, the upper edge of the three-dimensional object, that is, the far-side edge, is represented as shifted at the joints a to d of the overhead image 205, and the driver easily loses the sense of distance from the vehicle to the three-dimensional object. By processing the image so that the far-side edges become continuous at the joints a to d and this shift is eliminated, the loss of the sense of distance can be suppressed.
- Further, in the present embodiment, when the edges at the joints a to d belong to a three-dimensional object, the near-side edges of the object in each coordinate-converted image are not continuous at the joints a to d, and each near-side edge is a straight line or a curve with no refraction point, it is determined that the three-dimensional object exists in the air.
- Here, the fact that the near-side edges are not continuous at the joints a to d indicates that the near-side edge includes a part that does not exist on the ground, that is, a part that exists in the air.
- Moreover, when each of the near-side edges is a straight line or a curve with no refraction point, it is unlikely that the edge is constituted by two or more objects such as a vehicle tire and a vehicle body; the edge is more likely composed of a single object. There is therefore little possibility of a mixture of a three-dimensional object existing on the ground and a three-dimensional object existing in the air, and in this case it can be determined that the three-dimensional object exists in the air.
- in this case, the edges on the side close to the host vehicle in the coordinate-converted images are compared, and the three-dimensional object on the coordinate-converted image is moved so that, of the compared edges, the edge farther from the host vehicle approaches the nearer edge, making the near-side edges continuous at the joints a to d.
- as a result, the edges on the side close to the host vehicle in the coordinate-converted images become continuous.
- however, the far-side edges of the coordinate-converted images may still be misaligned.
- therefore, while maintaining the continuous state of the near-side edges, the bird's-eye view image 205 is further processed so that the far-side edges of the coordinate-converted images are also continuous at the joint. As a result, both the edge far from the vehicle and the edge on the near side are continuous, making it easier for the driver to recognize the three-dimensional object and suppressing the loss of the sense of distance.
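- A schematic way to express this two-step processing for an airborne object is to translate the whole object region so that the near-side edges meet at the joint, and then blend in the remaining far-side offset so that the far-side edges meet as well. The sketch below works on object points only and is purely illustrative; the data layout and the linear blending are assumptions, not the disclosed algorithm.

```python
import numpy as np

def process_airborne_object(obj_points: np.ndarray,
                            near_gap: np.ndarray,
                            far_gap: np.ndarray) -> np.ndarray:
    """Two-step adjustment of one object region in a coordinate-converted image.

    obj_points: (N, 2) points of the object, y increasing away from the vehicle.
    near_gap:   (2,) offset that makes the near-side edge continuous at the joint.
    far_gap:    (2,) offset still left on the far-side edge after the first step.
    """
    # Step 1: translate the whole object so its near-side edge meets the
    # neighbouring image's near-side edge at the joint.
    pts = obj_points + np.asarray(near_gap, dtype=float)

    # Step 2: keep the near side fixed and blend in the remaining far-side
    # offset in proportion to the distance from the near edge, so the far-side
    # edges also meet while the near-side continuity is preserved.
    y = pts[:, 1]
    weight = (y - y.min()) / max(float(y.max() - y.min()), 1e-6)
    return pts + np.outer(weight, np.asarray(far_gap, dtype=float))
```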
- when the edges at the joints a to d belong to a three-dimensional object, the edges on the side close to the host vehicle in the coordinate-converted images are not continuous at the joints a to d, and at least one of those near-side edges has a refraction point, it is determined that the three-dimensional object is a mixture of a part existing on the ground and a part existing in the air. Here, the fact that the near-side edges are not continuous at the joint indicates that the near-side edge includes at least a portion that does not exist on the ground; since the coordinates are converted using the ground as the reference plane, there should be no deviation for anything that exists on the ground.
- a deviation at the joint therefore means that the edge contains a part that exists in the air.
- in addition, when the near-side edge has a refraction point, the edge may be composed of two or more objects, such as a vehicle tire and a vehicle body. Such a tall three-dimensional object is highly likely to be a mixture of a part existing on the ground and a part existing in the air. Therefore, in the above case, it can be determined that the three-dimensional object is a mixture of a three-dimensional object existing on the ground and a three-dimensional object existing in the air.
- in this case, the edges on the side close to the host vehicle in the coordinate-converted images are compared, and the three-dimensional object is processed so that, of the compared edges, the edge farther from the host vehicle becomes continuous with the nearer edge, while the object is not allowed to extend beyond the line connecting its contact points with the ground. As a result, the near-side edges of the coordinate-converted images become continuous. Since the object does not extend beyond that line, a three-dimensional object in which a part existing on the ground and a part existing in the air are mixed is not displayed in the overhead image 205 on the host-vehicle side of the line connecting its ground contact points. Even in this case, the far-side edges of the coordinate-converted images may still be misaligned.
- therefore, the overhead image 205 is further processed so that the far-side edges of the three-dimensional object in the coordinate-converted images 201 to 204 are continuous at the joints a to d.
- as a result, both the far-side and near-side edges are continuous, and the driver can easily recognize the three-dimensional object.
- in addition, the three-dimensional object is prevented from being represented on the host-vehicle side of the line connecting its ground contact points, and since both the near-side and far-side edges are continuous, the driver can easily recognize the three-dimensional object and the loss of the sense of distance can be suppressed.
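- The constraint that the mixed object must not be drawn on the host-vehicle side of the line connecting its ground contact points can be pictured as a simple clipping of the adjusted object points. The sketch below assumes the contact line is approximated by a straight line and that larger y means farther from the host vehicle; both are assumptions made only for this illustration.

```python
import numpy as np

def clip_to_contact_line(obj_points: np.ndarray, contact_line_y: float) -> np.ndarray:
    """Prevent the adjusted object from crossing the ground-contact line.

    obj_points:     (N, 2) points of the object after edge alignment,
                    with y increasing away from the host vehicle.
    contact_line_y: y coordinate of the line connecting the ground contact points.
    Returns the points with vehicle-side coordinates clamped to that line, so the
    object is never drawn on the host-vehicle side of the line.
    """
    pts = obj_points.astype(float).copy()
    pts[:, 1] = np.maximum(pts[:, 1], contact_line_y)
    return pts
```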
- at least one of shift processing, thinning-out processing, and compression processing is executed, and the portion of the overhead image 205 where pixel information is lost as a result is filled with a predetermined color. The predetermined color is preferably a color that does not normally exist on the ground, such as red, so that the filled portion makes it apparent that the image has been processed.
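- A minimal sketch of this fill step is shown below, assuming the processing leaves behind a boolean mask of pixels whose information was lost; the mask representation and the BGR color ordering are assumptions, not part of the disclosure.

```python
import numpy as np

def fill_lost_pixels(overhead: np.ndarray,
                     lost_mask: np.ndarray,
                     fill_color=(0, 0, 255)) -> np.ndarray:
    """Fill pixels whose information was lost with a conspicuous color.

    overhead:   (H, W, 3) bird's-eye view image (BGR ordering assumed, so
                (0, 0, 255) is red).
    lost_mask:  (H, W) boolean array, True where pixel information was lost by
                the shift / thinning-out / compression processing.
    fill_color: a color that does not normally appear on the road surface.
    """
    out = overhead.copy()
    out[lost_mask] = fill_color
    return out
```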
- when the three-dimensional object reaches the end of the overhead image 205 in any of the coordinate-converted images 201 to 204, the three-dimensional objects in the other coordinate-converted images 201 to 204 are processed so as to also reach the end of the overhead image 205. Accordingly, by performing the processing that makes the far-side edges continuous, a three-dimensional object straddling the joints a to d can be displayed so as to be easily recognized as the same object.
- furthermore, the thickness of the three-dimensional object is obtained, and according to the obtained thickness, the distance between the near-side edge and the far-side edge is adjusted for each coordinate-converted image so that the far-side and side edges are continuous at the joints a to d. By adjusting the distance between the near-side and far-side edges of the coordinate-converted images 201 to 204 according to the thickness of the three-dimensional object in this way, the object can be represented accurately and displayed as if it were actually viewed from above, so the three-dimensional object can be presented to the driver in an easily understandable manner. Furthermore, since the far-side edges are continuous at the joint, the gap between the edges can be eliminated and the loss of the sense of distance can be suppressed.
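- One simple reading of this adjustment is to place the far-side edge at an offset from the near-side edge equal to the object's thickness expressed in bird's-eye view pixels. The sketch below only illustrates that idea; the pixel scale and the assumption that the near-side edge is already correct are hypothetical.

```python
import numpy as np

def place_far_edge_by_thickness(near_edge: np.ndarray,
                                thickness_m: float,
                                meters_per_pixel: float) -> np.ndarray:
    """Derive the far-side edge from the near-side edge and the object thickness.

    near_edge:        (N, 2) points of the near-side edge in the bird's-eye view,
                      with y increasing away from the host vehicle.
    thickness_m:      estimated thickness of the three-dimensional object in metres.
    meters_per_pixel: scale of the bird's-eye view image.
    """
    offset_px = thickness_m / meters_per_pixel
    far_edge = near_edge.astype(float).copy()
    far_edge[:, 1] += offset_px   # shift away from the vehicle by the thickness
    return far_edge
```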
- the configuration of the vehicle periphery image providing device 2 according to the second embodiment is the same as that of the first embodiment, but the processing content is partially different.
- differences from the first embodiment will be described.
- the vehicle periphery image providing device 2 according to the second embodiment differs from that of the first embodiment in the processing of step ST3 shown in FIG. Further, the vehicle periphery image providing device 2 according to the second embodiment performs edge detection on the entire bird's-eye view image 205, and thus differs from the first embodiment, in which edge detection is performed on only a part of the bird's-eye view image 205.
- FIG. 17 is a diagram for explaining the processing of the vehicle periphery image providing device 2 according to the second embodiment, namely the processing that is executed in place of the processing of step ST3 shown in FIG.
- the plurality of camera modules 10 have their imaging regions overlapping each other, and the image conversion unit 53 discards the overlapping data when the overhead image 205 is generated.
- the overhead image 205 shows a curbstone 303. Since the curb 303 straddles the joint a, the curb 303 also appears in the front image 101 and the right side image 103, which are the images before coordinate conversion of the coordinate-converted images 201 and 203 constituting the joint a.
- the image conversion unit 53 coordinate-converts the whole of the front image 101, but a part of it is discarded. Therefore, a part of the coordinate-converted curb 303 shown in the front image 101 (the part of 303 shown in FIG. 17) is discarded. Similarly, a part of the coordinate-converted curb 303 shown in the right side image 103 (the part of 303 shown in FIG. 17) is also discarded.
- the discarded portion corresponds to the portion where the front image 101 and the right side image 103 overlap. That is, the same part of the scene appears in both the front image 101 and the right side image 103, and the overlapping part is discarded so that the overhead image 205 is not generated by coordinate-converting the same part twice.
- the image detection unit 54 compares the luminance or color information of the three-dimensional object existing in this overlapping portion and determines whether or not it is the same object. Because the same part of the scene is imaged in both images, comparing the luminance or color information of that part allows the same-object determination to be made more accurately.
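- As a rough sketch of this check, the pixels of the overlapping field of view can be sampled from both original camera images and compared, for example by correlating their brightness histograms. The helper below assumes the overlap regions have already been cropped from each image; the names and the threshold are illustrative only.

```python
import numpy as np

def same_object_in_overlap(patch_front: np.ndarray,
                           patch_side: np.ndarray,
                           min_correlation: float = 0.8) -> bool:
    """Compare the overlapping portions of two camera images.

    patch_front, patch_side: grayscale patches of the overlapping field of view,
    cropped from the images before coordinate conversion (e.g. 101 and 103).
    """
    hist_f, _ = np.histogram(patch_front, bins=32, range=(0, 255), density=True)
    hist_s, _ = np.histogram(patch_side, bins=32, range=(0, 255), density=True)
    # Normalized correlation between the two brightness histograms.
    corr = np.corrcoef(hist_f, hist_s)[0, 1]
    if np.isnan(corr):
        return False
    return bool(corr >= min_correlation)
```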
- FIG. 18 is a first diagram illustrating an edge detection method of the vehicle surrounding image providing apparatus 2 according to the second embodiment
- FIG. 19 is a second diagram illustrating the edge detection method of the vehicle periphery image providing apparatus 2 according to the second embodiment.
- an area within a certain distance from the host vehicle is a first area 401 and an area outside the first area 401 is a second area 402.
- when the speed of the host vehicle is less than a predetermined speed, the image detection unit 54 detects edges in the first region 401 and does not detect edges in the second region 402.
- when the speed of the host vehicle is equal to or higher than the predetermined speed, the image detection unit 54 detects edges in the second region 402 and does not detect edges in the first region 401.
- in other words, the edge detection function of the image detection unit 54 performs edge detection for locations farther from the host vehicle when the speed of the host vehicle is equal to or higher than the predetermined speed than when the speed of the host vehicle is lower than the predetermined speed.
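- A minimal sketch of this speed-dependent switching is given below, assuming the first and second regions are available as boolean masks over the bird's-eye view image; the mask representation and the threshold value are assumptions.

```python
import numpy as np

def select_edge_detection_mask(speed_kmh: float,
                               first_region: np.ndarray,
                               second_region: np.ndarray,
                               speed_threshold_kmh: float = 20.0) -> np.ndarray:
    """Choose where to run edge detection depending on the vehicle speed.

    first_region:  (H, W) boolean mask of the area within a certain distance of
                   the host vehicle (region 401).
    second_region: (H, W) boolean mask of the area outside it (region 402).
    Below the threshold the nearby region is used; at or above it the farther
    region is used, so a faster vehicle looks farther from itself.
    """
    if speed_kmh < speed_threshold_kmh:
        return first_region
    return second_region
```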
- the size of the area may also be made variable, without being limited to the case where fixed areas such as the first and second regions 401 and 402 are set in advance.
- alternatively, an edge detection area 403 may be set on the traveling-direction side of the host vehicle (in particular, the region through which the vehicle body will pass), edge detection may be performed on this area 403, and edge detection may be omitted in the other areas. Thereby, edge detection can be performed for a three-dimensional object that may come into contact with the host vehicle.
- further, when the vehicle speed is low, the far side of the edge detection region 403 may be cut off, and when the vehicle speed is high, the region may be extended so that edge detection is performed for farther objects.
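- A sketch combining the traveling-direction region with this speed-dependent extension might build the mask from the area the vehicle body will pass through and scale its length with speed. Everything in the snippet (pixel scale, look-ahead rule, straight-path assumption) is illustrative only.

```python
import numpy as np

def traveling_direction_mask(image_shape: tuple,
                             vehicle_cols: slice,
                             vehicle_front_row: int,
                             speed_kmh: float,
                             meters_per_pixel: float = 0.05,
                             seconds_lookahead: float = 2.0) -> np.ndarray:
    """Build an edge-detection mask (region 403) ahead of the host vehicle.

    image_shape:       (H, W) of the bird's-eye view image; smaller row indices
                       are farther ahead of the vehicle (straight path assumed).
    vehicle_cols:      columns covered by the vehicle body.
    vehicle_front_row: row of the front end of the vehicle in the image.
    The mask reaches ahead by the distance covered in `seconds_lookahead`, so a
    slow vehicle cuts off the far side and a fast vehicle looks farther ahead.
    """
    h, w = image_shape
    lookahead_m = (speed_kmh / 3.6) * seconds_lookahead
    lookahead_px = int(lookahead_m / meters_per_pixel)
    top_row = max(vehicle_front_row - lookahead_px, 0)

    mask = np.zeros((h, w), dtype=bool)
    mask[top_row:vehicle_front_row, vehicle_cols] = True
    return mask
```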
- as in the first embodiment, the type of a three-dimensional object around the vehicle can be determined. It can be determined that the object rises from the ground, and the loss of the sense of distance can be suppressed. It can also be determined that the object exists in the air, or that it is a mixture of a part existing on the ground and a part existing in the air. In addition, the driver can be shown that the image has been processed, and a three-dimensional object straddling the joints a to d can be displayed so as to be easily recognized as the same object. Furthermore, the three-dimensional object in the displayed bird's-eye view image can be easily brought to the driver's attention, and the driver can be audibly notified of the presence or absence of a three-dimensional object.
- the images before coordinate conversion of the coordinate-converted images constituting the joints a to d are obtained, and the type of the three-dimensional object is determined using them.
- the imaging areas of the plurality of camera modules 10 overlap one another, and the image conversion unit 53 discards the overlapping data when the overhead image 205 is generated.
- in this overlapping portion, each camera module 10 images the same part of the scene as another camera module 10. Therefore, by comparing the luminance or color information of the three-dimensional object that exists in the portion discarded when the overhead image 205 is generated, the luminance or color information of the same part is compared, and it is possible to determine more accurately whether or not the object is the same object.
- when the speed of the host vehicle is equal to or higher than a predetermined speed, edge detection is performed on a region of the overhead image farther from the host vehicle than when the speed of the host vehicle is less than the predetermined speed.
- when the speed of the host vehicle is high, the driver needs to visually check locations relatively far from the host vehicle.
- when the speed of the host vehicle is low, the driver needs to visually check the vicinity of the host vehicle.
- therefore, edge detection can be performed on the three-dimensional objects that the driver should visually recognize. Furthermore, the area for edge detection can be limited, and the processing load can be reduced.
- since edge detection is performed on the area on the traveling-direction side of the host vehicle in the bird's-eye view image, edge detection can be performed on three-dimensional objects that may come into contact with the host vehicle. Furthermore, the area where edge detection is performed can be limited, and the processing load can be reduced.
- the present invention can be applied to a vehicle periphery image providing apparatus that provides an image of the periphery of the vehicle to the driver.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mechanical Engineering (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
Abstract
Description
Claims
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN2007800166895A CN101438590B (zh) | 2006-05-09 | 2007-04-25 | 车辆周围图像提供装置和车辆周围图像提供方法 |
| US12/298,837 US8243994B2 (en) | 2006-05-09 | 2007-04-25 | Vehicle circumferential image providing device and vehicle circumferential image providing method |
| JP2008514434A JP4956799B2 (ja) | 2006-05-09 | 2007-04-25 | 車両周辺画像提供装置及び車両周辺画像提供方法 |
| EP07742396.0A EP2018066B1 (en) | 2006-05-09 | 2007-04-25 | Vehicle circumferential image providing device and vehicle circumferential image providing method |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2006130269 | 2006-05-09 | ||
| JP2006-130269 | 2006-05-09 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2007129582A1 true WO2007129582A1 (ja) | 2007-11-15 |
Family
ID=38667690
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2007/058961 Ceased WO2007129582A1 (ja) | 2006-05-09 | 2007-04-25 | 車両周辺画像提供装置及び車両周辺画像提供方法 |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US8243994B2 (ja) |
| EP (1) | EP2018066B1 (ja) |
| JP (1) | JP4956799B2 (ja) |
| CN (1) | CN101438590B (ja) |
| WO (1) | WO2007129582A1 (ja) |
Families Citing this family (30)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP4557041B2 (ja) * | 2008-04-18 | 2010-10-06 | 株式会社デンソー | 車両用画像処理装置 |
| DE102009029439A1 (de) * | 2009-09-14 | 2011-03-24 | Robert Bosch Gmbh | Verfahren und Vorrichtung zur Darstellung von Hindernissen in einem Einparkhilfesystem von Kraftfahrzeugen |
| EP2372637B1 (en) * | 2010-03-02 | 2013-07-03 | Autoliv Development AB | A driver assistance system and method for a motor vehicle |
| DE102010010912A1 (de) * | 2010-03-10 | 2010-12-02 | Daimler Ag | Fahrerassistenzvorrichtung mit optischer Darstellung erfasster Objekte |
| JP5479956B2 (ja) | 2010-03-10 | 2014-04-23 | クラリオン株式会社 | 車両用周囲監視装置 |
| JP2011205513A (ja) * | 2010-03-26 | 2011-10-13 | Aisin Seiki Co Ltd | 車両周辺監視装置 |
| CN102653260B (zh) * | 2012-05-09 | 2014-08-13 | 邝君 | 汽车定点定位引导系统 |
| WO2014019132A1 (en) | 2012-07-31 | 2014-02-06 | Harman International Industries, Incorporated | System and method for detecting obstacles using a single camera |
| US9150156B2 (en) | 2012-08-30 | 2015-10-06 | Nissan North America, Inc. | Vehicle mirror assembly |
| US9288446B2 (en) | 2012-09-10 | 2016-03-15 | Nissan North America, Inc. | Vehicle video system |
| US9057833B2 (en) | 2012-09-28 | 2015-06-16 | Nissan North America, Inc. | Vehicle mirror assembly |
| CN104469135B (zh) * | 2013-09-18 | 2017-11-28 | 株式会社理光 | 图像处理系统 |
| JP6167824B2 (ja) * | 2013-10-04 | 2017-07-26 | アイシン精機株式会社 | 駐車支援装置 |
| JP6307895B2 (ja) * | 2014-01-23 | 2018-04-11 | トヨタ自動車株式会社 | 車両用周辺監視装置 |
| JP6303090B2 (ja) * | 2014-03-24 | 2018-04-04 | アルパイン株式会社 | 画像処理装置および画像処理プログラム |
| US9403491B2 (en) | 2014-08-28 | 2016-08-02 | Nissan North America, Inc. | Vehicle camera assembly |
| US9834141B2 (en) | 2014-10-28 | 2017-12-05 | Nissan North America, Inc. | Vehicle object detection system |
| US9880253B2 (en) | 2014-10-28 | 2018-01-30 | Nissan North America, Inc. | Vehicle object monitoring system |
| US9725040B2 (en) | 2014-10-28 | 2017-08-08 | Nissan North America, Inc. | Vehicle object detection system |
| JP6485160B2 (ja) * | 2015-03-27 | 2019-03-20 | セイコーエプソン株式会社 | インタラクティブプロジェクター、及び、インタラクティブプロジェクターの制御方法 |
| JP2016225865A (ja) * | 2015-06-01 | 2016-12-28 | 東芝アルパイン・オートモティブテクノロジー株式会社 | 俯瞰画像生成装置 |
| KR101709009B1 (ko) * | 2015-08-18 | 2017-02-21 | 주식회사 와이즈오토모티브 | 어라운드 뷰 왜곡 보정 시스템 및 방법 |
| US20170132476A1 (en) * | 2015-11-08 | 2017-05-11 | Otobrite Electronics Inc. | Vehicle Imaging System |
| EP3293717B1 (en) * | 2016-09-08 | 2021-11-10 | KNORR-BREMSE Systeme für Nutzfahrzeuge GmbH | An electronically controlled braking system |
| DE112017006840B4 (de) * | 2017-01-16 | 2023-11-02 | Fujitsu Limited | Informationsverarbeitungsprogramm, Informationsverarbeitungsverfahren und Informationsverarbeitungsvorrichtung |
| US10482626B2 (en) * | 2018-01-08 | 2019-11-19 | Mediatek Inc. | Around view monitoring systems for vehicle and calibration methods for calibrating image capture devices of an around view monitoring system using the same |
| CN109163707B (zh) | 2018-09-06 | 2019-11-26 | 百度在线网络技术(北京)有限公司 | 障碍物感知方法、系统、计算机设备、计算机存储介质 |
| US11748920B2 (en) * | 2019-08-02 | 2023-09-05 | Nissan Motor Co., Ltd. | Image processing device, and image processing method |
| CN110576796A (zh) * | 2019-08-28 | 2019-12-17 | 浙江合众新能源汽车有限公司 | 一种标清360全景系统ui布局方法 |
| TWI826119B (zh) * | 2022-11-15 | 2023-12-11 | 瑞昱半導體股份有限公司 | 影像處理方法、系統以及非暫態電腦可讀取記錄媒體 |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2259220A3 (en) * | 1998-07-31 | 2012-09-26 | Panasonic Corporation | Method and apparatus for displaying image |
| JP4156214B2 (ja) * | 2001-06-13 | 2008-09-24 | 株式会社デンソー | 車両周辺画像処理装置及び記録媒体 |
| ATE311725T1 (de) * | 2001-09-07 | 2005-12-15 | Matsushita Electric Industrial Co Ltd | Vorrichtung zum anzeigen der umgebung eines fahrzeuges und system zur bildbereitstellung |
| JP4744823B2 (ja) * | 2004-08-05 | 2011-08-10 | 株式会社東芝 | 周辺監視装置および俯瞰画像表示方法 |
| JP4639753B2 (ja) * | 2004-10-25 | 2011-02-23 | 日産自動車株式会社 | 運転支援装置 |
| JP4934308B2 (ja) * | 2005-10-17 | 2012-05-16 | 三洋電機株式会社 | 運転支援システム |
| JP4832321B2 (ja) * | 2007-01-26 | 2011-12-07 | 三洋電機株式会社 | カメラ姿勢推定装置、車両、およびカメラ姿勢推定方法 |
-
2007
- 2007-04-25 EP EP07742396.0A patent/EP2018066B1/en active Active
- 2007-04-25 JP JP2008514434A patent/JP4956799B2/ja active Active
- 2007-04-25 US US12/298,837 patent/US8243994B2/en active Active
- 2007-04-25 CN CN2007800166895A patent/CN101438590B/zh active Active
- 2007-04-25 WO PCT/JP2007/058961 patent/WO2007129582A1/ja not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH07236113A (ja) * | 1994-02-24 | 1995-09-05 | Canon Inc | 画像記録再生方法および画像記録再生装置 |
| JPH0973545A (ja) * | 1995-09-07 | 1997-03-18 | Fujitsu Ten Ltd | 白線認識装置 |
| JP2001186512A (ja) * | 1999-12-27 | 2001-07-06 | Toshiba Corp | X線検査装置 |
| JP2001236506A (ja) * | 2000-02-22 | 2001-08-31 | Nec Corp | 白線検出方法および白線検出装置 |
| JP2003169323A (ja) | 2001-11-29 | 2003-06-13 | Clarion Co Ltd | 車両周囲監視装置 |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2018066A4 |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009188635A (ja) * | 2008-02-05 | 2009-08-20 | Nissan Motor Co Ltd | 車両周辺画像処理装置及び車両周辺状況提示方法 |
| JP2010221980A (ja) * | 2009-03-25 | 2010-10-07 | Aisin Seiki Co Ltd | 車両用周辺監視装置 |
| US8866905B2 (en) | 2009-03-25 | 2014-10-21 | Aisin Seiki Kabushiki Kaisha | Surroundings monitoring device for a vehicle |
| JP2011109286A (ja) * | 2009-11-16 | 2011-06-02 | Alpine Electronics Inc | 車両周辺監視装置および車両周辺監視方法 |
| JP2015119225A (ja) * | 2013-12-16 | 2015-06-25 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
| US10140775B2 (en) | 2013-12-16 | 2018-11-27 | Sony Corporation | Image processing apparatus, image processing method, and program |
| WO2015098107A1 (ja) * | 2013-12-24 | 2015-07-02 | 京セラ株式会社 | 画像処理装置、警報装置、および画像処理方法 |
| JPWO2015098107A1 (ja) * | 2013-12-24 | 2017-03-23 | 京セラ株式会社 | 画像処理装置、警報装置、および画像処理方法 |
| US10102436B2 (en) | 2013-12-24 | 2018-10-16 | Kyocera Corporation | Image processing device, warning device and method for processing image |
| JP2024028606A (ja) * | 2017-08-08 | 2024-03-04 | 住友建機株式会社 | 道路機械 |
| JP7821208B2 (ja) | 2017-08-08 | 2026-02-26 | 住友建機株式会社 | 道路機械 |
| KR102343052B1 (ko) * | 2021-06-17 | 2021-12-24 | 주식회사 인피닉 | 3d 데이터를 기반으로 2d 이미지에서의 객체의 이동 경로를 식별하는 방법 및 이를 실행하기 위하여 기록매체에 기록된 컴퓨터 프로그램 |
Also Published As
| Publication number | Publication date |
|---|---|
| US20090257659A1 (en) | 2009-10-15 |
| EP2018066B1 (en) | 2019-10-02 |
| US8243994B2 (en) | 2012-08-14 |
| EP2018066A4 (en) | 2017-04-19 |
| CN101438590A (zh) | 2009-05-20 |
| CN101438590B (zh) | 2011-07-13 |
| JPWO2007129582A1 (ja) | 2009-09-17 |
| JP4956799B2 (ja) | 2012-06-20 |
| EP2018066A1 (en) | 2009-01-21 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2007129582A1 (ja) | 車両周辺画像提供装置及び車両周辺画像提供方法 | |
| JP5068779B2 (ja) | 車両周囲俯瞰画像表示装置及び方法 | |
| KR101519209B1 (ko) | Avm 영상 제공 장치 및 방법 | |
| EP2437494B1 (en) | Device for monitoring area around vehicle | |
| US9418556B2 (en) | Apparatus and method for displaying a blind spot | |
| JP6699427B2 (ja) | 車両用表示装置および車両用表示方法 | |
| JP2012147149A (ja) | 画像生成装置 | |
| WO2012172923A1 (ja) | 車両周辺監視装置 | |
| JP2012040883A (ja) | 車両周囲画像生成装置 | |
| JP5500392B2 (ja) | 車両周辺監視装置 | |
| KR102177878B1 (ko) | 영상 처리 장치 및 방법 | |
| WO2014054753A1 (ja) | 画像処理装置及び車両前方監視装置 | |
| JP2012136206A (ja) | 駐車制御システム、及び、駐車制御方法 | |
| JPWO2014054752A1 (ja) | 画像処理装置及び車両前方監視装置 | |
| JP2000293693A (ja) | 障害物検出方法および装置 | |
| JP2008102620A (ja) | 画像処理装置 | |
| JPWO2018146997A1 (ja) | 立体物検出装置 | |
| JP5906696B2 (ja) | 車両周辺撮影装置および車両周辺画像の処理方法 | |
| JP5235843B2 (ja) | 車両周辺監視装置および車両周辺監視方法 | |
| JP4070450B2 (ja) | 前方車両認識装置及び認識方法 | |
| JP5891751B2 (ja) | 画像間差分装置および画像間差分方法 | |
| JP2007249814A (ja) | 画像処理装置及び画像処理プログラム | |
| JP6718295B2 (ja) | 周辺監視装置及び周辺監視方法 | |
| JP2017117357A (ja) | 立体物検知装置 | |
| JP5317655B2 (ja) | 車両運転支援装置および車両運転支援方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07742396 Country of ref document: EP Kind code of ref document: A1 |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2008514434 Country of ref document: JP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 12298837 Country of ref document: US |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2007742396 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 200780016689.5 Country of ref document: CN |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |