Research

Through our research in autonomous driving and advanced driver assistance systems, we want to make a considerable contribution to the development of future vehicles that are accident-free, efficient, and environmentally sound. Our goal is to establish intelligent driver assistance and car control technologies, right up to universally usable autonomous functions, that can easily be plugged into a modern, modular transportation system.

Find out more about our current research topics, publications and open thesis projects.

1. Computer Vision

We develop algorithms that allow our cars to interpret their environment using video cameras. Our autonomous cars will perform visual perception and cognition tasks, such as recognizing pedestrians, estimating their own velocity and position, or measuring their relative distance to static and dynamic objects.

Camera Calibration

Camera calibration algorithms estimate the camera parameters (focal length, principal point, and relative position) with which the camera projects world objects into images. We developed an online method that computes the camera's position relative to the ground from the optical flow in consecutive video frames. We assume that the camera is mounted in a car that moves forward and sideways on a flat area while acquiring the calibration images.
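
As a minimal illustration of the parameters being estimated, the following sketch projects a 3D point through an assumed pinhole model; the focal length, principal point, and camera height are placeholder values, not our calibrated results.

    import numpy as np

    # Pinhole camera model: intrinsics K (focal length f, principal point (cx, cy))
    # and extrinsics [R | t] map a world point into pixel coordinates.
    # All values below are illustrative; a real calibration estimates them from data.
    f, cx, cy = 800.0, 640.0, 360.0          # focal length and principal point in pixels
    K = np.array([[f, 0, cx],
                  [0, f, cy],
                  [0, 0, 1.0]])

    R = np.eye(3)                            # camera orientation relative to the ground (assumed)
    t = np.array([0.0, -1.2, 0.0])           # camera mounted 1.2 m above the ground (assumed)

    def project(X_world):
        """Project a 3D world point into the image plane."""
        X_cam = R @ X_world + t              # world -> camera coordinates
        u, v, w = K @ X_cam                  # camera -> homogeneous pixel coordinates
        return u / w, v / w                  # perspective division

    print(project(np.array([0.0, 0.0, 10.0])))  # a point 10 m ahead of the car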

Smart Cameras

Smart cameras run optimized computer vision algorithms on programmable devices before the images are delivered to our main computer. They are inspired by the human visual system, which performs basic image processing already in the retina. Our architecture is based on FPGA implementations to avoid the bottlenecks of computationally intensive low- and high-level methods, such as image scaling or depth and optical flow estimation.

Stereo Vision

Our stereo camera system estimates three-dimensional information from two or more views of the environment. The system resembles biological stereopsis, which produces the sensation of depth from the projection of the world onto two retinas. We use the stereo system to estimate the location of objects and their distances in the environment. These measurements can be applied to the analysis of traffic situations or to three-dimensional mapping.
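
A minimal sketch of block-matching stereo with OpenCV, assuming rectified input images; the file names, focal length, and baseline are illustrative placeholders, not our actual setup.

    import cv2
    import numpy as np

    # Block matching: the disparity between the left and right view encodes depth.
    left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)   # placeholder file names
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

    f_px, baseline_m = 800.0, 0.30           # assumed focal length (pixels) and camera baseline (m)
    valid = disparity > 0
    depth = np.zeros_like(disparity)
    depth[valid] = f_px * baseline_m / disparity[valid]     # Z = f * B / d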

Object Recognition

We are developing and applying pattern recognition methods for the detection of static and dynamic objects in the environment. These recognition systems are a necessary component of driver assistance systems and autonomous vehicles.

Lane detection is perhaps the best-known method for recognizing static objects in the automotive industry. We extended lane detection systems with statistical models that we use to improve our cars' GPS self-localization.

We are developing several modules that recognize more complex dynamic objects in image sequences in real time. We trained a car classifier that uses the camera parameters to reliably compute the relative distance of our autonomous vehicle to other cars. We are extending these classification models to detect and track people and to interpret their intentions in order to avoid dangerous situations.
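
The distance computation can be sketched with the pinhole model: an object of known real width W that appears w pixels wide lies at depth Z = f * W / w. The focal length and car width below are assumed values for illustration, not our classifier's parameters.

    # Relative distance from a detected car's bounding box via the pinhole model.
    # focal_px and car_width_m are illustrative defaults, not calibrated values.
    def distance_to_car(bbox_width_px, focal_px=800.0, car_width_m=1.8):
        return focal_px * car_width_m / bbox_width_px

    print(distance_to_car(90))   # a 90-pixel-wide car is roughly 16 m away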

Our cars can detect and interpret traffic lights. We are currently extending these classifiers to detect other traffic signs and moving objects. Our cars will thus be able to interpret complex traffic situations and derive optimal self-driving strategies.

2. Cognitive Navigation

We develop the technology for route planning and navigation in road traffic. The information from a digital road map must be combined with the data constantly received from environmental sensors (GPS, cameras, laser scanners). The combined data enables the vehicle to proceed from point to point on a collision-free path, maneuver through road traffic, and effectively imitate human behavior.

We develop the software for the controller and the behavior of the autonomous car. This includes the algorithms for path planning, intersection recognition, obstacle avoidance, steering control, and speed regulation. For navigation, the car uses the localization data from the GPS receiver, corrects it with data from the cameras and the laser scanners, and applies it to the map data, which is stored as a digital road network definition file. The goal is to develop a car that is capable of maneuvering in road traffic and reacting reliably to other road users and obstacles.
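
Route planning on such a road network reduces to a shortest-path search on a weighted graph. The sketch below uses Dijkstra's algorithm on a toy graph; the node names and edge lengths are invented for illustration.

    import heapq

    # Toy road graph: edge weights are segment lengths in meters (invented data).
    graph = {
        "A": [("B", 120.0), ("C", 300.0)],
        "B": [("C", 90.0), ("D", 400.0)],
        "C": [("D", 150.0)],
        "D": [],
    }

    def shortest_route(start, goal):
        """Dijkstra's algorithm: expand the cheapest frontier node first."""
        queue, best = [(0.0, start, [start])], {}
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return cost, path
            if node in best and best[node] <= cost:
                continue                      # already reached more cheaply
            best[node] = cost
            for nxt, w in graph[node]:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
        return None

    print(shortest_route("A", "D"))   # -> (360.0, ['A', 'B', 'C', 'D'])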

Controller

The controller comprises all the control engineering necessary to operate the car in an ever-changing environment. Steering, acceleration, braking, and parking must all be controlled to comply with the constraints of the planned route. To produce a driving experience that is as human-like as possible, the controller uses machine learning techniques to learn and reproduce human driving actions and reactions.
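
As a reference point, the sketch below shows the kind of classical feedback loop (a PID controller acting on the lateral offset) that such learned driving behavior is measured against; the gains and loop rate are illustrative, not our tuned values.

    # Minimal PID controller sketch; gains are invented for illustration.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral, self.prev_error = 0.0, 0.0

        def step(self, error, dt):
            """One control cycle: combine present, accumulated, and predicted error."""
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    steering = PID(kp=0.8, ki=0.05, kd=0.2)
    command = steering.step(error=0.4, dt=0.05)   # 0.4 m lateral offset, 20 Hz loop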

Digital Road Map

The digital representation of the road network is based on an RNDF (Route Network Definition File). The RNDF contains information about the locations of all checkpoints, highways, roads, intersections, and lanes. With this data a logical route can be planned.
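
A minimal sketch of reading waypoints from an RNDF-style file: it assumes waypoint records of the form "segment.lane.waypoint latitude longitude" and skips all other record types (checkpoints, exits, stop lines) for brevity.

    # Assumed record layout for illustration; the full RNDF format has many more fields.
    def read_waypoints(path):
        waypoints = {}
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3 and parts[0].count(".") == 2:
                    try:
                        seg, lane, wp = (int(x) for x in parts[0].split("."))
                        lat, lon = float(parts[1]), float(parts[2])
                    except ValueError:
                        continue              # not a waypoint record
                    waypoints[(seg, lane, wp)] = (lat, lon)
        return waypoints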

Maneuver Planning

The logical route taken by the car has to be adjusted continuously to the motion model of the car. This includes the car's maneuvers, such as braking, turning, and passing. The route must also respect traffic regulations, i.e., speed limits, traffic lights, and stop signs.
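
As a simple illustration of adjusting a route to the regulations, the sketch below clamps a desired cruising speed to each segment's speed limit and forces a stop at stop-sign waypoints; the route data is invented.

    # Invented route data: segment lengths, speed limits, and stop-sign flags.
    route = [
        {"length_m": 200, "limit_mps": 13.9, "stop_at_end": False},   # 50 km/h zone
        {"length_m": 150, "limit_mps": 8.3,  "stop_at_end": True},    # 30 km/h, stop sign
    ]

    def target_speeds(route, desired_mps):
        """Per segment: cruising speed capped by the limit, end speed 0 at a stop line."""
        profile = []
        for seg in route:
            cruise = min(desired_mps, seg["limit_mps"])
            end = 0.0 if seg["stop_at_end"] else cruise
            profile.append((cruise, end))
        return profile

    print(target_speeds(route, desired_mps=16.7))  # 60 km/h desired -> [(13.9, 13.9), (8.3, 0.0)]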

Simulation Environment

Real car tests are costly and time-consuming. Therefore, many driving functions are first tested in our simulated environment. A kinematic model of the car is deployed, which simulates the car's behavior in a virtual reality environment.
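
A kinematic bicycle model is a common choice for such a simulation; the sketch below assumes this model with an illustrative wheelbase and time step, and the actual model in our simulator may differ.

    import math

    # Kinematic bicycle model: the car is reduced to one front and one rear wheel.
    def step(x, y, heading, speed, steer_angle, wheelbase=2.7, dt=0.05):
        x += speed * math.cos(heading) * dt          # advance position along the heading
        y += speed * math.sin(heading) * dt
        heading += speed / wheelbase * math.tan(steer_angle) * dt  # turn rate from steering
        return x, y, heading

    state = (0.0, 0.0, 0.0)
    for _ in range(100):                             # 5 simulated seconds at 20 Hz
        state = step(*state, speed=5.0, steer_angle=0.1)
    print(state)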

3. 3D

We are concerned with capturing the car's immediate environment in three dimensions. To this end, we use LIDAR and RADAR technologies, which measure distance or speed from the signal's travel time. We also use laser scanning, in which a laser beam is distributed omnidirectionally by a rotating mirror. Rather than capturing only selected points, a complete spatial map can thus be created, like a virtual tactile sense.

Since true and instantaneous detection of the environment is crucial for our car, we use spatial sensors. We develop the installation of these sensors on the car, the analysis of the sensor data, and the integration of the results into our software framework.
The main challenge is that our algorithms must perform in real time while being 100% reliable: if the object detection fails for just a second, the car may crash. We therefore maintain a 3D map of our environment, which is updated every 40 milliseconds and uses the input of several independent sensors.

LIDAR

LIDAR (Light Detection And Ranging) is a method for measuring distance with light. A beam of infrared laser light (harmless to humans) is emitted; if it is reflected by an object, it returns to the sensor, and its time of flight is measured. Knowing the speed of light, the object's distance can then be computed.
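
The time-of-flight computation itself is one line: the light covers the distance twice, out and back, so the round trip is halved.

    # Distance from a measured round-trip time of the laser pulse.
    C = 299_792_458.0                # speed of light in m/s

    def distance_m(round_trip_s):
        return C * round_trip_s / 2.0

    print(distance_m(667e-9))        # a ~667 ns echo corresponds to ~100 m
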
Most LIDARs work as scanners, in which the laser beam is redirected by a rotating mirror or the emitter and receiver unit itself rotates. Because they are active sensors that provide their own illumination, they work independently of the lighting conditions, at night just as well as in bright daylight. LIDARs are excellent at determining the position and shape of an object, but as a relatively new technology they are rarely used in the automotive industry so far. The following LIDARs are mounted on our prototype cars:

- The Velodyne (TM) laser scanner, which we use for localization and obstacle detection. It delivers 1.6 million 3D points per second.
- The IBEO Lux (TM) laser scanner system consists of 6 individual sensors and a fusion box. It covers a range of 200 m around the vehicle.
- The IBEO Alasca (TM) laser scanner looks for obstacles ahead at distances of up to 200 m.
- The SICK LMS (TM) laser scanner observes the street curb and detects edges and lane markings.

RADAR

RADAR (Radio Detection And Ranging) uses the echoes of electromagnetic waves in the radio spectrum. Unlike with LIDAR, accurate positioning is not possible, because radio waves cannot be focused into a narrow beam as easily. The speed of an object can be determined using the Doppler effect. The method was used as early as World War II to detect aircraft. In the automotive industry, RADARs are commonly used for adaptive cruise control, lane change assistants, and emergency braking systems. In our car, the 'MadeInGermany', we have integrated the following RADARs:

- The SMS (TM) radar is a short-range radar operating at 24 GHz. We fuse the obstacles it detects with those detected by the Lux LIDARs.
- The Hella (TM) radar system was originally developed for the VW Phaeton; its special task is to observe the neighboring lanes.
- The TRW (TM) radar is our sensor with the longest range; it reliably detects other vehicles and is crucial for fast driving on highways.
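
The Doppler relation behind the speed measurement is v = f_d * c / (2 * f0), where f_d is the measured frequency shift and f0 the carrier frequency. A minimal sketch, using the 24 GHz carrier mentioned above:

    # Radial speed of a target from the Doppler shift of the radar echo.
    C = 299_792_458.0                 # speed of light in m/s
    F0 = 24e9                         # 24 GHz carrier of the short-range radar

    def radial_speed_mps(doppler_shift_hz):
        return doppler_shift_hz * C / (2 * F0)

    print(radial_speed_mps(4500.0))   # ~28 m/s (about 100 km/h) closing speed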

Obstacle Detection

The main task of the 3D sensors is obstacle detection. Based on a high-precision map, we extract the geometric road surfaces and match them with our sensor input. This "area of interest" is monitored in order to detect dynamic and static obstacles. The sequence of data processing is straightforward: first, the raw data is filtered and aggregated into clusters. Then all clusters are examined and features (accessible areas, obstacles, road markings) are extracted. Obstacles are tracked and updated with new input data in every time frame, and their movement is predicted by a Kalman filter. So-called "ghost" obstacles are also filtered out at this stage. From the list of tracked obstacles the correct driving behavior is derived.
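
A minimal constant-velocity Kalman filter for one tracked obstacle along one axis might look as follows; the noise parameters are assumed, and the 40 ms time step matches the map update rate mentioned above.

    import numpy as np

    dt = 0.04                                # 40 ms update cycle
    F = np.array([[1, dt], [0, 1]])          # state transition: position advances by v*dt
    H = np.array([[1, 0]])                   # we only measure position
    Q = np.eye(2) * 0.01                     # process noise (assumed)
    R = np.array([[0.25]])                   # measurement noise (assumed)

    x = np.array([[0.0], [0.0]])             # initial state: position, velocity
    P = np.eye(2)                            # initial state uncertainty

    def track(z):
        """One predict/update cycle with a new position measurement z (meters)."""
        global x, P
        x = F @ x                            # predict the state forward by dt
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x          # innovation: measurement minus prediction
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x.ravel()                     # filtered position and velocity

    for z in [0.5, 1.1, 1.4, 2.1]:
        print(track(z))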

Lane Detection and Localization

Besides the spatial information, the laser scanners also provide the intensity of the reflected beam. This data is comparable to the brightness values of a grayscale image, but lies in the infrared spectrum of the sensor's emitter. In contrast to a camera, LIDAR images are independent of the lighting conditions, since LIDARs are active sensors. On the other hand, cameras have a higher resolution and can be purchased at lower prices.

The intensity information is used to detect road markings. Based on this information we can determine our lateral offset and correct the position data provided by our GPS system. Furthermore, an intensity map of the road surface and its surroundings is generated, and features such as gullies and asphalt cracks are extracted. When traveling the same road again, the map features are matched with the live data input, and in this way the position of the car can be determined. Intensity histograms and 3D features (vertical edges) are also tracked and stored in a map in order to enhance the localization reliability. The long-term aim is to enable a positioning method independent of any GPS data.
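
A toy sketch of the lateral-offset step: threshold one scan line of intensities to find the bright lane markings, then compare the lane center with the sensor position. The intensity values, threshold, and 0.1 m cell resolution are invented for illustration.

    import numpy as np

    # One scan line of reflection intensities across the road (invented data).
    intensities = np.array([12, 14, 90, 95, 13, 11, 10, 12, 88, 92, 15, 13])
    cell_m = 0.1                                   # lateral resolution per cell (assumed)
    threshold = 60                                 # markings reflect far more strongly

    marks = np.flatnonzero(intensities > threshold)
    left, right = marks.min(), marks.max()         # outermost marking cells
    lane_center = (left + right) / 2.0
    car_cell = len(intensities) / 2.0              # the sensor sits at the scan center
    offset_m = (car_cell - lane_center) * cell_m
    print(f"lateral offset: {offset_m:+.2f} m")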