More features, less problems? Driver Alertness – TK Holdings (T 509/18)
Decision T 509/18 Driver Alertness – TK Holdings of 3 March 2020 relates to application EP 2688764 A2 (priority date 25.03.2011).


The application relates to a driver monitoring system for a car. The system comprises a camera filming the driver’s face and a computer configured to determine, based on the video feed, whether the driver is alert (focused on the road) or non-alert (e.g. distracted).
Claim 1 according to the third auxiliary request in appeal proceedings reads as follows:
“A driver alertness detection system, comprising:
an imaging unit (1110) configured to image an area in a vehicle compartment of a vehicle where a driver’s head is located, and
an image processing unit (1120) configured to receive the image from the imaging unit (1110), and to determine positions of the driver’s head and eyes,
wherein:
the driver alertness detection system is configured to use a classification training process to register the driver’s head position and eye vector for the A-pillars, instrument panel, outside mirrors, rear-view mirror, windshield, passenger floor, center console, radio and climate controls within the vehicle, and configured to save a corresponding matrix of inter-point metrics to be used for a look-up-table classification of the driver’s attention state, the inter-point metrics being geometric relationships between detected control points and comprising a set of vectors connecting any combination of control points including pupils, nostrils and corners of the mouth, wherein the driver alertness detection system further comprises:
a warning unit (1130) configured to determine, based on the determined position of the driver’s head and eyes as output by the image processing unit (1120), whether the driver is in an alert state or a non-alert state, and to output a warning to the driver when the driver is determined to be in the non-alert state,
wherein the image processing unit (1120) determines that the driver is in the non-alert state when the determined position of the driver’s head is determined not to be within a predetermined driver head area region within the vehicle compartment or when the driver’s eyes are determined to be angled to an extent so as not to be viewing an area in front of the vehicle, and, wherein, based on the driver’s attention state according to the look-up-table classification, an appropriate warning is provided to the driver, so that, if the driver is detected to be in an attention partially diverted state, a mild warning is provided to the driver, and, when the driver is detected to be in an attention fully diverted state, a loud warning is provided to the driver.”
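Stripped of the claim language, the arrangement boils down to an imaging unit, an image processing unit and a warning unit, with a graded response depending on how far the driver’s attention has drifted. The following is a minimal, purely hypothetical sketch of the decision logic in the final wherein-clause; the permissible head region, the gaze-angle limit and all names and values are our own illustrative assumptions and are not taken from the application.

```python
# Hypothetical sketch of the decision logic recited at the end of claim 1.
# The permissible head region, the gaze-angle limit and the state labels are
# illustrative assumptions, not values disclosed in EP 2688764 A2.
from dataclasses import dataclass

# Assumed permissible head region: (x, y, z) ranges in vehicle coordinates.
HEAD_AREA = ((-0.3, 0.3), (-0.2, 0.2), (0.9, 1.4))
# Assumed maximum gaze angle (degrees) still counted as "viewing the road".
MAX_GAZE_ANGLE_DEG = 25.0

@dataclass
class DriverObservation:
    head_position: tuple   # (x, y, z) head centre in vehicle coordinates
    gaze_angle_deg: float  # eye vector angle relative to straight ahead
    attention_state: str   # result of the look-up-table classification

def is_non_alert(obs: DriverObservation) -> bool:
    """Non-alert if the head leaves the predetermined region or the eyes
    are angled away from the area in front of the vehicle."""
    head_outside = any(not (lo <= c <= hi)
                       for c, (lo, hi) in zip(obs.head_position, HEAD_AREA))
    gaze_away = abs(obs.gaze_angle_deg) > MAX_GAZE_ANGLE_DEG
    return head_outside or gaze_away

def warning_for(obs: DriverObservation) -> str:
    """Mild warning for a partially diverted state, loud warning for a
    fully diverted state, no warning otherwise."""
    if not is_non_alert(obs):
        return "no warning"
    if obs.attention_state == "fully_diverted":
        return "loud warning"
    return "mild warning"
```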
The Examining Division refused the application for lack of novelty over each of US 6927694 B1 and the article Ji Qiang et al.: “Real-Time Eye, Gaze and Face Pose Tracking for Monitoring Driver Vigilance”, Real-Time Imaging 8, 357 (2002).
The applicant appealed. The Board dismissed the appeal, holding that the claimed subject-matter was not new and that the invention was not sufficiently disclosed. In particular, several features of the third auxiliary request, notably those concerning the matrix of inter-point metrics and its use for a look-up-table classification, were not sufficiently disclosed. According to the Board, the application did not disclose:
- how the look-up table classification is to be obtained,
- how it can be based on the matrix, and
- how this can make it possible to determine the driver’s attention state.
The Board concluded that the application lacked:
- any examples for a matrix as claimed,
- an algorithm suitable for comparing the matrix to the camera images (which requires an instruction on what information is to be derived from the images), and
- an algorithm to which the matrix can be supplied as an input and that can process the matrix to obtain a classification of the attention state of the driver.
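For illustration only, here is a minimal sketch of what a matrix of inter-point metrics and a simple nearest-neighbour look-up-table classification could look like. It is our own construction under assumed control points, training targets, coordinate values and distance measure; nothing of the sort is spelled out in the application, which is precisely the Board’s point.

```python
# Purely hypothetical sketch of a "matrix of inter-point metrics" and a
# look-up-table classification of the driver's attention state.  Control
# point names, training targets, coordinate values and the distance measure
# are illustrative assumptions, not taken from EP 2688764 A2.
import numpy as np

CONTROL_POINTS = ["left_pupil", "right_pupil", "left_nostril",
                  "right_nostril", "left_mouth_corner", "right_mouth_corner"]

def inter_point_matrix(points: dict) -> np.ndarray:
    """Entry (i, j) is the 2-D vector from control point i to control point j."""
    coords = np.array([points[name] for name in CONTROL_POINTS], dtype=float)
    return coords[None, :, :] - coords[:, None, :]

# Hypothetical training phase: matrices registered while the driver looks at
# predetermined targets form the look-up table.
TRAINING_SAMPLES = {
    "windshield": {            # attention on the road
        "left_pupil": (310, 200), "right_pupil": (370, 200),
        "left_nostril": (330, 250), "right_nostril": (350, 250),
        "left_mouth_corner": (320, 290), "right_mouth_corner": (360, 290),
    },
    "center_console": {        # attention diverted into the cabin
        "left_pupil": (340, 215), "right_pupil": (398, 220),
        "left_nostril": (362, 262), "right_nostril": (382, 264),
        "left_mouth_corner": (352, 300), "right_mouth_corner": (392, 303),
    },
}
LOOKUP_TABLE = {target: inter_point_matrix(points)
                for target, points in TRAINING_SAMPLES.items()}

def classify_attention(observed_points: dict) -> str:
    """Return the registered target whose stored matrix is closest to the
    matrix computed from the currently detected control points."""
    observed = inter_point_matrix(observed_points)
    return min(LOOKUP_TABLE,
               key=lambda target: np.linalg.norm(observed - LOOKUP_TABLE[target]))
```

The sketch merely shows how much implementation detail (a concrete set of control points, a metric, a comparison rule) separates the claim wording from a working classification.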
The Board concluded that this applies a fortiori to the higher-ranking requests, which contained fewer features.
Comment
It appears odd that the Board decided on the third auxiliary request first and then dismissed the other requests (which contained fewer features) by stating that they must a fortiori not be sufficiently disclosed. This may be an appropriate approach for an inventive-step analysis, but not for sufficiency of disclosure. Rather, the Board should have dealt with all requests in the correct order, see Sec. III.I 2.2 of the Case Law Book. This holds in particular for sufficiency, where it is quite possible that a claim of a main request is sufficiently disclosed while the additional features of an auxiliary request might not be. That is, more features could cause more problems.
Referring to the Board’s reasoning, one can note that the application did describe a first detection module that localizes the head and determines its rotation, and a second detection module that determines control points and a matrix of inter-point metrics. However, the application lacks an example of such a matrix, so the Board may have had a point in concluding that this feature was not sufficiently disclosed.
The higher-ranking requests did not contain the feature of an inter-point matrix, but they already contained the feature of a classification training process. This process was described only by the sentence “A classification process is used to register a driver's head position and eye vector at several pre-determined points”, without further elaboration beyond some examples of such points. Therefore, the Board might have refused all requests already for lack of disclosure of this feature.
It is perhaps interesting to note that the EPO was less applicant-friendly in this case than the USPTO, where a parallel application led to the grant of US patent US9041789B2. The US examiner had cited the same prior-art document US 6927694 B1 (Smith et al.) but did not interpret its disclosure as novelty-destroying. Moreover, the USPTO did not see any enablement problems.
You can find Article 1 of our series here: https://maucherjenkins.com/sufficiency-of-disclosure-for-ai-inventions/