Interesting article in The Sunday Times about a BMW iX and the driver's claim that, while cruise control was enabled, the car accelerated suddenly and she had to drive it up a bank (flipping it) to avoid other road users. The quote from BMW is very telling, saying that responsibility "remains with the driver at all times" when using any ADAS. I am reminded of this piece from The Ojo-Yoshida Report discussing OEMs putting all responsibility for their tech on the driver. https://ojoyoshidareport.com/bmw-fiasco-failed-testing-verification-validation-of-ai-driven-adas/
@drewpasmith @mpesce The testing regimes themselves are "flawed" (I'm being VERY polite). For 'intelligent cruise control' the system only needs to be accurate for 90% of the test. That's 25km it can be wrong over! Thatcham Research, the UK's NCAP testers, have admitted that OEMs have gamed the 'lane keeping' tests just to pass, not to perform well on real roads. That's just two examples. If only we had a recent example of OEMs gaming tests to the detriment of us all, say over emissions or something.
Obviously I have no idea what the car did or didn't do, or whether the driver's account is correct. But it does sit well with my thoughts on, and experience of, current ADAS systems: essentially, that they are unfit for purpose and should not be on our roads, let alone mandated, as is now the case. @drewpasmith talks about this on an excellent episode of Next Billion Seconds with @mpesce and Sally Dominguez. https://nextbillionseconds.com/2023/02/02/the-next-billion-cars-autonomous-vehicles-learning-to-crawl/
@CrackedWindscreen I think there needs to be greater recognition that the very system definition of a partially automated driving system is not internally consistent with safety.
It is, in fact, foisting *more* tasks upon the human driver, while the human driver must remain the "safety net" for the system at all times.
So, right out of the gate, before even the most robust validation process imaginable, the system definition is fighting against safety.