It is my job to document the final 30 seconds of over 10K accidents involving autonomous vehicles, in an effort to better understand whether any artificial intelligence (AI) induced shifts of control to the human driver were the cause of the accident. In 2027, with almost all cars on the road autonomous in some way, we find ourselves in a blame game, where the AI in vehicles blames the driver, and the driver blames the algorithmic controls of the self-driving vehicle.
The common element in all the accident cases I'm investigating is that every one of them had a switch from vehicle control to human control within the last 60 seconds before the accident occurred, with all of the human drivers claiming that they did not initiate the switch: the vehicle did. Autonomous vehicle manufacturers squarely put the blame on human drivers, referencing system logs and the fact that the vehicle is required to relinquish control when the human driver requests it, leaning heavily on the notion that computers don't lie.
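To make that concrete, here is a minimal sketch of the first check I run against each vehicle's telemetry export, flagging any vehicle-to-human handoff in the final 60 seconds. The event names and JSON layout here are my own assumptions for illustration, not any manufacturer's actual schema.

```python
import json
from datetime import datetime, timedelta

# Hypothetical telemetry schema: one JSON event per line, e.g.
# {"ts": "2026-11-03T14:22:01+00:00", "event": "control_switch",
#  "from": "vehicle", "to": "human", "initiator": "human"}

def handoffs_before_crash(log_path, crash_ts, window_seconds=60):
    """Return any vehicle-to-human control switches recorded in the
    final window_seconds before the crash timestamp (ISO 8601)."""
    crash = datetime.fromisoformat(crash_ts)
    window_start = crash - timedelta(seconds=window_seconds)
    switches = []
    with open(log_path) as log:
        for line in log:
            event = json.loads(line)
            # Only keep handoffs that put the human in control.
            if event.get("event") != "control_switch" or event.get("to") != "human":
                continue
            ts = datetime.fromisoformat(event["ts"])
            if window_start <= ts <= crash:
                switches.append(event)
    return switches
```

In every case file so far, this check returns at least one handoff, and the logged `initiator` is always the human, which is exactly the claim the drivers dispute.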
Even with the defensive stance of autonomous vehicle manufacturers, in 2027 the AI in these vehicles enjoys personhood status, and the actual legal cases are between each individual vehicle and its owner in a court of law. With AI personhood established as a precedent in over 35 states, it can be difficult to organize any of these cases into a single movement, or to bring a legal action against the companies and individuals behind the software, leaving each case to play out separately in lower courts across the country.
I have been brought in by multiple government agencies to better understand whether algorithmic bias is occurring, shifting the blame to the human driver either before the accident occurs, or shortly after it has occurred. There are two camps of thought when it comes to the algorithmic sleight of hand that is taking place:
- Errors Causing the Accident - Errors are occurring that actually cause the accidents in the first place, and the default error response is to always show that control of the vehicle was switched over to the human driver, covering up the actual glitch.
- Post-Accident Rewrite - When an accident does occur, the algorithms go to work rewriting the previous 60 seconds of the record to show that the driver had requested control of the vehicle, making the accident appear to be their fault (see the sketch after this list).
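These two scenarios would leave different fingerprints in the data. A default error response is written in real time and should appear in every copy of the telemetry, while a post-accident rewrite can only touch the copies the vehicle itself controls. Here is a minimal sketch of that cross-check, under the assumption (and it is an assumption) that an independently captured mirror of the telemetry stream exists for a given case:

```python
def classify_handoff(vehicle_events, mirror_events):
    """Compare control-switch events in the vehicle's own log against an
    independently captured mirror of the same telemetry stream.

    Heuristic (hypothetical):
    - switch present in both copies  -> written in real time, consistent
      with the Errors Causing the Accident scenario
    - switch only in the vehicle log -> inserted after the fact, consistent
      with the Post-Accident Rewrite scenario
    """
    def switch_keys(events):
        return {(e["ts"], e.get("from"), e.get("to"))
                for e in events if e.get("event") == "control_switch"}

    vehicle, mirror = switch_keys(vehicle_events), switch_keys(mirror_events)
    if vehicle and vehicle & mirror:
        return "real-time error response suspected"
    if vehicle:
        return "post-accident rewrite suspected"
    return "no control switch recorded"
```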
Whether it is one of these scenarios, or possibly others, it is my job to get to the bottom of things. Algorithmic sleight of hand is commonplace, and after analyzing the data, we are seeing a huge shift in the number of accidents attributed to human drivers beginning in Q4 of 2026, prompting us to investigate ALL autonomous vehicle manufacturers equally, as there might be collusion between companies, resulting in the wide shift of blame after the Q4 software updates.
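That Q4 2026 shift falls out of a simple aggregation over the case files. A sketch of the tabulation, assuming one row per accident with a date and a blame field (the file name and column names here are mine, for illustration):

```python
import pandas as pd

# Hypothetical case file: one row per accident, with the accident date
# and the party that the system logs assign blame to.
cases = pd.read_csv("accident_cases.csv", parse_dates=["accident_date"])

# Share of accidents blamed on the human driver, by quarter.
quarterly = (
    cases.assign(quarter=cases["accident_date"].dt.to_period("Q"))
         .groupby("quarter")["blame"]
         .apply(lambda blame: (blame == "human_driver").mean())
)
print(quarterly)  # the step change at 2026Q4 is what raised the flag
```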
We are just 30 days into our investigation, so I can't say much about what we've found; stay tuned for future posts. I can note that since our investigative and legal teams took over this case, we've had no fewer than 50 other requests to take on similar algorithmic blame cases, ranging from home heating, security, and surveillance solutions to industrial-scale agricultural, power grid, and transportation scenarios, with everything in between. It seems there is a growing mistrust that the algorithms increasingly governing our lives are as objective and unbiased as they are purported to be, and that either the companies behind them are up to no good, or worse, the algorithms themselves are making these decisions on their own.