I pretty much agree with you. I just think that acceptance of automation in non-life-critical situations will come more easily, and probably first.
It's also true that we already depend on many life-critical software systems, in aviation and medical devices notably. In a rational world all that one would need to show is that statistically a self-driving car is safer than a human driver. That bar is probably not that hard to achieve. I'm just not sure how to predict how the general public will react to automated systems that can and will kill people in rare circumstances and whose correctness can at best only be defined in probabilistic terms.