Interesting observations and questions. Intriguing discussion on how much autonomy people would be willing to give a war machine. Do we let them fight on our behalf, and how much of the final fire decision does the machine get to make?
My view is that, faced with a choice of putting people or machines in harm's way, we will choose machines (nobody likes the imagery of dead soldiers). Machines can detect and react faster than we can, so a machine that can make its own decisions will be superior to one that must wait on a human authority. If we want our machines to be in the fight, let alone have a chance of winning, ultimately they will need autonomy.
I also question the benefit of the human in the fire chain. We are a long way from having perfect decision-making ourselves, whether through error or deliberate malevolence. We worry about a robot engaging the wrong target; I doubt machines could be worse than humans at this. We worry about robots killing civilians; Ukraine shows we are very good at that ourselves.
Australia recently purchased some really capable smart sea mines from RWM Italia. These sit on the ocean floor listening for the right target and then activate, all by themselves. We already have the first batch. What is the difference between that and a torpedo-carrying Ghost Shark patrolling a region by itself, listening for a defined target and then firing? Our choice in this process is whether to release the machine, and the parameters we give it to define an enemy target.
Perhaps we need to invest in IFF (identification friend or foe) system improvements as our mitigation.
Armed autonomous machines are going to be available in numbers this decade. Our adversaries will certainly deploy them.
Things are moving very fast in this realm, and we will be forced to commit to a path.
Not sure what that looks like for the ADF, but as you point out, the other guy may well have a different view on this subject.
Interesting Times.
Cheers S