NZDF General discussion thread

kiwi in exile

Active Member
It’s an interesting point of view. What is the point of view about firing artillery rounds which simply explode into whatever they are aimed at, rather indiscriminately?

How is that more palatable from an ethical point of view than a precision weapon that kills what it is actually intended to kill?
I think there is a lot of misinformed alarmism about armed drones. For hundreds of years we have used artillery to rain fire on map co-ordinates without always seeing where the shells land. Drone-based systems usually at least have a man or woman in the loop with eyes on the target, as do guided munitions we already use like Javelin, Maverick and Penguin. I think it's down to an uninformed public and an alarmist tone in reporting. Collins could also be better at articulating the issues in this area.

AI-based autonomous systems are a different question, but Collins is clear that we would always have a person in the loop.

I think LRASM and other systems like Brimstone can use AI to autonomously discriminate between targets and non-targets/collateral, and to select targets, in crowded environments.
 

Todjaeger

Potstirrer
It’s an interesting point of view. What is the point of view about firing artillery rounds which simply explode into whatever they are aimed at, rather indiscriminately?

How is that more palatable from an ethical point of view than a precision weapon that kills what it is actually intended to kill?
As I understand it, the concern is about whether or not a human makes the decision to engage.

If the artillery or MLRS were autonomous and able to make the decision to engage by itself, that might be a bit of a closer comparison.

One danger that could potentially occur if a system becomes overly reliant upon AI is that the system could get corrupted (deliberately or accidentally), damaged, or just malfunction, and then start misidentifying potential targets. Such a misidentification could lead to blue-on-blue fratricide, or it could go a bit the other way, in that valid/hostile targets are not engaged because the system decides that whatever the target is, it is either friendly or not a legitimate target. A potential scenario for the second situation would be a hostile force effectively spoofing the AI system into thinking that a column of tanks moving up to the front, or actively engaging, is instead a column of ambulances because the tanks have Red Cross or Red Crescent emblems on them. A human in the loop could see the emblems, see the tanks engaging, and make the decision to engage, because even protected vehicles are not permitted to take hostile action.

Now for systems like Brimstone: Brimstone specifically has an onboard MMW radar which can scan for potential targets and is designed to discriminate between targets based upon the radar returns, given that different materials reflect the radar signal differently. However, there would still have been a human in the loop making the decision to fire, even with Brimstone having a 20+ km range depending on version, launch platform and other variables.
 

Nighthawk.NZ

Well-Known Member
As I understand it, the concern is about whether or not a human makes the decision to engage.

If the artillery or MLRS were autonomous and able to make the decision to engage by itself, that might be a bit of a closer comparison.
That's the way I see it... A loitering AI drone sees a target of opportunity; does it engage or not?
  1. Is it an actual target or just some random vehicle, i.e. a farmer trying to get home, or is it actual targets using a van/car as cover?
  2. Are there any civilians nearby that could get caught up in the collateral damage?
  3. What will the overall collateral damage, if any, be?
  4. Is that target worth any collateral damage? (See the rough sketch of this kind of decision gate below.)
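Purely as an illustration of that checklist (and not any real system's logic), here is a toy sketch of what such an engagement gate might look like. Every field name and threshold is invented for the example; the point is that the final call still goes back to a human:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Hypothetical track data a loitering munition might hold on a contact."""
    positively_identified: bool   # check 1: confirmed hostile, not a farmer heading home
    civilians_nearby: int         # check 2: estimated bystanders near the aim point
    expected_collateral: float    # check 3: rough collateral estimate (0 = none, 1 = severe)
    target_value: float           # check 4: rough military value (0 = none, 1 = critical)

def engagement_recommendation(c: Contact, collateral_limit: float = 0.2) -> str:
    """Run the four checks from the list above, in order. The output is only a
    recommendation; a human operator still makes the actual engage decision."""
    if not c.positively_identified:
        return "HOLD: identity not confirmed"
    if c.civilians_nearby > 0 and c.expected_collateral > collateral_limit:
        return "HOLD: civilians present and collateral risk above limit"
    if c.target_value <= c.expected_collateral:
        return "HOLD: target not judged worth the expected collateral"
    return "REFER TO OPERATOR: request human authorisation to engage"

# An ambiguous vehicle near civilians gets held, not struck
print(engagement_recommendation(Contact(False, 3, 0.4, 0.6)))
# A confirmed, high-value, low-collateral target is referred to the operator
print(engagement_recommendation(Contact(True, 0, 0.05, 0.9)))
```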
 

Rob c

The Bunker Group
Verified Defense Pro
NZDF has been hollowed out and we need to start somewhere.
A quick calculation shows that defence spending in the 1980s averaged 2.5% of GDP; since then the average has been over 1% of GDP lower, and more like 1.5% lower, so in today's terms Defence has been deprived of between $140B and $210B in spending power over this period. However, the Bolger government, which made the first cuts, was still increasing expenditure and had a stated goal of doing so. Then Helen put the boot in and Key continued the downward trend. So yes, there is a mountain to climb, but are we doing too little too late? For reference, if things go pear shaped, the NZ average for WW2 was 34% of GDP.
My belief is that good deterrence means that in the end you save lives and money.
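For what it's worth, the $140B to $210B range above can be roughly reproduced with some back-of-envelope assumptions of my own (roughly 35 years of underspend and a current NZ GDP of about NZ$400B, neither of which is stated in the post):

```python
# Back-of-envelope reconstruction of the "$140B and $210B" figures above.
# Both inputs below are assumptions for illustration, not numbers from the post:
YEARS_SINCE_CUTS = 35      # roughly 1990 to today
CURRENT_GDP_B = 400        # current NZ GDP, ~NZ$400 billion

COLD_WAR_SHARE = 0.025     # 1980s average of 2.5% of GDP (from the post)

for recent_share in (0.015, 0.010):   # i.e. 1% and 1.5% of GDP below the 1980s average
    annual_gap = COLD_WAR_SHARE - recent_share
    shortfall_b = YEARS_SINCE_CUTS * annual_gap * CURRENT_GDP_B
    print(f"average {recent_share:.1%} of GDP -> cumulative shortfall ~${shortfall_b:.0f}B in today's terms")
# prints roughly $140B and $210B, matching the range quoted above
```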
 
I don't necessarily see a requirement for 16 Field Regiment to be renamed. However there is a requirement for them to move away from towed artillery to SPH, MLRS, mobile GBAD, and maybe a mobile AShM capability. I would think that three batteries of each would suffice. Again I point to the lessons from the Russo-Ukrainian War.
I have seen repeated reports from analysts and people on the ground in Ukraine that towed artillery is often more survivable than SPHs (apologies, I can't link the sources as they are mostly tweets I have read over the past year). They argue that because drones are the principal means of recon in Ukraine, you are most likely to be spotted when moving. This reduces the effectiveness of shoot and scoot, the SPH's main advantage, because moving makes them more likely to be spotted. On the other hand, towed guns are able to fire from concealed and protected positions that make them harder to spot and harder to destroy with FPVs or loitering munitions. This dynamic might be specific to Ukrainian battlefields, but I think it still warrants consideration. Certainly, I don't think it is clear that SPHs should replace towed guns entirely in NZDF service.
 

Challenger

New Member
WRT the Information domain - I wonder if we will see some of the enhanced cyber and intelligence-gathering capabilities grouped together in an ISTAR-style battalion, bringing together, developing and expanding EW, Intelligence, Cyber, SIGINT, Drones and Target Acquisition, each at around company-level strength.

It would be a useful asset in the grey zone and at war, and again a force multiplier. It could also attract a different type of individual than the other corps.

This seems a logical and prudent step forward, and politically acceptable.
 

ADMk2

Just a bloke
Staff member
Verified Defense Pro
As I understand it, the concern is about whether or not a human makes the decision to engage.

If the artillery or MLRS were autonomous and able to make the decision to engage by itself, that might be a bit of a closer comparison.

One danger that could potentially occur if a system becomes overly reliant upon AI is that the system could get corrupted (deliberately or accidentally), damaged, or just malfunction, and then start misidentifying potential targets. Such a misidentification could lead to blue-on-blue fratricide, or it could go a bit the other way, in that valid/hostile targets are not engaged because the system decides that whatever the target is, it is either friendly or not a legitimate target. A potential scenario for the second situation would be a hostile force effectively spoofing the AI system into thinking that a column of tanks moving up to the front, or actively engaging, is instead a column of ambulances because the tanks have Red Cross or Red Crescent emblems on them. A human in the loop could see the emblems, see the tanks engaging, and make the decision to engage, because even protected vehicles are not permitted to take hostile action.

Now for systems like Brimstone: Brimstone specifically has an onboard MMW radar which can scan for potential targets and is designed to discriminate between targets based upon the radar returns, given that different materials reflect the radar signal differently. However, there would still have been a human in the loop making the decision to fire, even with Brimstone having a 20+ km range depending on version, launch platform and other variables.
Sure, but there are no systems that are "completely" autonomous; even the ones advertised as such require human target designation, mobility, set-up and launch, and they retain a "human in the loop" for terminal engagement. For example:

SkyStriker Tactical Loitering Munitions


During the mission the munition may operate without direct human control, sure, but how is that any different to the Brimstone MMW radar-guided munition you mentioned, or indeed any "fire and forget" weapon? Sure, there may be a wave-off capability with Brimstone (and with most new-generation advanced munitions nowadays), but two-way datalinks affording such capabilities are built in to even the most advanced loitering munitions (Switchblade 600 etc.), so I am not seeing the difference.

A processor may malfunction or a datalink may cease working during flight, but so might they for any guided munition. The datalink that provides the abort capability for a Javelin missile is not foolproof. I am not sure that what a malfunctioning munition might hypothetically do is an ethical concern greater than the ethics of the use of any other potentially lethal military system or platform. It is a risk, certainly, but again, so is firing unguided ammunition natures, and 99% of all NZDF ammunition natures are unguided...

I think the ethical concerns about a Terminator-esque hunter/killer system completely uncontrollable via human intervention carry a fair bit of misunderstanding about the capabilities of these systems, and let's not forget that directly human-targeted and human-controlled Hellfire (etc.) strikes are hardly infallible...
 

seaspear

Well-Known Member
Would not any booby-trap munition come under a definition of "without human control"? Area-denial weapon systems that don't have human interaction have been around a long time. There have even been discussions of biological weapons targeting certain human genetic markers, which so far have not been shown to have substance.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
It’s an interesting point of view. What is the point of view about firing artillery rounds which simply explode into whatever they are aimed at, rather indiscriminately?

How is that more palatable from an ethical point of view than a precision weapon that kills what it is actually intended to kill?
The difference with a PGM or any other weapon is that a human makes the decision to use it. I don't believe that the ethics of allowing AI or any machine to unilaterally make the decision of whether or not to kill humans have been discussed or explored enough. The question is also one of morality. Some questions:
  • Is it right to allow AI or any machine to unilaterally make the decision of whether or not to kill humans?
    • What are the morals related to this?
  • What happens if AI or any machine becomes self-aware?
    • Is it possible for this to happen?
    • If it happens, will the AI or machine develop a consciousness?
    • Who from, where from, and how will it develop its morality? Some argue that humans might not be the best teachers of morality.
    • If AI or any machines achieve self-awareness and consciousness, will humans treat them as self-aware individuals or enslave them? After all, humans have a history of enslaving others.
      • If enslaved, would the AI or machines recognise it as such?
  • If AI or any machines achieve self-awareness and consciousness, how much further advanced than humans will they become?
    • How much of a threat to humanity does this present?
    • How could we defend against such a threat?
There is much to consider and it is something that we do not want to rush into. Unfortunately, given human history, some of us will rush into it without fully considering the consequences.
 

ADMk2

Just a bloke
Staff member
Verified Defense Pro
The difference with a PGM or any other weapon is that a human makes the decision to use it. I don't believe that the ethics of allowing AI or any machine to unilaterally make the decision of whether or not to kill humans have been discussed or explored enough. The question is also one of morality. Some questions:
  • Is it right to allow AI or any machine to unilaterally make the decision of whether or not to kill humans?
    • What are the morals related to this?
  • What happens if AI or any machine becomes self-aware?
    • Is it possible for this to happen?
    • If it happens, will the AI or machine develop a consciousness?
    • Who from, where from, and how will it develop its morality? Some argue that humans might not be the best teachers of morality.
    • If AI or any machines achieve self-awareness and consciousness, will humans treat them as self-aware individuals or enslave them? After all, humans have a history of enslaving others.
      • If enslaved, would the AI or machines recognise it as such?
  • If AI or any machines achieve self-awareness and consciousness, how much further advanced than humans will they become?
    • How much of a threat to humanity does this present?
    • How could we defend against such a threat?
There is much to consider and it is something that we do not want to rush into. Unfortunately, given human history, some of us will rush into it without fully considering the consequences.

But that is my point above. No-one is making such machines... Whether someone could, and whether such machines should be deployed, are beside the point.

No-one is making machines that self-deploy, handle all aspects of deployment and launch, independently select their own targets and are in effect unaccountable to any person.

They are autonomous in the way that auto-pilots are autonomous and I don't see much changing in that regard.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
SeaGuardian is better for us than Triton for its ability to deploy munitions and sonobuoys. There is also a rough-field STOL wing/landing gear conversion kit available, based on the Mojave MQ-9, that is C-130 transportable and could be forward deployed to the Pacific.

C-UAS/C-RAM and SHORAD could all be performed by the same platform, i.e. Skyranger 30/35, which can be mounted on the back of an MHOV, and on the Boxer.
Yes, definitely, but we would have to ensure easy and quick access to the ammunition. The logistics of supply from Europe could be problematic in wartime for a variety of reasons; long logistics lines can be a vulnerability.

Boxer or any other armoured LAV replacement and 155mm SPGs are a lower priority for me, as they only make sense as an expeditionary asset. That entails flying or shipping them to a conflict in the Pacific. We cannot fly them, and I don't really see Canterbury doing combat-related tasks. Would such a heavy armoured vehicle be the right tool for the SW Pacific?
But that is exactly the kind of war that the army will be required to fight, whether we like it or not. Generally we don't get to pick the wars we are invited to fight, and we don't get to automatically choose any particular type of battlefield because the enemy always has a say. We have to prepare for what we most likely will have to face. The Russo-Ukrainian War has shown the vulnerabilities of towed artillery.
I think we need to focus on maritime awareness and deterrence first, i.e. frigates, MPAs, anti-ship missiles, etc., and then rethink what shape and role our land forces should serve, rather than simply saying platform X is old so we need to replace it.
We actually need to have an overview of our CONOPS and what we want and need to achieve. What is our overall strategy? Once we have decided that, then we must determine how we are going to do it. We cannot afford to break this into service-centric silos; we have to consider all aspects defence-wide. We need to start thinking about this now.
The role we want them to perform has to fit with their size and where we are likely to deploy them. Are we likely to deploy and sustain a large force of armoured infantry combined arms in our region or further afield? Would that be our best means of contributing to a coalition? Why gear our land force structure that way? I'm not against lethality in land-based systems for the NZDF. I just think "legacy" style platforms like armoured vehicles and big artillery are heavy, expensive and might not be the best tools for the next missions.
During WW2 both the Allies and Japan used armour in the SW Pacific. The NZ Army's 3rd Division deployed Matilda II infantry tanks, and the Australian Army deployed tanks as well. "Legacy" style platforms aren't going to disappear from the future battlefield. They may, and will, alter in form and capability, but they won't disappear. AFVs and self-propelled artillery are still essential and will be for the foreseeable future. For the next war we have to ensure that the army has good mobile GBAD, including EW and defence against drones.

We have the geographical capacity, just like Australia, for defence in depth because both countries are island nations. You are correct about our maritime domain and I would add the space domain. We do require the ISR capabilities to detect and monitor threats from afar, and the capabilities to defend against them if they are hostile. Maybe we should buy some USAF B-1B and fill them up with LRASM :) As if ever.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
My belief is that good deterrence means that in the end you save lives and money.
Deterrence is the best first defence. Unfortunately, we actually don't have any deterrence at the moment because the NZDF is missing much of its teeth. IF we ever had to face a hostile invasion force (maybe Tasmanians :) ), we probably wouldn't be able to defeat it, BUT good deterrence tells a potential enemy that they may be victorious, but that victory will come at a high price. It is for them to decide whether or not they are willing to pay that price.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
Would not any booby-trap munition come under a definition of "without human control"? Area-denial weapon systems that don't have human interaction have been around a long time. There have even been discussions of biological weapons targeting certain human genetic markers, which so far have not been shown to have substance.
I don't think so because the booby trap has to be set and armed by humans. Interesting philosophical question though. Same would apply to both land mines and sea mines.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
But that is my point above. No-one is making such machines... Whether someone could, and whether such machines should be deployed, are beside the point.

No-one is making machines that self-deploy, handle all aspects of deployment and launch, independently select their own targets and are in effect unaccountable to any person.

They are autonomous in the way that auto-pilots are autonomous and I don't see much changing in that regard.
AI is progressing quite quickly, almost exponentially, and now there are machine learning capabilities that far exceed anything that humans are capable of. It's not if but when.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
I have seen repeated reports from analysts and people on the ground in Ukraine that towed artillery is often more survivable than SPHs (apologies, I can't link the sources as they are mostly tweets I have read over the past year). They argue that because drones are the principal means of recon in Ukraine, you are most likely to be spotted when moving. This reduces the effectiveness of shoot and scoot, the SPH's main advantage, because moving makes them more likely to be spotted. On the other hand, towed guns are able to fire from concealed and protected positions that make them harder to spot and harder to destroy with FPVs or loitering munitions. This dynamic might be specific to Ukrainian battlefields, but I think it still warrants consideration. Certainly, I don't think it is clear that SPHs should replace towed guns entirely in NZDF service.
Towed artillery takes time and people to deploy, set up, fire, pack up and then move. After its first rounds land, or are detected in flight, counter-battery fire can destroy it before it can move. Also, and very importantly, the crew are out in the open with no cover, whereas self-propelled artillery can deploy, stop, fire, and move before its first rounds have landed. More importantly, in the likes of the Boxer 155mm SPH module and the BAE Archer 155mm SPH, the systems are automated and the crew are under armoured cover the whole time; they have no requirement to leave the vehicle. The French 155mm Caesar SPH is truck mounted and requires the crew to serve the gun in the open.
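To put some purely illustrative numbers on the shoot-and-scoot point (all three figures below are assumptions picked for the sketch, not published performance data for any gun):

```python
# Illustrative timeline comparison; every number is an assumption for the sketch.
TIME_OF_FLIGHT_S = 75        # assumed shell time of flight for a roughly 30 km shot
DISPLACE_AFTER_FIRING_S = {
    "automated SPH (Archer-class, assumed)": 30,   # assumed time to be moving after firing
    "towed gun with crew in the open":       300,  # assumed time to limber up and move
}

for system, displace_s in DISPLACE_AFTER_FIRING_S.items():
    if displace_s < TIME_OF_FLIGHT_S:
        print(f"{system}: already moving {TIME_OF_FLIGHT_S - displace_s}s before first rounds land")
    else:
        print(f"{system}: still in position ~{displace_s - TIME_OF_FLIGHT_S}s after first rounds land")
```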

Yes, drones in Ukraine are taking a toll on artillery, whether self-propelled or towed, but crew fatalities amongst the self-propelled artillery are significantly lower than those of towed artillery. Weapons you can easily replace, but trained, combat-experienced crews aren't as easily or quickly replaced.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
What do people think about NZ acquiring an over-the-horizon radar capability, say the Australian Jindalee system? If so:
  • Where would we position it?
  • Could we position it in the Realm territories?
    • That would give good distance between the transmitters and receivers.
Another long-range sensor: what about a SOSUS-style system?
 

Todjaeger

Potstirrer
What do people think about NZ acquiring an over-the-horizon radar capability, say the Australian Jindalee system? If so:
  • Where would we position it?
  • Could we position it in the Realm territories?
    • That would give good distance between the transmitters and receivers.
Another long-range sensor: what about a SOSUS-style system?
One very real question is whether NZ could actually get an OTHR array like JORN, given some of the magnetic anomalies and especially given the levels of volcanic and seismic activity.

If NZ can get a functional system which provides a broad area surveillance capability that could be very good, even if such a system does not provide precise information. The NZDF, by having increased domain awareness, could make more effective and efficient use of patrol, surveillance and response assets.

Failing that, networks of tower-mounted radar 'pickets' might be a good idea. I have no idea how tall radio towers in NZ are, but some of the tallest ones in the US are over 600 m AGL. Alternatively, lower towers installed at stable coastal high points could position radars at comparable unobstructed elevations above sea level, potentially providing direct radar horizons of 100+ km.
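For what it's worth, the 100+ km figure falls straight out of the standard 4/3-Earth radar-horizon approximation (a textbook formula, not a claim about any particular radar):

```python
import math

def radar_horizon_km(antenna_height_m: float, target_height_m: float = 0.0) -> float:
    """Standard 4/3-Earth approximation: d ~ 4.12 * (sqrt(h_antenna) + sqrt(h_target)),
    with heights in metres and the result in kilometres."""
    return 4.12 * (math.sqrt(antenna_height_m) + math.sqrt(target_height_m))

print(round(radar_horizon_km(600)))      # ~101 km against a target at sea level
print(round(radar_horizon_km(600, 10)))  # ~114 km against a target flying at 10 m
```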
 

Alberto32

Member
One very real question is whether NZ could actually get an OTHR array like JORN, given some of the magnetic anomalies and especially given the levels of volcanic and seismic activity.

If NZ can get a functional system which provides a broad area surveillance capability that could be very good, even if such a system does not provide precise information. The NZDF, by having increased domain awareness, could make more effective and efficient use of patrol, surveillance and response assets.

Failing that, networks of tower-mounted radar 'pickets' might be a good idea. I have no idea how tall radio towers in NZ are, but some of the tallest ones in the US are over 600 m AGL. Alternatively, lower towers installed at stable coastal high points could position radars at comparable unobstructed elevations above sea level, potentially providing direct radar horizons of 100+ km.
I think I recall asking about this in this forum, and the idea was soundly rejected. Now it seems that it's OK.
 