My students always love seeing BigDog in action. It raises all sorts of questions, such as whether it is morally permissible to do things to BigDog that you cannot do to a human being, and who is responsible if a platform like BigDog is fitted with, say, a machine gun and facial recognition software, and then accidentally kills civilians. Is the responsible party the designer, the engineer, the software programmer, or the officer who ordered BigDog into that particular situation?
Another area of new military technology making the news lately is the 'self-aiming' rifle. TrackingPoint in Austin, Texas sells the Precision Guided Firearm (PGF), available from USD 9,950.00. All that is required is to "paint" the target through the rifle scope using a type of "lock and launch" technology. After the operator has pulled the trigger, the gun itself uses complex algorithms to take into account factors such as wind, arm shake, recoil, air temperature, and even the bullet's drop due to gravity. Only when the conditions are ideal does the gun fire the bullet at the target. The level of training required to operate a PGF is far less than that of a sniper, opening up the use of such a powerful long-range weapon to a much wider group of people - including hunters, but also possibly terrorists and insurgents.
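TrackingPoint's actual fire-control software is proprietary, but the basic idea it describes - compute a ballistic correction, then hold fire until the tracked aim error is within tolerance - can be sketched very simply. The following toy model (all numbers and function names are invented for illustration, and it ignores aerodynamic drag entirely) shows the two pieces: a crude firing solution for gravity drop and crosswind drift, and a "fire only when conditions are ideal" gate.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def firing_solution(range_m, muzzle_velocity, crosswind_mps):
    """Return (drop_m, drift_m) for a flat-fire shot, ignoring drag."""
    t = range_m / muzzle_velocity   # approximate time of flight, seconds
    drop = 0.5 * G * t ** 2         # bullet drop due to gravity
    drift = crosswind_mps * t       # crude crosswind displacement
    return drop, drift

def ready_to_fire(aim_error_mrad, tolerance_mrad=0.3):
    """Release the shot only when the tracked aim error is within
    tolerance - the 'gun decides the instant of firing' behaviour."""
    return abs(aim_error_mrad) <= tolerance_mrad

drop, drift = firing_solution(range_m=900, muzzle_velocity=850, crosswind_mps=3.0)
print(f"hold-over {drop:.2f} m, wind hold {drift:.2f} m")
print(ready_to_fire(0.1))   # within tolerance: fire
print(ready_to_fire(0.8))   # outside tolerance: hold
```

A real system would replace the flat-fire approximation with a drag model and live sensor inputs, but the decision structure - solve, track, gate the trigger - is the part that matters for the ethical discussion that follows.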
One ethical concern the PGF raises is what happens if the operator changes their mind about the target after it has been painted but before the bullet is fired. Another very concerning feature is the gun's Wi-Fi capability, which allows others to watch remotely what is being seen through the scope - raising the possibility of a real-time perversion known as "war porn". More worryingly, the gun can also be accessed over Wi-Fi from a smartphone, making use of the weapon from a distance much more likely, with the operator lacking the full picture of the battlespace that an on-the-ground sniper would have. It also raises the distinct possibility of the gun being hacked and used on inappropriate targets - be they friendly forces or non-combatants.
UAV technology has been improving rapidly in the last few years, with drones being used in Afghanistan, Pakistan, Yemen and Somalia. Whilst much has been written about the larger Reaper and Predator drones, a newer area of research is UAVs working together in swarms, without human interaction. Whilst the research to date has focused on nano UAVs, it will eventually be scaled up to the point where we may have squadrons of Predator and Reaper drones operating as a swarm.
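Swarm coordination of the kind described above is usually built from simple local rules rather than central control - the classic illustration is Reynolds-style "boids" flocking. The sketch below (a minimal 2-D version; all parameter values are invented for illustration) shows how each drone steers toward the group centroid while pushing away from close neighbours, with no drone ever receiving a global command.

```python
import random

class Drone:
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.vx, self.vy = 0.0, 0.0

def step(swarm, cohesion=0.01, separation=0.05, min_dist=1.0, damping=0.9):
    """One update of a minimal flocking rule-set: each drone steers
    toward the swarm centroid (cohesion) and away from close
    neighbours (separation). There is no central controller."""
    cx = sum(d.x for d in swarm) / len(swarm)
    cy = sum(d.y for d in swarm) / len(swarm)
    for d in swarm:
        d.vx += (cx - d.x) * cohesion
        d.vy += (cy - d.y) * cohesion
        for other in swarm:
            if other is not d:
                dx, dy = d.x - other.x, d.y - other.y
                if dx * dx + dy * dy < min_dist ** 2:
                    d.vx += dx * separation
                    d.vy += dy * separation
        d.vx *= damping  # damping keeps the motion stable
        d.vy *= damping
    for d in swarm:
        d.x += d.vx
        d.y += d.vy

random.seed(0)
swarm = [Drone(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(10)]
for _ in range(100):
    step(swarm)
spread = max(d.x for d in swarm) - min(d.x for d in swarm)
print(f"x-spread after 100 steps: {spread:.1f}")
```

The point of such models for the ethics debate is precisely that emergent group behaviour arises from rules no single operator is steering in real time - which is what makes accountability for swarm actions so hard to assign.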
The use of non-lethal (or less-than-lethal) weapons by police forces around the world has been seen as an attempt to decrease the number of fatal shootings by police officers who would otherwise have no other force options available to them. In the military setting this may offer options for military personnel, particularly those in peacekeeping roles within a hostile community. Stephen Coleman has identified that the main problems regarding non-lethal weapon use by military personnel fall into four broad areas:
1) The use of any weapon in warfare, whether lethal or non-lethal, needs to be guided by the principle of discrimination (which protects non-combatants on account of their innocence). The problem is that because a weapon is less than lethal, there is a temptation to be less concerned about discrimination, and therefore about the targeting of civilians who would normally not be legitimate targets.
2) The potential for overuse by military personnel, simply because it is easier than negotiating with hostile local populations (this has been shown to occur in policing).
3) The use of non-lethal weapons in torture and interrogation.
4) The use of non-lethal weapons as a force multiplier, rather than as a force-decreasing tool. An example of this was the Moscow theatre siege, where an anaesthetic gas was pumped into the theatre, rendering most of the terrorists and hostages unconscious. Russian troops then entered the theatre and, instead of arresting the terrorists (as criminals) or detaining them as prisoners of war, killed them whilst they were unconscious.
Animals and New Military Technology
Lethal Autonomous Weapons Systems
The Future of Life Institute has published an Open Letter calling for the banning of Autonomous Weapons Systems.
Here is the text of that open letter:
Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
You can view the list of signatories here: http://futureoflife.org/AI/open_letter_autonomous_weapons (including Stephen Hawking, Skype founder Jaan Tallinn and Apple co-founder Steve Wozniak).
Several other organisations are also calling for the banning of autonomous weapons systems, such as the Campaign to Stop Killer Robots, the International Committee for Robot Arms Control, and Article 36.
Whilst I share many of their concerns, I suspect that at this point lethal autonomous weapons systems are inevitable; the debate over whether they should be developed should have happened at this level long ago, not on the eve of their deployment into the field. The discussion around LAWS, whilst vital for the future of warfare, also seems to ignore the elephant in the room: the use of unmanned aerial vehicles (also sometimes called drones), mainly by the USA, in a wide variety of lethal situations outside the normal theatre of war (usually in the name of the war on terror) - operations that can only be described as assassinations or targeted killings. Whilst the automation of weapons systems does raise unique issues, it seems we need to get right the issues regarding unmanned but not fully autonomous weapons systems first. The people of Pakistan do not care whether the drones flying overhead terrorising their children are manned or operating autonomously - the effect for them is the same.
In order to understand more fully the issues raised by these emerging technologies, it is worth looking at the work of respected ethicist Patrick Lin, who was invited to speak at the UN deliberations on LAWS at the five-day meeting in Geneva in April 2015 under the Convention on Certain Conventional Weapons. A copy of Pat's presentation, "The right to life and the Martens Clause", is available online, as are the other presentations to the meeting. The article "Do Killer Robots Violate Human Rights?", which Pat wrote for The Atlantic about these discussions and the issues they raised, is very interesting reading.