Technology

No discussion on the future of technology in the military can start without looking at Big Dog from Boston Dynamics.

 

My students always love seeing Big Dog in action. It raises all sorts of questions, such as whether it is morally permissible to do things to Big Dog that you could not do to a human being, and who is responsible if a platform like Big Dog is loaded with, say, a machine gun with facial recognition software, and then accidentally kills civilians. Is the designer, the engineer, the software programmer, or the officer who ordered Big Dog into that particular situation the person responsible?


Big Dog being field tested in Hawaii during the RIMPAC exercise in July 2014


LS3 is a cousin of Big Dog - here in tight-follow mode (watching it navigate around the trees is interesting)

Here are some other Boston Dynamics projects...


Wild Cat


Cheetah Robot


Sand Flea

Australian company Marathon Targets, in conjunction with the Australian Department of Defence, has developed a robotic autonomous moving target, which "enables soldiers to train the way they fight against unpredictable moving targets, with live ammunition".  According to a Sydney Morning Herald article, the use of these targets "helped improve the accuracy of US Marine Corps shooters by 104% after two days practice".

http://www.marathon-targets.com/ - Marathon Targets Autonomous Target Robots

Whilst robotic targets to assist with training seem largely benign, it would not take much modification to weaponise such robots - it is the autonomous nature of the robots that is concerning. Even when there is a person "in the loop", it is just one person in charge of a group of robots, not one handler per robot.  I also have concerns regarding who is responsible if something goes wrong - is it the programmer, the engineer, the handler, or the targeting officer who bears the responsibility if the wrong person is killed?  A more serious concern is the possibility of the robots being hacked - because of their networked nature, the robots pose a potential security compromise, where they may be turned against their own armies after being hacked by opposing forces.

Marathon Targets - capability overview


Another area of new military technology that has been making the news lately is the 'self-aiming' rifle.  TrackingPoint in Austin, Texas, sells the Precision Guided Firearm (PGF), available from USD 9,950.00.  All that is required is to "paint" the target through the rifle scope using a type of "lock and launch" technology.  After the operator has pulled the trigger, the gun itself uses complex algorithms to account for factors such as wind, arm shake, recoil, air temperature, and even the bullet's drop due to gravity.  When the conditions are ideal, the gun fires the bullet at the target.  The level of training required to operate a PGF is much less than that of a sniper, opening up the use of such a powerful long-range weapon to a wider group of people - including hunters, but also possibly terrorists and insurgents.
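To get a feel for the kind of calculation a fire-control computer must perform, here is a deliberately simplified sketch of ballistic compensation. It models only gravity drop (in a vacuum) and drift from a steady crosswind - nothing like TrackingPoint's actual proprietary algorithm, which would also account for drag, temperature, recoil and more. The function name and numbers are purely illustrative.

```python
# Toy ballistic compensation: estimate vertical drop and crosswind drift
# for a bullet over a given range. A real fire-control computer would also
# model air drag, temperature, humidity, spin drift, and recoil; this
# vacuum-plus-constant-wind model is only illustrative.

G = 9.81  # gravitational acceleration, m/s^2

def holdover(range_m, muzzle_velocity_ms, crosswind_ms):
    """Return (drop_m, drift_m) the fire-control system must compensate for."""
    t = range_m / muzzle_velocity_ms   # time of flight, ignoring drag
    drop = 0.5 * G * t ** 2            # gravity drop over that time
    drift = crosswind_ms * t           # lateral push from a steady crosswind
    return drop, drift

drop, drift = holdover(range_m=800, muzzle_velocity_ms=850, crosswind_ms=3.0)
print(f"drop {drop:.2f} m, drift {drift:.2f} m")
```

Even this crude model shows why unassisted long-range shooting is hard: at 800 m the bullet falls several metres and drifts metres sideways, all of which the shooter (or the computer) must hold over for.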

One ethical concern the PGF raises is what happens if the operator changes their mind about the target after it has been painted, but before the bullet is fired.  Another very concerning development is that the gun also has wifi capability, allowing others to watch remotely what is being seen through the scope - raising the possibility of a real-time perversion known as "war porn".  More worryingly, the gun can be accessed via wifi and a smartphone - making use of the weapon from a distance much more likely, with the operator lacking the full picture of the battlespace that an on-the-ground sniper would have.  It also raises the distinct possibility of the gun being hacked and used on inappropriate targets - be they friendly forces or non-combatants.


Precision Guided Firearms

UPDATE (August 2015):

http://www.wired.com/2015/07/hackers-can-disable-sniper-rifleor-change-target/

Security researchers Runa Sandvik and Michael Auger were able to successfully hack into the TrackingPoint PGF's computer and change the target without the gun's user being aware.  They also managed to disable the firing pin so that the gun could not be fired, and made changes so that the gun would fire wildly off target.  Auger and Sandvik also found that they could load malware onto the gun, so that changes would not become apparent until a particular time or location was reached.


Happily, it was not easy for Sandvik and Auger to hack into the gun – it took them a year, and they eventually had to destroy one of the guns by completely pulling it apart in order to work out how to hack into it. However, now that the work is done, it can be repeated on other copies of the same gun.  What is to stop a well-funded military with a lot of hacking expertise (yes, I'm looking at you, China and Russia) from purchasing a gun and doing the same thing?


Testing of the EXACTO ordnance in 2014

DARPA, in conjunction with Teledyne Scientific & Imaging, has been doing extensive research on guided ordnance, in the form of guided small-caliber (.50-caliber) bullets.  The Extreme Accuracy Tasked Ordnance (EXACTO) program is researching ways to increase sniper accuracy by combining a maneuverable bullet with a real-time guidance system, which allows the bullet to change its flight path to account for any factors that may push it off course.
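DARPA has not published EXACTO's actual guidance law, but the general idea of closed-loop course correction can be sketched with a simple proportional controller: at each guidance update the projectile measures its lateral offset from the aimpoint and steers back a fraction of that error. Everything below (function name, gain, step count) is a hypothetical illustration of the concept, not the real system.

```python
# Toy in-flight course correction: each timestep the projectile measures
# its lateral offset from the designated aimpoint and steers back a fixed
# fraction of that error. A stand-in for the (unpublished) EXACTO guidance
# law, illustrating why repeated small corrections drive the error to zero.

def fly(initial_offset_m, gain=0.3, steps=20):
    """Return the lateral offset after each of `steps` guidance updates."""
    offset = initial_offset_m
    history = []
    for _ in range(steps):
        offset -= gain * offset   # remove a fraction of the remaining error
        history.append(offset)
    return history

path = fly(initial_offset_m=2.0)
print(f"final offset: {path[-1]:.4f} m")
```

The error shrinks geometrically (by a factor of 1 − gain per update), which is why even a crude feedback loop can turn a shot that starts two metres off into a near-miss of millimetres - provided the projectile can both sense the error and actuate against it in flight.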
UAV technology has been improving rapidly in the last few years, with drones being used in Afghanistan, Pakistan, Yemen and Somalia.  Whilst much has been written about the larger Reaper and Predator drones, a new area of research is UAVs working together in swarms, without human interaction.  Whilst the research so far has focused on nano UAVs, it will eventually be scaled up, to the point where we may have squadrons of Predator and Reaper drones operating as a swarm.
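The key idea behind swarming is that coordination emerges from simple local rules rather than a central controller. A minimal sketch of one such rule - boids-style cohesion, where each agent steers toward the group's average position - is below. Real swarm research layers on separation, alignment and obstacle avoidance; this single rule just illustrates the decentralised principle, and the code is a generic toy, not any military system.

```python
# Minimal sketch of decentralised swarm cohesion (boids-style): every
# agent moves a fraction of the way toward the group's average position
# each update, with no central controller issuing commands.

def step(positions, gain=0.1):
    """One swarm update for a list of (x, y) agent positions."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n   # swarm centroid, x
    cy = sum(y for _, y in positions) / n   # swarm centroid, y
    return [(x + gain * (cx - x), y + gain * (cy - y)) for x, y in positions]

swarm = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
for _ in range(30):
    swarm = step(swarm)
print(swarm[0])  # every agent has converged toward the centroid (5, 5)
```

Because each agent only needs to know its neighbours' positions, the same rule works for four quadrotors or four hundred - which is exactly what makes scaling from nano UAVs to full-sized drones plausible, and the loss of per-vehicle human oversight so concerning.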


Nano Quadrotor UAV Swarm


Chinese engineer Zhang Bingyan from the Chinese Commission of Science, Technology and Industry for National Defence unveiled a brand new type of drone at the Tianjin International UAV exhibition in August 2014. The unique aspect of the Sf-1 design is that it is inflatable.  Despite being filled with nothing but air and weighing only 90kg, it is able to reach speeds of 190km/h (120mph) and is surprisingly manoeuvrable.  Whilst it looks like a large Li-lo in the sky, it will be interesting to see how such technology is used in the area of reconnaissance.  Video of a test flight of the Sf-1 can be seen here - http://www.dailymail.co.uk/sciencetech/article-2745157/Now-s-AIRplane-Homemade-inflatable-drone-reaches-speeds-120mph.html#v-3769493192001

Photo credit : Daily Mail UK



Non-Lethal Weapons
The use of non-lethal (or less-than-lethal) weapons by police forces around the world has been seen as an attempt to decrease the number of fatal shootings by police officers who did not have other force options available to them.  In the military setting this may offer options for personnel, particularly those in peacekeeping roles within a hostile community.  Stephen Coleman has identified that the main problems regarding non-lethal weapon use by military personnel fall into four broad areas:

1) the use of any weapon in warfare, whether lethal or non-lethal, needs to be guided by the principle of discrimination (which protects non-combatants on account of their innocence).  Because a weapon is less than lethal, there is a temptation to be less concerned about discrimination, and about the targeting of civilians who would not normally be combatants - and that temptation is problematic.

2) the potential for overuse by military personnel, simply because it is easier than negotiating with hostile local populations (this has been shown to occur in policing)

3) the use of non-lethal weapons in torture and interrogation

4) the use of non-lethal weapons as a force multiplier, rather than a force-decreasing tool.  An example of this was the Moscow theatre siege, where an anaesthetic gas was pumped into the theatre, rendering most of the terrorists and hostages unconscious.  Russian troops then entered the theatre and, instead of arresting the terrorists (as criminals) or detaining them as prisoners of war, killed them whilst they were unconscious.

Stephen Coleman talking about Non-lethal Weapons on www.ted.com

Animals and New Military Technology
Another area where new military technology is pushing ethical boundaries is the adaptation of existing technologies for use in animals.  The two projects that immediately come to mind are cyborg cockroaches and the CIA's cyborg cat.  Cyborg cockroaches have now become so "mainstream" that a private company called Backyard Brains sells "science education kits" enabling high school students to create their own.
Backyard Brains Science Education Kits

In the 1960s the CIA conducted research into implanting listening devices into a cat, so that they could listen in on conversations.  The cat was fitted with an internal power source, an antenna in its tail, and a microphone in its ear canal.  There is also some evidence that the cat had probes inserted into its brain to observe "distracting" activity.  Sadly the cat was killed on its first test mission, whilst crossing the street.  The cost of the cyborg cat has been estimated at between USD 6 million and 20 million, which was a HUGE amount in the 1960s.



The Ukrainian Navy began training dolphins in 2012, not only to conduct reconnaissance but also with the potential to be armed when interacting with a "hostile enemy".  Whilst animals have been used in warfare for thousands of years, using them in a more offensive role such as this raises significant issues regarding the combatant/non-combatant status of animals in the battle space.

"Killer Dolphins" on Geo Beats


Lethal Autonomous Weapons Systems
 www.theguardian.com/science/2013/may/29/killer-robots-ban-un-warning


The Future of Life Institute has published an Open Letter calling for the banning of Autonomous Weapons Systems.

Here is the text of that open letter....

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

You can view the list of signatories here: http://futureoflife.org/AI/open_letter_autonomous_weapons (including Stephen Hawking, Skype founder Jaan Tallinn and Apple co-founder Steve Wozniak).

Several other organisations are also calling for the banning of autonomous weapons systems, such as the Campaign to Stop Killer Robots, the International Committee for Robot Arms Control, and Article 36.

Whilst I share many of their concerns, I suspect that at this point lethal autonomous weapons systems are inevitable; the debate over whether they should be developed should have happened at this level a long time ago, not on the eve of their deployment into the field. The discussion around LAWS, whilst vital for the future of warfare, also seems to ignore the elephant in the room - the use of unmanned aerial vehicles (also called drones), mainly by the USA, in a wide variety of lethal situations outside the normal theatre of war (usually in the name of the war on terror), which can only be described as assassinations or targeted killings. Whilst the automation of weapons systems does raise unique issues, it seems we need to get the issues regarding unmanned but not fully autonomous weapons systems right first.  The people of Pakistan do not care whether the drones flying overhead terrorising their children are manned or operating autonomously - the effect for them is the same.

In order to more fully understand the issues raised by these emerging technologies, it is worth looking at the work of respected ethicist Patrick Lin, who was invited to speak at the UN deliberations on LAWS during the five-day meeting on the Convention on Certain Conventional Weapons in Geneva in April 2015.  A copy of Pat's presentation, "The Right to Life and the Martens Clause", is available online to read, as are the presentations of others at the meeting.  The article "Do Killer Robots Violate Human Rights?", which Pat wrote for The Atlantic about these discussions and the issues they raised, is very interesting reading.