The views and opinions expressed or implied in WBY are those of the authors and should not be construed as carrying the official sanction of the Department of Defense, Air Force, Air Education and Training Command, Air University, or other agencies or departments of the US government or their international equivalents.

When AIs Fail to Act Human: A Legal Framework to Determine the Reasonableness of Artificial Intelligence Actions


Introduction

Imagine your boss has just asked you to take a package to a customer and instructs you to use the company’s self-driving car to do so. Your boss hands you the package and gives you the customer’s address, which you enter into the car’s navigation system. The car carefully drives out of the parking lot and onto a nearby road, but as it approaches an intersection, you realize an overgrown shrub is obstructing the stop sign. The car fails to stop and proceeds through the intersection.

Shortly after, the self-driving car enters an on-ramp for a busy interstate. You observe a temporary construction area in the process of being dismantled; the reduced speed limit clearly no longer applies, because traffic is proceeding at normal speeds. Workers have not yet removed a temporarily posted sign indicating a much lower speed limit, however, and your car reduces its speed to match it. Transiting vehicles quickly swerve around you to avoid an accident and honk their horns disapprovingly.

As you exit the interstate, you approach an intersection and see that the traffic light is red. Your car fails to stop and proceeds right into oncoming traffic, almost colliding with a school bus. You later find out that someone, in an attempt to create public distrust in all self-driving vehicles, placed a translucent piece of tape over the traffic light. The tape did not change how the light appeared to human drivers, but it caused self-driving cars to interpret the light as green.

When you finally approach the customer’s address, the customer is waiting and walks out to greet you. The self-driving car, however, does not slow down despite the presence of the customer and, instead, runs into them, causing significant injuries. You call 911, and the injured customer is subsequently transported to the hospital by ambulance.

Legal Framework: A Four-Prong Analysis

This scenario contains several instances in which the self-driving car’s artificial intelligence (AI) system failed to act as a human generally would have acted or would have known how to act. Each point of failure invokes a different level of liability or culpability and a potentially different application of law. To evaluate the reasonableness of an AI’s actions, evaluators should turn to a four-prong analysis: (1) was the information received by the AI prima facie true; (2) did the AI know how to interpret the information; (3) was the information received by the AI honest; and (4) were the actions of the AI reasonable? These four prongs should help determine not only what went wrong but also where to allocate fault and what area of law should apply to the facts surrounding that failure.
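Because the prongs build on one another, the analysis can be read as a sequential decision procedure. The sketch below is a minimal illustration of that logic in Python; the names (ProngFindings, evaluate_ai_action) and the short-circuit ordering are assumptions made for illustration, not part of any actual self-driving system or legal authority.

    from dataclasses import dataclass

    @dataclass
    class ProngFindings:
        """Answers to the four prongs for a single instance of AI failure (hypothetical structure)."""
        information_prima_facie_true: bool  # Prong 1: was the input what it appeared to be?
        ai_knew_how_to_interpret: bool      # Prong 2: was the AI equipped to interpret the input?
        information_honest: bool            # Prong 3: was the input free of tampering or fraud?
        action_reasonable: bool             # Prong 4: would the act satisfy the reasonable person standard?

    def evaluate_ai_action(findings: ProngFindings) -> str:
        """Walk the prongs in order and report where fault-finding should focus."""
        if not findings.information_prima_facie_true:
            return "Perception failure: investigate why the AI did not receive the input as it appeared."
        if not findings.ai_knew_how_to_interpret:
            return "Programming gap: the AI was never equipped to interpret this kind of input."
        if not findings.information_honest:
            return "Intervening force: ask whether the AI could, and should, have detected the false input."
        if not findings.action_reasonable:
            return "Unreasonable act despite true, interpretable, honest input: likely a defect or malfunction."
        return "The AI acted reasonably on the information available to it."

Applied to the stop-sign failure, for example, the first prong is satisfied (it appeared there was no sign), so the inquiry moves to the second prong and whether the car could interpret the other cues at the intersection.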

Was the information received by the AI prima facie true?

This question intends to answer whether, at face value, the information received by the AI was what it appeared to be. It does not intend to answer whether the information was honest. To illustrate this point, think about the following: you paint a tree’s leaves purple. Later, a passerby observing your unique creation states, “Look at that! I’ve never seen a tree with purple leaves.” If you were asked whether the individual saw a tree with purple leaves (was the information received prima facie true?), the answer would be yes because the tree does appear to have purple leaves, even though the color of the leaves is not honest. If the individual looked at the tree and said, “Look at that tree with blue leaves!” then the information received by the individual would be prima facie false.

In the self-driving car scenario, examine the first instance of AI failure, when the self-driving car does not stop for the stop sign because the overgrown shrub obscures it. On its face, the visual information presented to the self-driving car indicated that there was no stop sign; so, the information received by the self-driving car was prima facie true—it appeared there was no stop sign. Thus, we can answer this question in the affirmative. Now assess the scenario’s instance of AI failure involving the traffic light. The light appeared to be red, but the car interpreted the light as green. In this instance, the answer to the question of whether the information received by the self-driving car was prima facie true is no. The light appeared to be red, but the car did not receive that information properly.

Did the AI know how to interpret the information?

After determining whether the AI received prima facie true information, the next step is to decide whether the AI knew how to interpret that information. For instance, if the self-driving car had been imported from China and its programming housed information only about China’s traffic rules, then the car may not have known how to interpret American road signs or traffic lights. In our scenario, even though the stop sign was obscured by the overgrown shrub, perhaps the self-driving car had been programmed with reasoning algorithms based on additional visual information it received (it was at an intersection, signs posted in other directions indicated a two-way or four-way stop, etc.) that should have resulted in the car stopping, because it knew how to interpret that additional information. If that were the case, further investigation would be necessary to explain why the AI failed to act in accordance with its programming.

Even when an AI knows how to interpret information, a question may still arise about whether the AI should have interpreted the information in the way that it did, in light of the present circumstances. When the self-driving car failed to proceed at normal interstate speeds as it approached temporary road construction, the AI obviously interpreted the posted speed limit sign correctly, but there may be some argument about whether it should have interpreted the speed limit sign in that way. Should the self-driving car’s AI system have been programmed to mimic the action of the vehicles around it? Should it have been programmed to anticipate when a human was going to remove a speed limit sign? These are questions that can be explored when determining whether the AI knew how to interpret the information at all.

Was the information received by the AI honest?

If the information received by the AI was what it appeared to be, and the AI knew how to interpret that information, then why did the AI fail to act in accordance with that information? The purpose of the third prong is to determine whether an intervening force manipulated the actions of the AI by presenting fraudulent information. In the self-driving car scenario, the car approached a red traffic light, which it had presumably been programmed to interpret; yet the car failed to interpret the red traffic light as it appeared. The scenario explains that the light had been tampered with in such a way as to cause the self-driving car to interpret the signal differently from the way it appeared. Yet the self-driving car is not completely absolved of responsibility based on that information alone.

If the answer to this prong is no, a sub-analysis should be explored: (1) did the AI know how to distinguish false information and, if not, (2) should the AI have known how to distinguish false information? If the AI knew how to distinguish the false information but failed to conform its behavior accordingly, the likely culprit is a defect or malfunction. Whether the AI should have known how to distinguish the false information is a foreseeability question that points to a failure in the AI’s design.

Were the actions of the AI reasonable?

In law, the customary way to measure whether an act performed by a human was reasonable is called the “reasonable person standard.” This standard is often used in criminal, tort, and contract law as a way to determine negligence and intent. It is a legal doctrine positing that, in any given situation, each person has a duty to behave as a reasonable person would in the same or similar circumstances. Whether an action is reasonable is often considered through two lenses: subjective and objective. The subjective lens examines the action from the point of view of the actor. The objective lens examines the action from the point of view of the audience.

The purpose of the first three prongs of the analysis is to help determine the answer to the fourth. For instance, in the self-driving car scenario, the car received prima facie true information when the customer came out to receive their package—there was actually a person in the path of the car. The AI should have been programmed to interpret that information (though consultation with its developers might be necessary to confirm), and the information was honest—there really was a person in the path of the car. The question of whether the actions of the AI were reasonable must then be assessed. In this instance, the answer is clear: both from the point of view of the self-driving car and from the point of view of the audience, the actions of the self-driving car (its AI system) were unreasonable.

Once a determination has been made that the self-driving car acted unreasonably, why the car acted unreasonably can then be unpacked. For instance, in the injured-customer scenario, one might inquire whether the car recognized that there was a person in its path. If not, was it because the car was not programmed correctly? If so, this can likely be chalked up to a design issue; liability would fall on the engineers, who should have known that people often walk into roadways for various reasons. If the car was programmed to recognize there was a person in its path, did it fail to stop because of a malfunction? If so, the manufacturer might be at fault. What if the customer darted out in front of the car while it was still moving? In that instance, the customer might be liable. What if someone hacked the self-driving car’s programming and disabled the car’s ability to recognize persons in the road? Then the hacker would be criminally liable. The engineer of the car could also be civilly liable if the engineer knew about a weakness in the AI’s design that made it susceptible to hacking.
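The line of questions in the preceding paragraph can also be read as a decision tree. The sketch below restates that reasoning in Python; the function name (allocate_fault), its parameters, and the example call are hypothetical and are meant only to show how the questions order themselves, not to state a rule of law.

    def allocate_fault(programmed_to_recognize_people: bool, malfunctioned: bool,
                       customer_darted_out: bool, hacked: bool,
                       known_design_weakness: bool) -> list:
        """Restate the paragraph's fault-allocation questions as a simple decision tree."""
        liable = []
        if hacked:
            liable.append("hacker (criminal liability)")
            if known_design_weakness:
                liable.append("engineer (civil liability for a known, hack-prone design weakness)")
        elif customer_darted_out:
            liable.append("customer (stepped into the path of a moving car)")
        elif not programmed_to_recognize_people:
            liable.append("engineers (design issue: foreseeable pedestrians were not accounted for)")
        elif malfunctioned:
            liable.append("manufacturer (malfunction despite correct programming)")
        return liable

    # Example: the car was properly programmed, but a hacker disabled pedestrian detection.
    print(allocate_fault(programmed_to_recognize_people=True, malfunctioned=False,
                         customer_darted_out=False, hacked=True, known_design_weakness=False))
    # ['hacker (criminal liability)']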

The Four Prongs in Relation to DOD AI Ethical Principles

In relation to the Department of Defense (DOD), understanding and exploring these four prongs could also help prove or disprove adherence to the ethical principles recently recommended to the DOD by the Defense Innovation Board. These recommendations insist that the DOD’s use of AI follow five main principles:

  1. Responsible: DOD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable: The DOD will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable: The DOD’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable: The DOD’s AI capabilities will have explicit, well-defined uses and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life cycles.
  5. Governable: The DOD will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.1

As you work through the self-driving car scenario above and begin to answer questions about whether the car acted reasonably in each instance of failure, you will find that many of the answers prove adherence to, or divergence from, these ethical principles. For instance, in the scenario, the self-driving car demonstrated that it was not reliable when it did not stop at the stop sign or the traffic light. But what about the principle of governable? Traceable? We do not have enough information from the scenario to evaluate those principles, but the four-prong analysis is an appropriate place to start investigating.

Conclusion

The sooner we know how to determine the reasonableness of an AI’s actions, the sooner we can develop AI in accordance with that understanding. The proposed four-prong analysis is a simple legal framework to assist AI engineers in understanding the requirements necessary for an AI to function legally and seamlessly in a human setting. Ensuring AIs behave reasonably in everyday situations promotes trust in AI operations, and trust in AI is an area in need of attention and cultivation. AIs do not need to act perfectly, and humans do not need to like or love AI to trust it; they only need to know that AIs will act reasonably. The more consistently an AI behaves reasonably—that is, more like a human—the more readily its incorporation into everyday human life will be accepted.

Captain Christina Heath, USAF

Captain Christina Heath (Juris Doctor, Florida Coastal School of Law) is the Chief of Military Justice at the 78th Air Base Wing, Robins Air Force Base, Georgia. She manages a nine-person justice team and oversees the administration of all punitive and administrative disciplinary actions involving military members assigned to the installation.
This paper was written as part of the SOS Air University Advanced Research (AUAR) elective, Artificial Intelligence section.

Notes

1 Defense Innovation Board, “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” accessed 15 December 2020, https://media.defense.gov/.
