Unmanned Aerial System Safety
Abstract
The research presented in this graduate capstone project for the Master of Science in Aeronautics, Unmanned Aerospace Systems concentration, provides a mixed-methods analysis of unmanned aerial system (UAS) mishaps and incidents that have been classified by contributing human factors using the Department of Defense (DoD) Human Factors Analysis and Classification System (HFACS) version 7.0. Three politically and operationally diverse government incident reporting systems were coded with contributing human factors of acts, preconditions, supervision, and organizational influences to determine whether formal training requirements show an association with reduced human factors mishaps. The U.S. National Aeronautics and Space Administration (NASA) Aviation Safety Reporting System (ASRS), the Australian Transport Safety Bureau (ATSB) mishap database, and the U.S. Department of the Interior (DOI) Office of Aviation Services (OAS) Safety Communiqué (SAFECOM) database were compared using Spearman’s correlation coefficient (α = .05). The HFACS-coded ASRS reports did not show a strong positive correlation with either the ATSB or SAFECOM systems, but a strong positive monotonic relationship (rs = .69, p = .003) was observed between the ATSB and SAFECOM reporting systems. A qualitative analysis showed a significant outlier for the low incidence of training-related mishaps in the DOI compared to reports from the ASRS and the ATSB. An industry-led training standards development and reporting process is recommended to verify the impact of formal training and encourage the safe integration of UAS into the U.S. National Airspace System.
Keywords: human factors, formal training, unmanned aerial systems, safety reporting, Australia, Department of the Interior
Project Title
Training to influence attitude and situation awareness
Approved Proposal
The research presented in this proposal will be used to make a practical training recommendation that addresses reported incidents of airspace incursions, deviation from clearances, procedural deviations, and other unsafe acts as reported by remote pilots and air traffic controllers (ATC) in the National Aeronautics and Space Administration (NASA) Aviation Safety Reporting System (ASRS) database.
Scope
The research presented in this graduate capstone individual project will examine the potential for a link between the regulations in Australia that require practical unmanned aircraft system (UAS) training and the incidence of unsafe acts by UAS operators in the United States. The Human Factors Analysis and Classification System (HFACS) will be used to standardize the data used in this research. A smaller-scale comparison will be made of manned and unmanned historical aviation incidents that occurred within the Department of the Interior (DOI), which requires practical training for any personnel who wish to utilize UAS. Determining specific training methodologies, syllabi, or suitable training platforms is outside the scope of this research.
Objective
The objective of this research is to improve the safety and public perception of commercial UAS operations by providing examples of successful implementation of UAS training in another country and in a large domestic government organization. Training will not prevent all forms of unsafe acts by remote pilots, but the hypothesis to be tested is that training does influence a specific type of error related to attitude and situation awareness.
The research will be conducted using mixed methods: utilizing the HFACS taxonomy to categorize reports of unsafe acts and relate categorical outliers, and testing the hypothesis with quantitative statistical analysis. The hypothesis is that no statistically significant correlation (p > .05) will exist between a UAS operational system that does not require formal training and two other systems that do. To verify the hypothesis, a second hypothesis is that a statistically significant correlation will exist between the two UAS operational systems that require formal practical UAS training.
Program Outcomes
PO #1: The student will be able to apply the fundamentals of air transportation as part of a global, multimodal transportation system, including the technological, social, environmental, and political aspects of the system to examine, compare, analyze, and recommend conclusions.
This program outcome will be addressed by examining, comparing, and analyzing the political differences between training requirements for the commercial U.S. FAA Remote Pilot Certificate (RPC) and Australia’s Civil Aviation Safety Authority (CASA) Remote Pilot License (RePL) and by recommending conclusions. The growth of the UAS industry and its impact on the multimodal transportation system will be summarized, along with analysis of the technological solutions that have been proposed to reduce the social and environmental implications of a UAS colliding with a manned aircraft and of the public perception of UAS as part of the global air transportation system.
PO #2: The student will be able to identify and apply appropriate statistical analysis, to include techniques in data collection, review, critique, interpretation, and inference in the aviation and aerospace industry.
Data collected from the NASA ASRS database and the Australian Transport Safety Bureau (ATSB) accident database will be interpreted, reviewed, and critiqued to infer the statistical significance (α = .05) of a federally regulated commercial training requirement on the incidence of unsafe acts by remote pilots. Information gathered from the DOI SAFECOM database will also be collected and reviewed to compare the safety actions in a formalized UAS training program.
PO #3: The student will be able—across all subjects—to use the fundamentals of human factors in all aspects of the aviation and aerospace industry, including unsafe acts, attitudes, errors, human behavior, and human limitations as they relate to the aviator’s adaptation to the aviation environment to reach conclusions.
The research presented in this graduate capstone project (GCP) seeks to determine the influence of training on the change in human behavior and awareness of human limitations related to controlling automated systems that interact with a multimodal airspace system with the objective of reducing errors, unsafe acts, and attitudes committed by professional commercial remote pilots.
PO #4: The student will be able to develop and/or apply current aviation and industry-related research methods, including problem identification, hypothesis formulation, and interpretation of findings to present as solutions in the investigation of an aviation/aerospace-related topic.
Using the NASA ASRS database to plot the increase of reported UAS incidents in the U.S., this research will identify a problem that will be addressed through interpretation of the findings in other POs. The preliminary hypothesis is that requiring practical training in a UAS system leads to safer operations and a reduction in unsafe acts and errors. Analysis of the CASA-required RePL training and DOI UAS training program will be interpreted as findings to present as solutions.
PO #13: Unmanned Aerospace Systems. The student will investigate, compare, contrast, analyze and form conclusions to current aviation, aerospace, and industry-related topics in unmanned aircraft and spacecraft systems, including UAS systems, robotics and control, unmanned systems operations and payloads, and human factors in unmanned systems.
The research presented in this GCP will address the current topic of unmanned aircraft incursion into restricted or unauthorized airspace, causing a danger to the national air transportation system. UAS systems that utilize concepts in robotics and control such as autonomous navigation, teleoperation, and integration of payloads such as sense-and-avoid technology will be described as methods of improving the human factors aspect of situation awareness during UAS operation in complex airspace systems.
Project Introduction
Civil commercial UAS are a large area of growth in the U.S. national airspace system (NAS), and safety of operations is the primary goal of successful integration. As UAS become more integrated into a shared airspace system, avoiding airspace conflicts and near midair collisions (NMAC) becomes a priority. As the number of UAS operations grows, formal training is needed in safe operations and use of controlled airspace. As of February 1, 2018, there are 73,157 commercial RPC holders in the United States (Federal Aviation Administration [FAA], 2018a) and 1,371 waivers granted to RPC holders to operate within controlled airspace near airports (FAA, 2018b). The Federal Aviation Administration (FAA) Center of Excellence for UAS Research (COE) published a report in 2017 to determine the potential damage that would occur if a 1.2 kg (2.7 lb) quadcopter UAS impacted a commercial transport jet or a business jet and determined a UAS impact would cause more damage than a bird strike (Alliance for System Safety of UAS through Research Excellence, 2017). To date, one documented mid-air collision between a UAS and a manned aircraft has occurred within the U.S. NAS, when a recreational UAS pilot flew a small quadcopter UAS beyond visual line of sight (VLOS) and into the path of a military UH-60M helicopter (National Transportation Safety Board, 2017). Worldwide, five additional mid-air collisions between manned aircraft and UAS have occurred since 1997 with one of them resulting in fatalities (Australian Transport Safety Bureau [ATSB], 2017).
U.S. Commercial UAS Requirements
By law in the U.S., all civil aircraft operation requires a certificated pilot, a certified and registered aircraft, and operational approval; commercial UAS operation within the U.S. was originally authorized through Section 333 of the FAA Modernization and Reform Act (FMRA) of 2012, which gave the FAA authority to determine whether a UAS operation required an airworthiness certificate. UAS operators applied for waivers from existing regulations for certificated aircraft, and the commercial UAS industry could proceed within U.S. national airspace. As of September 2016, 5,551 individuals and organizations had received waivers to operate under Section 333 (FAA, 2016a). All of them still require a manned aircraft pilot certificate to operate UAS, as Section 333 of the FMRA did not provide the authority to waive the pilot certificate requirement.
Section 332 of the 2012 FMRA required the Secretary of Transportation to develop a plan to safely accelerate the integration of civil UAS into the NAS (FAA Modernization and Reform Act of 2012). A Notice of Proposed Rule Making (NPRM) was issued on February 23, 2015 that proposed regulations to address UAS operations, remote pilot certification, and aircraft registration. A total of 4,708 comments were received in response to the NPRM (Operation and Certification of Small Unmanned Aircraft Systems, 2015). The final rule was released on June 28, 2016 as Title 14, Code of Federal Regulations (14 CFR) Part 107. The preamble to the final rule responded to the NPRM commenters on every topic. Required formal training was discussed, noting that many knowledge areas and skills required to fly a manned aircraft do not apply to UAS; specifically, “how to maintain visual line of sight of the unmanned aircraft or how to respond when signal to the unmanned aircraft is lost” (Operation and Certification of Small Unmanned Aircraft Systems, 2016a). Due to the VLOS containment of UAS under Part 107, the FAA took a risk-based approach to airman certificates for remote pilots and decided not to “propose specific training, flight experience, or demonstration of proficiency in order to be eligible for a certificate” (Operation and Certification of Small Unmanned Aircraft Systems, 2016a).
Australian Commercial UAS Requirements
Commercial UAS in Australia are regulated by CASA under Civil Aviation Safety Regulations (CASR) 1998, Volume 3 Part 101, Subpart 101.F (2017). As of September 2016, there were 5,780 commercial RePL holders in Australia (Civil Aviation Safety Authority [CASA], 2017b). Unmanned aircraft that weigh less than 2 kg may operate under an “excluded” category that includes notification prior to flight and operation within a strict list of standard operating conditions that mirror most of the same limitations of the FAA Part 107 regulations without requiring airman certification (CASA, 2017a). A summary of operating conditions and regulations that are common between FAA Part 107 and CASA Part 101 “excluded” category is located in Appendix A, Figure A1.
To operate unmanned aircraft over 2 kg or outside of the excluded category standard operating conditions, remote pilots must be authorized through a RePL, which requires practical type training with a Remotely Piloted Aircraft System (RPAS) training provider. Specific required competency-based skills include navigating an RPAS, launching an RPAS, controlling an RPAS in normal and abnormal flight conditions, recovering an RPAS, and applying situational awareness in RPAS operations (Australian Government, 2016). In turn, RePL holders may only fly for authorized operators holding an RPA operator’s certificate (ReOC). To operate with a ReOC, a safety case and comprehensive risk assessment must be completed for every operation that requires approval under Part 101. ReOCs are issued by RPA categories of multi-rotor, fixed-wing, and helicopter type aircraft across four different weight classes (CASA, 2018a). As of February 15, 2018, there were 1,255 ReOC holders in Australia (CASA, 2018b).
Overview of NASA Aviation Safety Reporting System
ASRS is a database of voluntary reports by pilots, ATC, mechanics, and flight attendants describing unsafe acts or behaviors that led to a near miss or an inadvertent violation of aviation regulations. To encourage participation, pilots who self-report near misses or deviations from regulations or ATC instructions to the ASRS are assumed to have a “constructive attitude” that will tend to prevent future violations, so the FAA will not pursue enforcement action (FAA, 2011, p. 4). Given the voluntary nature of the ASRS, the reports are based on the interpretation of the individual. As a result, the sample of airmen represented in the ASRS database cannot be assumed to be part of a normally distributed population. However, the presence of incidents and violations does represent the presence of unsafe occurrences, and the ASRS data can be considered a quantifiable lower bound on the actual number of occurrences, many of which go unreported (Connell, 2017).
Overview of DOI Safety Communiqué
The DOI Office of Aviation Services (OAS) provides aerial support for DOI agencies such as the National Park Service and the U.S. Geological Survey. OAS aircraft perform a multitude of operations including aerial fire suppression, wildlife surveys, search and rescue, and orthographic mapping. As such, the OAS maintains comprehensive operational procedures that apply to all use of UAS. All UAS operations fall under FAA Part 107, but the OAS also prescribes UAS aircraft airworthiness, preventative maintenance programs, safety hazard reporting, and a required initial UAS training course (A-450 Basic Remote Pilot) that includes an evaluation by a designated UAS Pilot Inspector (Bathrick, 2018).
The vision of the OAS is to “attain and sustain zero aircraft accidents across DOI” (Department of the Interior [DOI], 2018, para. 1). A practical aspect of this vision is the department-wide utilization of the DOI Safety Communiqué (SAFECOM) reporting system. Early studies by the DOI OAS indicated a correlation between reporting of all unsafe incidents, occurrences, and hazards, and the fleet-wide mishap rate. The SAFECOM database is also used to provide “kudos” to operators who react to unsafe situations in an exemplary manner (DOI, 2017a). The database is searchable without DOI credentials.
Review of Relevant Literature
Investigations of causal human factors in aviation incidents rely on a consistent and thorough taxonomy. A significant contribution to human factors research was made in 2000 by Shappell and Wiegmann, who assessed the utility of a standardized HFACS. The original HFACS was developed and tested by the U.S. military to aid aviation accident analysis, and the work of Shappell and Wiegmann sought to determine the suitability of the HFACS in the civilian commercial industry. Since the HFACS addresses human error at every level of the system, it enables companies and regulatory bodies to make interventions at the organizational level. Later studies by Kevin Williams of the FAA Civil Aerospace Medical Institute examined the suitability of the HFACS for analysis of UAS human factors mishaps.
UAS accident data was not available until the FAA began regulation of commercial UAS operations, so earlier research efforts relied on data from military UAS accidents. Williams (2004) analyzed data from the U.S. Army, Navy, and Air Force to classify accidents into categories that included human factors, maintenance, aircraft, and unknown. The human factors-related accidents were further categorized as alerts/alarms, display design, procedural error, skill-based error, or other. The data were classified using the stated factors in the accident reports and the personal judgment of Williams. The difficulty with utilizing data across three branches of military flight operations was the vastly different philosophies of UAS operation, user interfaces, procedures, and aircrew training. The percentage of human factors-related accidents varied by aircraft type, between 21% and 68% (Williams, 2004). However, Williams’ study did not utilize a formal HFACS.
The framework of the HFACS developed by Wiegmann and Shappell and adopted by the Department of Defense primarily existed to determine latent failures in organizational influences, unsafe supervision, and unsafe preconditions that would create an environment predisposed to human error, specifically unsafe acts. Tvaryanas, Thompson, and Constable (2005) reviewed six prior studies of UAS human factors and discovered the lack of a common classification framework between the studies, with the exception of one study that utilized the Wiegmann and Shappell HFACS but analyzed only U.S. Army UAS data (Manning, Rash, LeDuc, Noback, & McKeon, 2004). Tvaryanas et al. then reviewed ten years of UAS accident data, coding the human factors-related accidents using the Department of Defense (DoD) HFACS (version 5.7). Again, causes for unsafe acts were found to vary widely between branches of the military. Skill-based errors were more common in Air Force mishaps, while procedural violations were more common in Army operations. The overall conclusion showed that latent failures at the organizational and supervisory levels, together with preconditions for operation, contributed to more than half of the UAS mishaps analyzed (Tvaryanas et al., 2005). The results of this study further validate the use of the HFACS for UAS incident analysis.
Classification systems used to make safety assessments need to be updated on a regular basis to ensure the validity and efficacy of the ratings. DoD HFACS version 7.0 was developed and validated in 2015 by King, Strongin, Lawson, and Kuhlmann of the Air Force Research Laboratory, 711th Human Performance Wing. The objective of the HFACS update was to improve inter-rater reliability while conserving compatibility with existing databases. The HFACS prior to 2013 contained 149 rating codes, several of which were rarely or never used. HFACS rating codes were condensed and re-written, and a stepwise checklist was developed to systematically guide investigators and researchers through the consideration of “nanocodes” (King, Strongin, Lawson, & Kuhlmann, 2015). The re-worked HFACS was validated through several test iterations of the newly formed checklist. The final conclusion acknowledged that an “optimally reliable” human factors taxonomy had yet to be realized, but the development of a guiding checklist provides a reliable framework for applying a common HFACS when rating different databases. A flowchart adapted from the HFACS v. 7.0 checklist is located in Appendix A, Figure A2.
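To make the stepwise character of such a checklist concrete, the following sketch (in Python) walks a single report through the four DoD HFACS tiers. The tier names follow the HFACS structure used in this project; the individual category labels and the example event are illustrative assumptions only, not the official v. 7.0 codes or nanocodes defined in King et al. (2015).

# Minimal sketch of a stepwise, checklist-style HFACS coding pass.
# Tier names follow the DoD HFACS structure; the category labels below are
# illustrative placeholders, not the official v. 7.0 codes or nanocodes.
HFACS_TIERS = {
    "Acts": ["Skill-Based Error", "Judgment/Decision Error", "Violation"],
    "Preconditions": ["Cognitive Factors", "Communication/Coordination", "Physical Environment"],
    "Supervision": ["Inadequate Supervision", "Planned Inappropriate Operations"],
    "Organizational Influences": ["Climate/Culture", "Policy/Process", "Resources"],
}

def code_report(narrative, rater_choices):
    """Record the narrative being rated and the rater's tier-by-tier selections,
    keeping only categories defined in the taxonomy, as a checklist would."""
    coded = {"narrative": narrative}
    for tier, categories in HFACS_TIERS.items():
        coded[tier] = [c for c in rater_choices.get(tier, []) if c in categories]
    return coded

# Example: a lost-link airspace deviation coded as a skill-based error with a
# communication-related precondition (illustrative values only).
example = code_report(
    "UAS entered controlled airspace after a lost-link event.",
    {"Acts": ["Skill-Based Error"], "Preconditions": ["Communication/Coordination"]},
)
print(example)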
The validity of a reporting system is only confirmed if the data is accurate. Data from self-reported databases can only be considered a lower bound of known incidents, in part because unsafe acts may go unreported if the outcome is innocuous. Gilbey, Tani, and Tsui (2016) performed two qualitative studies to measure the likelihood of reporting unsafe acts. The first study measured participants’ responses to descriptions of unsafe acts; those with bad outcomes were much more likely to be reported. In the second study, prompts were given to influence the likelihood of reporting unsafe acts, and evidence showed that explicit prompts increased the likelihood of reporting unsafe acts with innocuous outcomes. This study highlights the difference between the implicit nature of optional ASRS reports and the explicit reporting requirement of the SAFECOM database.
A significant number of near mid-air collisions have been voluntarily reported to the ASRS by UAS crewmembers, ATC, and manned aircrew. Sharma (2016) analyzed UAS incidents in the NAS using descriptive statistics, determining the frequency of reports containing accounts of airspace violations and NMAC by UAS in the NAS. Data were obtained from the NASA ASRS, the FAA Aviation Safety Information Analysis and Sharing (ASIAS) system, the FAA NMAC system, and FAA-released UAS Sightings Reports. An upward trend in overall UAS-related reporting was noted. Class B airspace was the location of most UAS-related events, with California and New York being the highest-reporting regions for UAS incidents (Sharma, 2016). Several UAS management technologies were offered as recommendations for reducing the number of safety incidents. Management techniques included geo-fencing, real-time flight planning applications, a proposed UAS Traffic Management (UTM) system, and ground-based sense-and-avoid (GBSAA) systems such as TIS-B and ADS-B (Sharma, 2016). Recommendations for performance- or risk-based standards were outside the scope of the research. The data were categorized across all UAS events rather than being specific to unsafe acts by pilots. While providing a valuable summary of UAS incidents and potential technological solutions, the data remain open to analysis of specific occurrence categories to further determine whether technology would positively impact safety or whether a more holistic approach is necessary.
A characteristic of modern commercial UAS is the highly automated nature of the navigation system. Teleoperation provides the least amount of automation, but sensors and controllers provide autonomous navigation capability for a greater level of precision. However, taking a passive, human-on-the-loop approach to control has human factors implications for the ability to efficiently detect and react to system failures, as studied by Molloy and Parasuraman in a 1996 article in Human Factors. Three groups were tasked with performing complex flight simulation tasks that included tracking aircraft location, managing fuel load, and monitoring engine status. The summary of the study showed that “automation-related monitoring inefficiency occurs even when there is a single automation failure” (Molloy & Parasuraman, 1996, p. 311). The poorest performance in automation monitoring occurred when the participants were simultaneously engaged with a manual task. The research was also the first to empirically highlight a “vigilance decrement” for automation monitoring, where vigilance was lower in the final ten minutes of the simulator flights than in the first ten minutes (Molloy & Parasuraman, 1996). However, the research was performed with participants who did not have prior experience with the tested automation tasks, leading to a recommendation for replicating the research with trained or experienced pilots. Although the research was not followed up in this regard, the acknowledgment of an impact on vigilance through experience and training provides a compelling area of research in UAS training.
In 2002, the U.S. Air Force studied the impact of flight experience on the speed and accuracy of pilots to learn to fly the RQ-1A Predator UAV (Schreiber, Lyon, & Martin, 2002). Seven groups of military and civilian pilots were trained to fly the RQ-1A with experience including qualified RQ-1A pilots, T-38 pilots, civilian pilots with single-engine instrument experience, and non-pilots. The results showed that civilian pilots with recent experience scored nearly as well as military T-38 pilots with recent experience on performance. The results showed that 150-200 hours of previous flying experience was sufficient to prepare a pilot for RQ-1A training (Schreiber et al., 2002).
The FAA utilized a risk-based approach to developing the Part 107 regulations. Baldwin and Black (2008), in the journal Law and Policy, advocated a responsive risk-based regulatory environment. A risk-based approach to regulation uses an “evidence-based means” to prioritize resources and attention to the highest potential risks. Prioritizing resources permits regulators to respond to issues with less time and fewer resources than would be required to respond to all issues. Baldwin and Black (2008) developed a framework to advise regulators on methods of responsive risk analysis. The elements that regulators should be responsive to include the regulated firms’ behavior, attitude, and culture; the institutional environment; regulatory performance; and change (Baldwin & Black, 2008). Using this framework sheds light on the efficacy of the FAA’s analysis in developing Part 107 regulations that did not include a practical training element.
A slightly different approach to measuring the effectiveness of human monitoring of automation was used by Williams (2012), who examined the effect of sensory input on pilot reaction to UAS system failures. Aspects of the study also examined pilot experience and the level of automation. The greatest effect was gained from the presence of sound in reacting to engine failures. In contrast to Molloy and Parasuraman (1996), higher levels of automation did not lead to vigilance decrements (complacency). Additionally, pilots performed better than non-pilots during simulated manual control conditions, suggesting that previous flight experience makes a difference, although Williams stopped short of concluding the differences were due to aircraft training and flight experience. Although the results did not conclusively suggest a significant impact of practical training on UAS operations, they do highlight the need for further research in this area.
Situation awareness, or the lack thereof, is a large component of the commercial UAS incidents in the NASA ASRS database. Situation awareness (SA) was defined by Endsley (1988) as the perception of elements of the operating environment, the comprehension of their meaning, and the projection of their status in the near future. However, Drury, Riek, and Rackliffe (2006) suggest that Endsley’s widely accepted definition of SA assumes that the human is the only intelligent part of the system. In the control of robotic systems such as UAS, the robots need information about the environment as well as operating instructions. The human operator is just one of the many parts of the UAS and is not the primary source of sensory data. Drury, Riek, and Rackliffe (2006) studied UAS-related SA for the three roles that also appear in NASA ASRS reports: UAS operators, ATC, and pilots of manned aircraft. The ASRS reports often cite only a “lack of SA” without defining the types of awareness that need to be aided for better operational safety. Through their observations, the researchers developed recommendations for better human-UAS interfaces to increase the fidelity of UAS SA. This research is important because it suggests that spatial awareness is more an issue of the human-machine interface than of training.
Methodology
To determine whether a monotonic relationship exists between the ASRS, ATSB, and SAFECOM databases, the reports must be classified with a common taxonomy. The DoD HFACS v. 7.0 checklist (King et al., 2015), with the adapted flowchart located in Appendix A, Figure A2, was used to process the reports in each of the databases and determine the contributory human factors that may have influenced the actions of the operator. Seventeen codes are used to categorize human factors events by operator acts, preconditions, supervision, and organizational influences, with each code containing specific nanocodes for further classification. A summary table of the coded databases is located in Appendix B, Table B1. An a priori power test of the bivariate correlation performed using G*Power (Faul, Erdfelder, Buchner, & Lang, 2009b) showed that the total sample size could remain below 19 for a strong positive correlation (rs > 0.5), so nanocodes were not used to further process the data.
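For transparency, the pairwise comparison can be reproduced with standard statistical software. The sketch below (Python with SciPy) uses placeholder frequency vectors in place of the actual coded counts in Appendix B, Table B1, and shows only the form of the Spearman test and the decision rule at α = .05.

# Minimal sketch of the pairwise Spearman comparison (placeholder data only).
# Each vector holds the frequency of one of the 17 HFACS category codes in a
# reporting system; the real counts are tabulated in Appendix B, Table B1.
from scipy.stats import spearmanr

asrs    = [4, 7, 2, 0, 1, 3, 5, 0, 2, 1, 0, 6, 2, 1, 0, 3, 2]  # placeholder
atsb    = [3, 6, 1, 0, 2, 2, 4, 1, 2, 0, 0, 5, 1, 1, 0, 2, 1]  # placeholder
safecom = [2, 5, 1, 1, 1, 2, 3, 0, 1, 0, 0, 4, 1, 0, 0, 2, 1]  # placeholder

ALPHA = 0.05
pairs = {
    "ASRS vs. ATSB": (asrs, atsb),
    "ASRS vs. SAFECOM": (asrs, safecom),
    "ATSB vs. SAFECOM": (atsb, safecom),
}

for label, (x, y) in pairs.items():
    rs, p = spearmanr(x, y)
    decision = "reject H0" if p < ALPHA else "fail to reject H0"
    print(f"{label}: rs = {rs:.2f}, p = {p:.3f} -> {decision} at alpha = {ALPHA}")

Substituting the coded category counts from Table B1 for the placeholder vectors yields the correlations discussed in the results.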
ASRS Analysis
Although the ASRS data is self-reported, it is de-identified and analyzed by NASA so that the database can be searched by event type and operator type. As of February 10, 2018, 100 ASRS results appeared in a search of the database with aircraft type filtered by “UAS” (National Aeronautics & Space Administration, 2018). To process the database for comparison with the civil ATSB database, the data set was downloaded in Microsoft Excel spreadsheet format and further filtered. Data from reports concerning military UAS were excluded. Reports by observers who were not part of the UAS operation were not included because human factors could not be reliably determined from a third-person narrative. Reports containing irrelevant data, such as personal disagreement with current regulations, were also rejected. The resulting reports provided a sample set of 30 civil UAS incidents that included a combination of unsafe acts, errors, mishaps, and lost-link events. The user narratives and NASA synopses of causal factors were coded with the HFACS v. 7.0 checklist to categorize each reported event for statistical analysis. A summary of coded ASRS reports can be found in Appendix B, Table B1.
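The exclusion steps above can be expressed as a short filtering script. The sketch below (Python with pandas) is a minimal illustration only: the file name and column labels are assumptions, as the actual ASRS export uses different headers, and the screening of irrelevant narratives was performed by manual review rather than by keyword matching.

# Minimal sketch of the ASRS filtering pass (file name and column labels are
# assumptions; the real ASRS Excel export uses different headers).
import pandas as pd

# Load the downloaded ASRS search results (hypothetical file name).
reports = pd.read_excel("asrs_uas_search_2018-02-10.xlsx")

# Exclude reports concerning military UAS operations.
reports = reports[~reports["Mission"].str.contains("military", case=False, na=False)]

# Exclude reports filed by observers who were not part of the UAS operation,
# since human factors cannot be reliably coded from a third-person narrative.
reports = reports[reports["Reporter Role"].isin(["UAS Pilot", "UAS Crewmember"])]

# Reports containing only irrelevant content (e.g., personal disagreement with
# current regulations) were removed by manual review of the narratives.

print(f"{len(reports)} civil UAS reports retained for HFACS v. 7.0 coding")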
ATSB Analysis
Civil aviation mishap investigations in Australia are performed by the ATSB. Results are published in a summary database that can be filtered by operation type, occurrence class, aircraft model, and more (ATSB, 2018). To obtain reports of UAS mishaps and incidents, the database was f