{"id":12169,"date":"2020-08-07T07:24:05","date_gmt":"2020-08-07T14:24:05","guid":{"rendered":"https:\/\/origin-www.parsons.com\/?p=12169"},"modified":"2023-08-14T10:31:59","modified_gmt":"2023-08-14T14:31:59","slug":"qrc-technologies-focus-on-ai-ethics","status":"publish","type":"post","link":"https:\/\/www.parsons.com\/2020\/08\/qrc-technologies-focus-on-ai-ethics\/","title":{"rendered":"QRC Technologies Focus On AI Ethics"},"content":{"rendered":"\n

In our previous post, we looked at the technical details of our AI-guided spectrum operations approach. Today, in part two of our series, we will discuss our ethical approach to AI and how that approach shapes the end user\u2019s ability to trust the system\u2019s behavior.<\/p>\n\n\n\n

The Need for AI in Spectrum Operations<\/h3>\n\n\n\n

New communications technologies have made the electromagnetic spectrum an increasingly dynamic environment. The expansion of cellular data transfer capabilities, the adoption of wireless Internet of Things (IoT) sensors, and the widespread introduction of self-driving\/unmanned vehicles have produced a monumental increase in the number of systems and stakeholders that rely on wireless communication.<\/p>\n\n\n\n

Because of this growing complexity and dynamism, maintaining adequate domain situational awareness requires systems that are at least as dynamic and intelligent as the environment they monitor. Gone are the days when people could maintain that awareness by manually monitoring the spectrum for emitters.<\/p>\n\n\n\n

For these reasons, QRC is dedicated to applying artificial intelligence (AI)\/machine learning (ML) best practices and fundamental research to integrate ethically sound autonomous capabilities into solutions that characterize, control, and dominate the electromagnetic spectrum.<\/p>\n\n\n\n

What Is Ethical AI?<\/h3>\n\n\n\n

As AI\u2019s influence on our decision-making and our perception of the world continues to grow, the discussion and practice of AI ethics must be prioritized. Merriam-Webster defines ethics as \u201cthe discipline dealing with what is good and bad and with moral duty and obligation.\u201d[1]<\/a> When we apply ethics to artificially intelligent autonomous and decision support systems, it\u2019s important to realize that the moral duty, obligation, and accountability do not shift to the systems in question but remain with the human beings who create and use them.<\/p>\n\n\n\n

The subject of trustworthiness is an essential part of any conversation involving AI ethics. By itself, an autonomous system is a non-moral agent that can make decisions<\/em>. To avoid the negligence that results from ceding decision-making tasks to a non-moral agent, we must treat these technologies as extensions of the user\u2019s reasoning and will. In short, we must trust that the decisions these systems make are in line with the values of the individuals and organizations employing them.<\/p>\n\n\n\n

For this to occur, we must trust the soundness of five aspects of an AI system:<\/p>\n\n\n\n