
AI algorithms can gather and process data from many different sources to support decision making. In many circumstances an AI system can analyze a situation quickly and efficiently and identify the best course of action in a critical moment. Because ML-based AI learns from historical data, there is a danger that it will absorb biases present in the dataset used for training. Although an algorithm can neutralize biases that derive from human input, AI has limitations as a decision maker on an ethical and human level. Nevertheless, in decision making under pressure, AI algorithms can play an essential role in cooperation with humans. For the combination of human ethical judgment and artificial intelligence's rapid analytical capabilities to accelerate decision making, AI algorithms need to be understandable. If, as in the case of deep learning, the algorithm reaches a decision in an inexplicable way, there can be no effective human-algorithm collaboration.

S2-EX-AI-DED (Strategic Scenario EXplainable AI Decision Expert Doer) was developed as a proof-of-concept (POC) application to test the explainability of SHARP™ in a Command and Control context. The SHARP™ neural network enables the user to recover the training examples that determined each inference: this feature is fundamental for human-algorithm collaborative decision making.
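The idea of recovering the examples behind an inference can be illustrated with a generic nearest-neighbor sketch. This is only a minimal illustration of example-based explainability in general: SHARP™'s internals are not public, and the class name, distance metric, and track tags below are illustrative assumptions, not the actual product API.

```python
import math

class ExplainableClassifier:
    """Toy example-based classifier: every prediction is traced back
    to the stored training examples that determined it.
    (Illustrative sketch only; not the SHARP(TM) implementation.)"""

    def __init__(self):
        self.examples = []  # list of (vector, label, tag) tuples

    def train(self, vector, label, tag):
        # Store each training example with a human-readable tag
        # so it can later be shown to the operator as evidence.
        self.examples.append((vector, label, tag))

    def infer(self, query, k=1):
        # Rank stored examples by Euclidean distance to the query.
        ranked = sorted(self.examples,
                        key=lambda ex: math.dist(query, ex[0]))
        support = ranked[:k]
        # Majority label among the k closest examples.
        labels = [label for _, label, _ in support]
        prediction = max(set(labels), key=labels.count)
        # Return the decision AND the examples that determined it.
        return prediction, [tag for _, _, tag in support]

clf = ExplainableClassifier()
clf.train((0.0, 0.0), "friendly", "track #12 (friendly convoy)")
clf.train((5.0, 5.0), "hostile", "track #31 (hostile incursion)")
label, evidence = clf.infer((4.5, 4.8))
print(label, evidence)
```

The point of the sketch is the return value of `infer`: alongside the classification, the operator receives the concrete stored cases that produced it, which is what makes human review of the decision possible.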

©2024 Luca Marchese, All Rights Reserved

Aerospace & Defence Machine Learning Company

VAT: IT0267070992

NATO CAGE CODE: AK845

Email: luca.marchese@synaptics.org
