The Titanic sank 113 years ago, on the night of April 14-15, after striking an iceberg, a disaster in which human error played a decisive role. Today, autonomous systems powered by artificial intelligence (AI) could help prevent similar accidents. A key challenge, however, is ensuring that these systems can explain their decision-making to human operators, such as ship captains. This capability, known as explainable AI, is essential for building trust in autonomous systems and enhancing safety at sea.
Researchers from Osaka Metropolitan University’s Graduate School of Engineering have made significant strides toward an explainable AI model for ships. The model quantifies the collision risk posed by every vessel in a given area, a capability that matters most in increasingly congested sea-lanes. Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto developed the model so that it not only bases its decisions on numerical collision-risk values but also explains the rationale behind its actions. This transparency is essential for earning the trust of maritime workers and advancing the prospect of unmanned ships.
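The article does not detail how the model computes its risk values, so as a rough illustration only, here is a minimal Python sketch of one conventional way to quantify ship-to-ship collision risk from vessel kinematics, using the standard DCPA/TCPA measures (distance and time to the closest point of approach). This is not the Osaka team's method; the Vessel class, the d_safe and t_horizon thresholds, and the risk formula are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Vessel:
    x: float   # east position, m
    y: float   # north position, m
    vx: float  # east velocity, m/s
    vy: float  # north velocity, m/s

def cpa(own: Vessel, other: Vessel) -> tuple[float, float]:
    """Return (DCPA, TCPA): distance and time to the closest point of approach."""
    rx, ry = other.x - own.x, other.y - own.y      # relative position
    vx, vy = other.vx - own.vx, other.vy - own.vy  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 < 1e-9:                                  # no relative motion
        return math.hypot(rx, ry), 0.0
    tcpa = max(0.0, -(rx * vx + ry * vy) / v2)     # time of closest approach, s
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return dcpa, tcpa

def collision_risk(own: Vessel, other: Vessel,
                   d_safe: float = 500.0, t_horizon: float = 600.0) -> float:
    """Map (DCPA, TCPA) to a 0-1 score: higher means a closer, sooner encounter.
    d_safe and t_horizon are assumed tuning parameters, not published values."""
    dcpa, tcpa = cpa(own, other)
    d_risk = max(0.0, 1.0 - dcpa / d_safe)     # closer approach -> higher risk
    t_risk = max(0.0, 1.0 - tcpa / t_horizon)  # sooner approach -> higher risk
    return d_risk * t_risk

own = Vessel(0, 0, 0, 7.7)            # own ship heading north at ~15 knots
target = Vessel(2000, 3000, -5.1, 0)  # target crossing from starboard
print(f"risk = {collision_risk(own, target):.2f}")
```

In an explainable system of the kind described, the point is that such intermediate quantities (which vessel, its DCPA and TCPA, the resulting score) can be surfaced to the captain alongside the recommended maneuver, rather than the model emitting an unexplained steering command.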
The researchers’ work on explainable AI for ship navigation is documented in a study published in Applied Ocean Research. Professor Hashimoto emphasized that clarifying the basis for the AI’s judgments and behavioral intentions is key to earning the confidence of maritime professionals. By incorporating explainable AI into autonomous ship systems, the researchers aim to strengthen safety measures and pave the way for unmanned ships operating efficiently and securely on the open seas.