
Elon Musk was talking sense back in 2018 when he said that AI is more dangerous than nuclear weapons. Speaking at the South by Southwest tech conference in Texas, Musk drew much-needed attention to a point which is too often overlooked: responsible AI.

The Tesla founder wasn’t the first to warn of the potential dangers posed by the technology. Back in 2014 (an eon ago in tech terms), Prof Stephen Hawking warned that AI “could spell the end of the human race.” Hawking’s argument, though, centered on the rapid development of the technology and the possibility that, at some stage, its intelligence could surpass that of humans, meaning AI would re-design itself at an ever-increasing pace.

While this argument is perhaps better suited to the pages of a sci-fi novel, there is truth in both Hawking’s and Musk’s assertions. However, it’s not the technology itself we should fear, but the lack of regulation surrounding its use, and the ethical/moral drivers of those using it. In short: AI doesn’t kill people, people do.

Ethics should be at the center of AI development, and good governance, open and transparent practices, and ongoing reviews of regulation and standardization will be crucial. Fortunately, progress has been made over the past year. In April, the European Commission’s High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence (a sentence which itself sounds better suited to a sci-fi novel). The document stipulated the need for AI to respect applicable laws and regulations; respect ethical principles and values; and be robust from a technical perspective while taking into account its social environment.

Responsible AI

Simply laying down requirements is not enough, though. This guidance needs to be embedded in a legal framework that applies across borders. Anyone who asks ‘why’ need only look at China, where AI is being wielded as a tool for surveillance and racial discrimination. As for the question of ‘when’: it’s no use leaving this framework for some distant future, because AI isn’t sitting on the horizon – it’s in the here and now.

It’s also very much a driver of growth for UK-based AI start-ups and the businesses which adopt AI-capable solutions. As such, there’s an opportunity for the UK to become a leader not just in AI R&D (we saw the opening of the Thames Valley AI Hub recently, for instance), but also in its regulation and governance. A good place to start would be linking the development and use of AI to the UN’s Sustainable Development Goals and ensuring that responsible AI is a core part of every business’s CSR activity.

Where does IFS stand in this AI landscape?

IFS has embraced AI and we’re helping our customers demystify the technology and benefit from it. This can be split into four areas: improving how customers interact with your products and with you; optimization – taking complex problems and working out the best way to allocate resources; predictive maintenance; and learning from what has and hasn’t worked in the past to help businesses adapt for the future.
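To make the predictive maintenance idea a little more concrete, here is a minimal, purely illustrative sketch (not IFS’s actual implementation) of the underlying pattern: train a simple classifier on historical sensor readings labelled with whether an asset failed soon afterwards, then score today’s readings to prioritise inspections. The feature names, data and risk threshold below are hypothetical.

```python
# Illustrative predictive-maintenance sketch: flag assets likely to fail
# based on a classifier trained on historical sensor data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical readings: [vibration, temperature, operating_hours]
X_history = np.array([
    [0.2, 60, 1200],
    [0.3, 65, 3400],
    [0.9, 88, 7100],
    [1.1, 92, 8000],
    [0.4, 70, 2600],
    [1.0, 85, 7600],
])
# 1 = asset failed within the following month, 0 = it did not
y_history = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression().fit(X_history, y_history)

# Score today's readings and flag assets above a chosen risk threshold
X_today = np.array([[0.8, 86, 6900], [0.25, 62, 1500]])
failure_risk = model.predict_proba(X_today)[:, 1]
for asset_id, risk in enumerate(failure_risk):
    if risk > 0.5:
        print(f"Asset {asset_id}: schedule maintenance (risk {risk:.0%})")
```

In practice the value comes less from the model itself than from feeding it reliable operational data and acting on its output responsibly – which is exactly where the governance questions above come back in.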

And finally, we may not be setting policy and laying down legal frameworks, but we are doing our bit to build much-needed trust in AI. By demonstrating the value of AI-capable solutions for our customers, we’re helping workforces to work smarter, do their jobs better and derive value from the technology. And by automating processes we’re enabling our customers to free up time for teams to work more creatively – and do business more responsibly.

Do you have questions or comments?

We’d love to hear them so please leave us a message below.


Photo Credit: DuKai
