Artificial Intelligence & Machine Learning
Why AI Regulations Are Needed to Check Risk and Misuse
Panelists at ISMG's Southeast Asia Summit Discuss AI Risks, Ethics and Regulation

In Part 2 of a panel discussion at ISMG's recently concluded Southeast Asia Summit, three seasoned IT leaders delved into data privacy issues and spoke about the risks associated with AI models. They also touched on AI and data privacy regulation in Saudi Arabia, the UAE and India.
See Also: Endpoint Security Essentials for the C-Suite: An Executive's Digital Dilemma
The three panelists, who participated in the panel titled "Gen AI: Impressive Enough to Be Dangerous," were Gigi Mathew Thomas, group director - IT & Digital Transformation, Ittihad International Investment; Venkatesh Mahadevan, former CIO, Dubai Investments PJSC; and Awais Ahmad, director of information technology at SolexPLUS.
Thomas, Mahadevan and Ahmad acknowledged that it is crucial to address the legal and ethical aspects of AI before adopting the technology for business use.
Businesses are not the only ones exploring ways to harness AI and large language models; bad actors are also tinkering with the technology for malicious use cases, including deepfakes. They can exploit generative AI to write convincing phishing emails and even malicious code, and AI can assist hackers in finding vulnerabilities in an enterprise network.
"The world has started realizing the risks and dangers of AI. The bad guys will use ChatGPT and generative AI to stay ahead of the good guys," Ahmad said.
AI Risks
Adopting a new technology poses certain risks, especially if it has not been deployed before. That calls for risk mitigation strategies such as testing, sandboxing, proofs of concept and smaller steps - such as releasing a minimum viable product - before complete adoption.
Mahadevan believes there will always be risks and that we "amplify the risk" to a large extent today.
"Companies need to follow a framework and put together a risk mitigation panel, rather than focus on the risk itself. I insist that AI and the risk mitigation should become a part of the blueprint. And this is not a job for a CIO alone, it is a job for a CHRO, the risk manager, and for operations," Mahadevan said.
Deepfakes and the violation of privacy are hotly debated topics in the industry today. Thomas said deepfakes will lead to many scams, causing victims to lose a lot of money. The technology also violates individual privacy and poses a substantial risk at a personal level.
Deepfake technology uses a form of artificial intelligence called deep learning to create convincing video, photo or audio clips of a subject, which can be used in misinformation campaigns or to defraud or deceive relatives and friends. The fake clips are created by analyzing hundreds of original samples of the subject.
AI and data privacy laws, the panelists agreed, are needed to prevent the widespread use of deepfakes with malicious intent. The panelists cited initiatives taken by governments in Saudi Arabia, the UAE, India and the U.S. state of California.
AI and Data Privacy Laws
In April, the European Commission proposed the first EU regulatory framework for AI. Under the framework - now known as the AI Act and expected to be passed into law - AI systems used in different applications are analyzed and classified according to the risk they pose to users, with higher risk levels subject to stricter regulation.
The Kingdom of Saudi Arabia has taken several initiatives to ensure AI governance, regulation and data privacy, while its neighbor, the UAE, is also making similar moves toward regulation.
Last year, the Saudi Data and Artificial Intelligence Authority, or SDAIA, announced AI Ethics Principles for public consultation: fairness, privacy and security, humanity, social and environmental benefits, reliability and safety, transparency, and accountability and responsibility. SDAIA said the principles are a practical guide to incorporating ethics throughout the AI system development life cycle.
Saudi Arabia also has a personal data protection law, the PDPL, that regulates data usage, and it has drafted a new intellectual property law with a chapter devoted to artificial intelligence.
The UAE Council for Artificial Intelligence was recently formed to deliver the country's strategic goals for artificial intelligence. One of its eight objectives is to ensure strong governance and effective regulation through a supportive legislative environment.
Commenting on this development, Thomas said, "If emphasis is given to privacy, security and anti-discrimination, then the frameworks that these regulatory agencies are offloading are very closely met."
In Part 1 of this article, the panelists delved into the business use cases for generative AI that exist today. They agreed that companies can build better AI models by sharing ideas and models through a unified platform.