Look Beyond Big Tech to See the Real Value of AI

IT Leaders Are Advised to Launch Cross-Functional Projects for AI Maturity
Tarry Singh, CEO and founder, deepkapha.ai

Research and forecasts on future technologies have long predicted the emergence and importance of artificial intelligence (AI). Currently, organizations are at different stages of AI experimentation. Those deriving the most benefit from AI are already in the phase of industrializing its capabilities - they are not spending more but investing more intelligently.

AI is continuously evolving. AI-based systems can now not only accurately predict customer behavior and buying patterns but also detect cyberthreats. As a result, most detection and threat intelligence technologies are embracing AI and embedding it into their solutions. But the widespread adoption of AI also brings imminent risks.

Tarry Singh, CEO, founder and AI neuroscience researcher, and Dr. Anwita Maiti, visiting AI ethics researcher at AI startup deepkapha.ai, discuss apprehension about AI adoption, how CIOs and CTOs can manage AI risks, and the adoption of AI in the healthcare industry.

Edited excerpts follow:

Gartner says it will take until 2025 for 50% of organizations worldwide to reach the "stabilization stage" in AI maturity. Why?

Tarry Singh: Any transformative technology, if understood well by businesses, takes at least 10 to 15 years before being adopted at a reasonable scale. Twenty years ago, virtualization - which powers every cloud service today - was in a similar position. Cloud computing did not exist in the late 1990s, and virtualization was a niche technology. The first thing businesses encounter is the "wow" factor, because researchers and engineers deliver something interesting and breathtaking.

Virtualization has helped businesses immensely. It reduced infrastructure, energy and ecological footprints by almost 60% to 70%. But even today, there are large companies grappling with the nuances of virtualization.

Compared to that, AI is quite a black box. When speaking about it, I don't go too deep because it can be confusing and scary. But if you are able to explain its core tenets and inherent benefits to business leaders, they get excited.

To me, Gartner's prediction is optimistic. I would be happy if even 10% of organizations reach that level of maturity by 2025. The overall maturity will come when we move away from conglomerates such as Google, Facebook, Amazon or Netflix and shift the focus to Reliance Jio, Philips, BMW and others. These companies are dealing with real-world problems and manufacturing products that are AI-driven. The only way to bring about AI maturity is for companies to launch projects that are cross-functional and cross-departmental and that can prove their business value to business owners in their own language.

With every new and rising technology, there are associated fears. It applies to AI as well. Do you think those fears prevent CIOs from adopting AI at scale? How long can they afford to avoid implementing this technology?

Singh: That is not correct. I have been working with the board of the largest steel manufacturing company globally, and their thought process is mature and progressive. AI and ML come in handy for optimizing the production environment and for predictive maintenance. Our obsession should move from big tech to mature companies using AI in real-world scenarios. We are engaged in some fascinating projects in conventional industries, such as oil and gas companies grappling with deep geological surface problems. All of these are examples of mainstream AI usage at scale.

Every tech vendor has a story on AI. From ML to deep learning to advanced data processing and MLOps, everybody is talking about one thing or another. There seems to be a lot of clutter. How can we help CIOs and tech leaders declutter this space?

Singh: I do not entirely disagree, but there are fitting examples of AI usage that can help CIOs decide which way to go. Let me narrate an interesting example from the brewing industry. Brewing is a complex process. Capping the bottle while it is moving on a conveyor belt is a minor but important step. When robots press metal caps onto the bottles, minuscule particles of glass can break off and contaminate the beer, making it dangerous for consumption. A simple computer vision system can spot those particles so the affected bottles can be removed. It is a simple yet effective use of AI, and a project we worked on a couple of years ago. With that small intervention, the brewer is now producing and selling safe beer to its consumers, and the customer experience is improved.
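To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check Singh describes - flagging frames that contain small, bright fragments - using OpenCV. The threshold values, blob-size limits and file name are illustrative assumptions, not details of the actual brewery project.

```python
# Hypothetical sketch: flag bottle images that show small, bright blobs
# (possible glass fragments) so the bottle can be diverted for inspection.
import cv2

def bottle_has_suspect_particles(image_path: str,
                                 min_area: float = 4.0,
                                 max_area: float = 200.0) -> bool:
    """Return True if the frame contains small bright blobs worth inspecting."""
    frame = cv2.imread(image_path)
    if frame is None:
        raise FileNotFoundError(image_path)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Under backlighting, glass fragments tend to appear as very bright specks;
    # a fixed threshold keeps this example simple.
    _, mask = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    suspect = [c for c in contours if min_area < cv2.contourArea(c) < max_area]
    return len(suspect) > 0

# "bottle_frame_0042.png" is a placeholder file name for a line-camera frame.
if bottle_has_suspect_particles("bottle_frame_0042.png"):
    print("Divert bottle for inspection")
```

In production, a trained detection model and proper line integration would replace the fixed threshold, but the intervention remains small relative to the payoff Singh describes.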

It could be any company - for example, Bajaj Auto in India. Automotive manufacturing is the outcome of many processes coming together. Simple AI-enabled camera systems can augment the power of visual inspection in finding defects. This is the starting point for creating highly intelligent manufacturing operations and smart factories, which few are interested in. AI is swiftly moving into manufacturing operations via a computer that never tires and constantly watches every process with the same level of intensity, giving you a complete view of your operations. Isn't that digital transformation?

I have deep respect for Andrew Ng, the co-founder of Google Brain and former chief scientist of Baidu. He has built great AI companies, such as Landing AI and Drive.ai, that solve complex, real-world problems. Andrew always talks about small-data problems. It is important to understand, quantify and prepare small data before running any machine learning model. There are benefits across industries such as manufacturing, oil and gas, energy grids, and solar or wind energy.

AI risk management is a big concern when we deal with issues like ethics, biases and trust. Is it imminent? Do CIOs or CTOs need a proper AI risk management program?

Dr. Anwita Maiti: Sadly, people are not yet concerned about AI risk management. In most organizations, AI ethics is sidelined. Google fired two of its employees when they spoke up about the neglect of ethics. We are already aware of Meta's biases. EY was recently slapped with a $100 million fine after the SEC found its staffers cheating on an ethics exam.

An AI risk management model should certainly be considered. Every organization should set up rules and regulations for AI ethics. There has to be transparency, accountability and fairness in the pre-design stage of AI systems, and these should be carried through the design and deployment stages as well.

It is tough, but CIOs must make an objective attempt to train the people who develop AI algorithms. Models should be tested on data from a diverse set of people to remove biases and stereotypes. Besides computer and data scientists or engineers, there is a need to include social scientists in developing AI or ML models to help eliminate issues pertaining to ethics and biases.
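To illustrate the kind of diverse-group testing Maiti recommends, here is a small, hypothetical Python sketch that slices a model's predictions by a demographic attribute and compares accuracy across slices. The sample records, group names and the 5-percentage-point gap threshold are assumptions for the example, not part of any deepkapha.ai methodology.

```python
# Hypothetical sketch: compare model accuracy across demographic slices
# and flag the model when one group is served noticeably worse.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Placeholder evaluation records; in practice these come from a held-out
# test set annotated with the demographic attribute under review.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]

scores = accuracy_by_group(records)
gap = max(scores.values()) - min(scores.values())
print(scores)
if gap > 0.05:  # flag slices whose accuracy diverges noticeably
    print(f"Accuracy gap of {gap:.0%} across groups - review the training data")
```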

Before a new AI tool is introduced to society, there needs to be some pre-launch education. Launching AI tools without forewarning only breeds speculation, confusion and disharmony.

Another crucial point is that both technical and non-technical people associated with the development of AI should work in tandem from the pre-designing stage to mitigate risks that might occur later.

What is your take on the AI platforms or engines of big tech companies? How can CIOs make use of them without falling into the cost trap?

Singh: A lot depends on how and what you borrow from these platforms. You could take your data, train models on it on these platforms, and create your own AI system. The tools in these AI engines are objective and do not have inherent biases unless the data sets are primitive. When a primitive data set is used, the model starts showing lower accuracy in production than it did during training. This is known as model drift.
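As a rough illustration of the drift Singh describes, the hypothetical sketch below compares a model's accuracy on recent production samples against its training-time baseline and raises a flag when the gap exceeds a tolerance. The 10% tolerance and the sample labels are illustrative assumptions.

```python
# Hypothetical sketch: detect drift by comparing recent production accuracy
# against the accuracy measured at training time.
def detect_model_drift(baseline_accuracy: float,
                       recent_labels: list,
                       recent_predictions: list,
                       tolerance: float = 0.10) -> bool:
    """Return True when recent accuracy falls well below the training baseline."""
    if not recent_labels:
        return False
    hits = sum(int(y == p) for y, p in zip(recent_labels, recent_predictions))
    recent_accuracy = hits / len(recent_labels)
    return (baseline_accuracy - recent_accuracy) > tolerance

# Example: the model scored 0.92 at training time but only about 0.67 on the
# last production window, so retraining (or a look at the incoming data) is due.
if detect_model_drift(0.92,
                      [1, 0, 1, 1, 0, 1, 1, 0, 1],
                      [1, 0, 0, 1, 0, 0, 1, 1, 1]):
    print("Model drift detected: schedule retraining")
```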

I think we should turn this question around: "Why are CIOs and CTOs anxious about putting their data in cloud platforms?"

To prevent anxiety, it is important to know the pros and cons of these robust platforms. With them, you can ramp up capability quickly. I have had a wonderful experience with GCP and AWS. If you are equipped with a good team consisting of data architects, algorithm experts, ML engineers, ethics researchers, regulatory experts and social scientists, you can aim for excellent outcomes. The key challenge for oil majors such as ChevronTexaco, ExxonMobil and Shell Global is not ethics. Their challenge is whether they are investing enough in the right platform. You'd be surprised to know that the computational cost on these platforms is vastly different from what it was when they were using plain vanilla systems.

As the ML ecosystem expands, CIOs and CTOs will have to deal with a lot of moving parts. This demands huge computational and graphics processing unit (GPU) power. The compute resources needed on AWS or Google Cloud, once GPUs have to be fired up, are becoming increasingly expensive. It will suddenly start costing a lot of money: the data centers in Europe generate a lot of heat, the electricity bills Amazon or Google pay their grid suppliers will skyrocket, and a surcharge will be added, leaving you with an inflated bill. Researchers and scientists will keep adding resources - after all, they are not the ones signing the cheque. The CFOs will trigger the alarm, and your project might come under review.

If CIOs and IT leaders go in blindfolded when selecting these platforms and don't read the fine print, they are setting themselves up for failure. There is no built-in accountability for these cost implications yet. You must have a well-informed team with active foresight.
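To show the kind of foresight this requires, here is a hypothetical back-of-the-envelope sketch that estimates a monthly GPU bill, including an assumed energy surcharge, and flags the project when it exceeds a budget. The hourly rate, surcharge and budget figures are placeholders, not actual AWS or GCP pricing.

```python
# Hypothetical sketch: rough monthly GPU spend with an assumed surcharge,
# checked against a project budget before the CFO has to raise the alarm.
def estimated_monthly_gpu_cost(gpus: int,
                               hourly_rate_usd: float,
                               hours_per_day: float = 24.0,
                               days: int = 30,
                               surcharge: float = 0.15) -> float:
    """Rough monthly cost, including an energy/location surcharge fraction."""
    base = gpus * hourly_rate_usd * hours_per_day * days
    return base * (1 + surcharge)

budget_usd = 50_000  # placeholder project budget
cost = estimated_monthly_gpu_cost(gpus=32, hourly_rate_usd=3.0)
print(f"Estimated monthly GPU spend: ${cost:,.0f}")
if cost > budget_usd:
    print("Over budget - flag for review before scaling further")
```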

Before we close, would you like to share another example of AI's use in the healthcare industry?

Singh: While there are many examples in the healthcare domain, let me pick one subset: dentistry. There are approximately 1.6 million practicing dentists across the world. You would be surprised to learn that Rio de Janeiro has the largest pool of dentists. India has around 260,000 dentists, the largest number in the Asia region. Setting up a dental clinic is expensive, including the capital locked up in the apparatus.

A number of AI startups in the U.S., Israel, the Netherlands and Germany are beginning to enter this domain. Their AI-enabled diagnostic tools can take a 3D panoramic image and help dentists diagnose issues. Healthcare has many other examples, including the early detection of cancer by analyzing tissue samples. Large companies like GE Healthcare have invested heavily in AI for preventive healthcare. AI is now slowly entering the lifestyle industry.


About the Author

Rahul Neel Mani


Founding Director of Grey Head Media and Vice President of Community Engagement and Editorial, ISMG

Neel Mani is responsible for building and nurturing communities in both technology and security domains for various ISMG brands. He has more than 25 years of experience in B2B technology and telecom journalism and has worked in various leadership editorial roles in the past, including incubating and successfully running Grey Head Media for 11 years. Prior to starting Grey Head Media, he worked with 9.9 Media, IDG India and Indian Express.



