Spotlight: Gen AI Risks, AR/VR's Role in Warehouses and More
CIO.inc Editors Discuss Ethical Dilemma in AI Use, AR/VR Impact, LLM Transformation
Suparna Goswami (gsuparna) • November 21, 2023
In the latest episode of Spotlight, editors at ISMG's CIO.inc review this month's most important technology conversations with CIOs and tech leaders.
The editors - Suparna Goswami, associate editor; Shipra Malhotra, managing editor; Brian Pereira, senior director - editorial; and Rahul Neel Mani, vice president - editorial, discussed:
- The unknown risks of generative AI;
- The impact of AR and VR on warehouses;
- How HARMAN Digital Transformation Solutions built its own LLM for training.
Spotlight is a monthly video series where editors highlight topics that matter to the CIO community. Catch up on our previous episode, where editors discussed generative AI use cases in the healthcare and automotive industries, and how businesses are developing LLMs.
Transcript
This transcript has been edited and refined for clarity.
Suparna Goswami: Hello there, I'm Suparna Goswami. I'm associate editor with Information Security Media Group. I welcome you all to another episode of Spotlight, where our editors bring you the latest from the world of technology. The editors joining me today are Shipra Malhotra, managing editor; Rahul Neel Mani, vice president, community engagement and editorial; and Brian Pereira, senior director, editorial. Shipra, Rahul and Brian, welcome, and I hope Diwali went well.
Shipra Malhotra: Thank you. Yes, it was good.
Brian Pereira: It was fantastic, Suparna.
Rahul Neel Mani: Hi, Suparna. It was a great extended weekend for all of us.
Goswami: Yes. And we are still in that festive mood. But back to the discussion today. Rahul, let me start with you. You conducted a wonderful interview with Richard Foster-Fletcher of Morality and Knowledge in AI. There was a fantastic point that made me watch the entire interview: you both discussed how AI is unable to learn from social interactions, and the overall impact that has. Can you please explain it for our audience, Rahul?
Neel Mani: Indeed I will. But I'll take two steps back before I come to your point, Suparna, as always. On one hand, AI is being touted as one of the most revolutionary technologies; it has been equated with the invention of electricity. On the other hand, it is also said to be the most lethal one, even described as a threat to human existence. Keeping all of these developments in mind, first things first, I would like to announce that ISMG has launched a series of initiatives, including a dedicated website on AI called AIToday.io. It is a very significant shift in our content strategy, and it underlines the fact that AI is here to stay. Throughout 2023, we saw significant progress in the field of AI, specifically in generative AI, following the launch of ChatGPT late last year. With these advancements in the various forms of AI, especially generative AI tools, the once-familiar AI landscape has shifted tremendously, opening new areas of ethical concern and inquiry. From questions of how generative AI may exacerbate the digital divide to the potential for plagiarism, distribution of harmful content, misinformation, deepfakes and worker displacement, organizations are wrestling with the new ethical issues posed by wide-scale adoption of this technology. Ignoring or downplaying these issues comes at a cost on various fronts: reputational damage, harm to people, regulatory penalties and compliance burdens, financial losses, employee dissatisfaction and many others. In the past few weeks, we have seen notable developments. They might be just postures, but they are developments nonetheless. For example, the AI Safety Summit in the U.K. produced the Bletchley Declaration, which 28 countries, including the European Union, agreed to, committing to use AI responsibly and to build greater scientific collaboration to develop more ethical principles. In the U.S., late last month, we saw an executive order issued by the Biden administration that promotes the safe, secure and trustworthy development and use of AI. Even before that, 193 countries adopted the first-ever global agreement on the ethics of artificial intelligence, which was UN-promoted. All of these things appear very prominent and promising. However, they can also appear cosmetic. And that is what Richard Foster-Fletcher, who is the executive chair of Morality and Knowledge in AI, spoke about in his conversation with me. He says very categorically that platforms such as ChatGPT are predominantly trained on the English language, and that this language largely originates from the U.S., the U.K. and other developed English-speaking countries, produced by a male-dominated workforce. He also mentions that it inherently reflects a very Western-centric perspective, which raises concerns about its appropriateness and relevance for international users. At the same time, he dismissed the recent pledges made by the big tech corporations. That is why I say there is a visible inconsistency between the public commitments of these major technology companies and what they are actually doing.
For example, all of these companies - Meta, Google, Amazon, Twitter, Microsoft - have vigorously advocated for ethical AI in the development of their products and applications, even as they have curtailed their dedicated AI ethics teams. To give a few examples: Microsoft terminated its AI ethics and society team despite promoting these very principles, and even the U.K. government, which hosted the AI Safety Summit, recently disbanded the AI ethics advisory board that was part of its Centre for Data Ethics and Innovation. All of these dichotomies that we are living with have to be dealt with very carefully. It is therefore imperative for organizations to think about developing trustworthy and ethical principles for every emerging technology, and specifically for AI.
Goswami: Sure. And I remember he also mentioned that to make these big tech companies accountable, it is important that they hold a position at the UN, because so many of them - Google, for example, as you mentioned - have users globally.
Neel Mani: It is true. And we all know our dependency on the big tech companies, whether it is Google, Meta, Amazon or Microsoft, and the kind of data we have already voluntarily given to these organizations, on the basis of which they are training their LLMs. They are capable of injecting biases. It is therefore important for nation-states and various global organizations to keep a check. I'm sure organizations worldwide are treading this path very carefully, and that is why we are seeing regulations and laws being developed - starting with Europe, which has very stringent laws on both the adoption and the development of AI, and extending to the Americas and some APAC countries. We heard our minister talking about coming up with AI regulations and laws within the next six to eight months. All of these developments are part of a process in which everyone recognizes that ethics and responsible AI are going to be a big challenge for these nations.
Goswami: Sure. And before I move to Brian, one last quick question for you, Rahul. He mentioned the known risks, such as misinformation, but he also spoke about the unknown risks, which he would like businesses to take into account as they embark on this AI journey. So quickly, can you mention these unknown risks for our listeners?
Neel Mani: See, if I talk in the larger context of deploying AI, or any technology for that matter, the known risks are the ones that are taken into account while the development process is on. However, the unknown risks come into play when malicious actors start harnessing the power of the technology in ways we never account for. Who could have imagined that artificial intelligence could be used for developing weapons and waging war with autonomous weapons? And I'm not talking only about weapons; I'm also talking about the weaponization of technology in the business domain. Those are the risks that are generally not accounted for, because our thought process does not take us that far. That myopic approach has to change, and that is what Foster-Fletcher was emphasizing. There are multiple such unknown risks in this domain - I could go on - but whatever I have said should be seen in the light of this conversation.
Goswami: Great, Rahul. Thank you for sharing your perspective. Moving on - Brian, I always enjoy reading your articles and stories. You presented an interesting case study on how HARMAN DTS built an LLM for training. So when adopting LLMs, what are some of the options businesses have? And can you tell us the pros and cons of each?
Pereira: Sure, Suparna. Today, businesses are under pressure to come out with their own LLMs, to innovate with artificial intelligence and to infuse AI into their products. They have two paths to achieve this: they can either build on a public LLM and train it using some dataset, or they can take the long, winding route and create a private LLM. There are pros and cons to each approach. With public LLMs, the pros are a faster time to market and the ability to build with minimal cost and resources. The con is that a public LLM is trained on public data, so it may not be able to give you responses specific to your business, because it is just using data that is available on the internet. With a private LLM, I will start with the cons: it involves investment in resources, it takes time to train the model, and it requires high-quality data. On the pros side, if the quality of the data is good and the model is trained well, it can give very specific and very accurate output with minimal hallucination. So those are the pros and cons of both.
Goswami: So Brian, can you share HARMAN's approach to developing the LLM called HealthGPT? And any particular reason they chose healthcare?
Pereira: My story was titled "Private LLM Development: HARMAN DTS' Journey in Healthcare." And you will find the story both on the CIO.inc website and on our new AIToday website. For the story, I spoke with Dr. Jai Ganesh, who is the chief product officer, Digital Transformation Solutions, at HARMAN in Bengaluru. I asked him why healthcare first. He says it is one of the most critical industries and one that truly requires such solutions. Although HARMAN has six verticals, they chose to start with healthcare, and within healthcare with clinical research data, and specifically with breast cancer, which is a very important area that requires a lot of attention. The approach they took was to build a private LLM, and they took a very unique approach to do this in the shortest time. They took an open-source model called Falcon 7B, which was developed in the UAE, and they trained it on publicly available clinical trial datasets. There are sites such as ClinicalTrials.gov that have highly accurate datasets available for public use. The team started with a sample dataset comprising 100 to 200 queries, which was subsequently expanded to produce 250,000 training records using various techniques. The team then scaled down from 250,000 records to a narrow dataset of 50,000 records specifically for breast cancer, validated it and fine-tuned Falcon 7B on that dataset. I mentioned that there is a tremendous investment involved. The DTS team has a mix of data analysts, data engineers, business experts, visualization experts for the UI and even doctors, because doctors need to understand this data. So it is a tremendous amount of investment, and they faced certain challenges since they were doing this for the first time. For one, healthcare is a highly regulated industry, so they had to ensure no PII was included in the datasets, or remove the PII where it appeared. They also faced the challenges inherent to these models: accuracy is an issue, and the models tend to hallucinate - they tend to make things up. So this team of experts conducted hundreds of experiments on the model and continually refined it until they achieved 95% accuracy in its output. In doing so, they even developed their own testing frameworks and testing tools, so a lot of innovation happened at the same time. Today, they call it HealthGPT, and it is capable of addressing complex queries in three clinical trial areas - breast cancer, immune diseases and heart diseases. HARMAN next wants to use its framework and testing tools to develop LLMs for other industries, such as manufacturing, retail, hospitality, communications and software. So this was a tremendous innovation coming out of Bengaluru, India, and it was made for the U.S. market, where HARMAN has its customers, for the U.K. and, of course, for India. A proud achievement for Indian innovation and Indian teams as well.
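For readers who want a more concrete picture of the kind of workflow Pereira describes, below is a minimal, illustrative sketch of fine-tuning the open-source Falcon 7B model on a small, PII-free set of clinical-trial question-and-answer records. This is not HARMAN's actual code: the dataset file name, the prompt format, the hyperparameters and the use of LoRA adapters via the Hugging Face transformers, datasets and peft libraries are all assumptions made for illustration.

```python
# Illustrative sketch only - not HARMAN's implementation. Assumes a GPU machine with
# transformers, datasets, peft and accelerate installed, and a hypothetical JSONL file
# of curated breast-cancer Q&A records (PII already removed) derived from public
# clinical-trial registries such as ClinicalTrials.gov.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

MODEL_NAME = "tiiuae/falcon-7b"  # the open-source model named in the interview

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Falcon has no dedicated pad token

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# Train small LoRA adapters instead of updating all 7B parameters (an assumption;
# the interview only says the model was fine-tuned).
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Hypothetical dataset: one JSON object per line with "question" and "answer" fields.
dataset = load_dataset("json", data_files="breast_cancer_trials.jsonl")["train"]

def tokenize(example):
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    tokens = tokenizer(text, truncation=True, max_length=512, padding="max_length")
    # Compute the loss only on real tokens, not on padding.
    tokens["labels"] = [tok if mask == 1 else -100
                        for tok, mask in zip(tokens["input_ids"],
                                             tokens["attention_mask"])]
    return tokens

dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="healthgpt-sketch",
                           per_device_train_batch_size=2, num_train_epochs=3,
                           learning_rate=2e-4, fp16=True, logging_steps=50),
    train_dataset=dataset,
)
trainer.train()
model.save_pretrained("healthgpt-sketch")  # saves only the small adapter weights
```

A team following this pattern would typically also hold out a validation set of expert-reviewed questions and keep iterating on the model until answer accuracy meets its target, which echoes the hundreds of experiments Pereira mentions.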
Goswami: Indeed, a proud achievement. Thank you so much for sharing that, Brian. And Shipra, your interview with the co-founder of GridRaster touched upon the impact of AR and VR on warehouses. So can you explain the role of AR and VR?
Malhotra: So, moving away from artificial intelligence. Before I get into the role of AR and VR in warehouse automation, I want to give an overview of the drivers for automating warehouses. What we're seeing is that warehouses are shifting from reactive to predictive operations, leveraging data to anticipate customer needs. That leads to a more sustainable and efficient system, with significantly reduced returns and higher accuracy in fulfilling orders, and it is driving the need for automation. In addition, labor shortages and the need for cost-efficiency are big drivers. Industries are vying for talent, and automated systems in warehouses can help bridge the gap by taking over repetitive and mundane tasks. This not only elevates the role of human workers but also ensures that warehouses can maintain high productivity levels despite workforce fluctuations. Now, coming to AR and VR specifically and how they are enabling warehouse automation: it is the result of a convergence of technologies, not just AR and VR. These technologies include artificial intelligence, AR, VR and cloud, among others. As warehouses evolve, AR and VR are not merely auxiliary; they are becoming the central interface for human-machine interaction, which is reshaping traditional warehouse operations. AR and VR are significantly enhancing training protocols at companies like Walmart and other big retailers that depend on very large warehouses. Dijam Panigrahi, co-founder of GridRaster, explained that VR in particular is being used to train employees in a synthetic yet realistic environment that closely mirrors actual warehouse conditions. This method of training has proven very effective, with retention rates soaring to almost 90%, compared to the roughly 10% retention rate associated with traditional video-based training. Such immersive training is instrumental in preparing workers before they ever step onto the warehouse floor, equipping them with the necessary skills in a fraction of the time and with greater precision. Augmented reality, meanwhile, takes the lead in real-time operational guidance, directing workers through their tasks with visual cues and overlays that streamline the picking and placing process. This guidance ensures that tasks are completed accurately, reducing the likelihood of errors that could lead to returns or wasted resources, thereby enhancing efficiency. For retailers such as Walmart and Amazon, given the scale at which they operate and process orders, avoiding those returns and wasted resources translates into savings that can run into millions of dollars. Panigrahi also spoke about the role of AR and VR in robot training. This is very interesting: by creating a virtual replica of the warehouse environment, companies can train robots in a risk-free setting instead of putting them onto the warehouse floor right away. The simulation allows for accelerated learning and debugging of robotic operations before the robots are introduced to the actual warehouse floor, and the data generated from these virtual environments is invaluable for fine-tuning the robots for optimized performance.
So, in all, he spoke about how AR and VR are integral to the future of warehouse operations. These technologies may not currently be used at the scale we envisage, but they will be integral to future operations, going beyond current uses to enable a predictive, efficient and error-minimized logistics ecosystem. And as I mentioned earlier, these technologies will not operate in isolation; they will ultimately converge with other technologies such as AI to redefine the warehouse experience and set new standards.
Goswami: As of now, what are the challenges when it comes to implementing AR and VR for warehouses? And how are they addressing these challenges?
Malhotra: So the big hurdle in AR/VR adoption is the bulkiness of the headset. That is where, most of the time, things come to a standstill. The physical design of AR and VR hardware, however, is undergoing a significant change. The bulkiness and discomfort of headsets, which have been a barrier to mainstream and widespread use, are slowly giving way to more comfortable, less bulky headsets. One reason is that a lot of the processing that used to happen on the headset is now happening in the cloud. One of the reasons headsets are bulky and heavy is that they need to house the processor, storage and other components on the device itself. If a lot of the processing happens in the cloud, it does away with the need to put all of that in the headset. This is something currently being worked on, and going forward we are going to see more user-friendly devices. However, it also means latency needs to be kept low when processing data in the cloud and moving it between the cloud and the headset.
Goswami: Great, Shipra. Brian, you want to share something here?
Pereira: Just to add to Shipra's comments about the headset - I've been tracking headset development for some time. You may have read that Meta and Ray-Ban have a collaboration to create a very sleek headset that looks like a pair of Ray-Ban glasses and hardly looks like a headset at all. I think that's the way headsets will go next year. There are others looking at this technology, like Microsoft with HoloLens, and Apple will launch the Vision Pro in January or February. It's a bulky headset because it packs so many sensors and cameras into the device, with a lot of processing happening on the headset itself. But as Shipra mentioned, some of that processing can be offloaded to the cloud. Then there is also the issue of latency, so a bit of edge processing still needs to happen on the headset itself. So it's a challenge to get the form factor down. You don't want everyone walking down the street wearing bulky, extremely uncomfortable headsets, and for a manufacturing or industrial environment, a sleeker headset is definitely the better choice.
Goswami: Thanks for the information.
Neel Mani: Sorry, this is somewhat unrelated. The headset is one kind of gadget, but look at what we witnessed just a week ago: the Ai Pin launched by Humane, a company formed by former Apple executives and engineers. This is the kind of transformation we are looking at - non-invasive technology that might make the handset, not the headset but the handset, redundant very fast. And it is a classic case of OpenAI's ChatGPT enabling the whole technology. I just wanted to share that.
Goswami: Exciting times ahead. Thank you all for the information and for sharing a bit about your interviews and articles. But it's time for our question of the day, and it's an easy one this time. Last month, CIO.inc completed a year - congratulations to you all. So what have been your learnings or takeaways over the past year, whether from your interactions with CIOs or your exposure to some of these technologies? I would love to hear your thoughts.
Neel Mani: Congratulations to all the team members of CIO.inc on completing one year. It was very promising from the perspective of generating good, exclusive, global content. That was a task and a challenge, and I think we came out of it with flying colors. So congratulations to all my team members, on screen and off screen. As far as our conversations with CIOs are concerned, I will keep the shiny objects separate from the realities. The reality is that these shiny objects need to make their way into their technology infrastructure, but CIOs are treading the path with utmost caution, including with the newest shiny object, generative AI. There is a buzz; however, there is unanimity of thought that while this technology moves up the maturity path, they are all observing with caution which aspects of it could be dangerous for their organizations. There is a dichotomy between the cybersecurity folks and the technology folks, who are always at war - not always, let me correct myself, but most often - over how soon they can launch a product based on new technologies, vis-à-vis the cybersecurity guys playing devil's advocate and saying, no, this technology needs to be tested more before it is productized. That aside, I think one aspect that remained constant and dominant is how to create a universal digital footprint for their organizations. How pervasive can digital be across all functions, be it manufacturing, supply chain, services, front-ending or customer experience? That, to me, was probably the dominant theme of the last year.
Goswami: Great. Shipra?
Malhotra: So, two things. One was why it's important to catch trends early on. Of course, there was a big inflection point when OpenAI launched ChatGPT, and while we were reporting on and writing about ChatGPT, we realized the story goes far beyond ChatGPT. We needed to start talking about generative AI and LLMs - the broader story - and not confine ourselves to ChatGPT alone. That's when we also started tracking generative AI and LLMs, as well as their use cases in different industries. So we have done stories around generative AI in healthcare, media and entertainment, and manufacturing. All in all, I think we are already looking ahead to what comes next for the technology, and hopefully we will continue to do that at a faster pace. Secondly, what I've seen is that a lot of CIOs and technology leaders expect to hear stories about what CIOs globally are doing. They want use cases; they want to read case studies, not just what the technology is about. For instance, a CIO in the manufacturing sector wants to know how peers, say in Europe, are planning to implement a particular technology or how they have already implemented it. What were the challenges? What use cases have they implemented, and how are they benefiting? So they don't just want stories in the air; they want stories on the ground.
Goswami: Fantastic Shipra. And Brian, would love to hear your views here.
Pereira: Firstly, my congratulations to the CIO.inc editorial team and the associated production teams at ISMG, who work very hard behind the scenes - they have done some amazing work in the first year. When I spoke to CIOs during the year, they generally said that tech is evolving at a very fast pace and that it is a challenge for them to keep up with it all. It also means explaining new technology like generative AI to business leaders in easy-to-understand terms and then preparing their organizations to adopt it. Personally, it's a challenge for me to keep pace as well. But we have been learning about technology all through the years, so for me it's continual learning - reading reports about AI every day, watching lots of videos, and now even pursuing a course on AI by Andrew Ng to keep myself up to date. Learning something new every day is truly exciting.
Goswami: Fantastic, Brian. And I'm sure keeping up with the pace at which technology is evolving is a challenge for all of us. Thank you, Rahul, Shipra and Brian. Until next time, stay tuned to the CIO.inc and AIToday.io websites to read the latest on technology and AI.
Malhotra: Thank you, Suparna.
Neel Mani: Thank you so much.
Pereira: Thank you.