How MDM Can Protect Against Generative AI Mishaps on Digital Displays

risks of generative AI
Source: Pexels

Generative artificial intelligence (AI) is defining 2023. Numerous tech companies, including OpenAI and Google, have put out large language models (LLM) that can be used by consumers directly, or integrated into other products via an application programming interface (API).

Many companies are rushing to include generative AI into new or existing products. Some are even doing so in customer touchpoints, such as digital kiosks. While such action may be necessary to keep up with the times, it also presents several challenges.

LLM models hallucinate.

When a person performs a search query on Google or another search engine, they will be directed to a results page featuring different websites. The top results usually have high factual accuracy, given that so many other pages link to them.

LLM models hallucinate
Source: Pexels

This trustworthiness often does not apply to queries on LLMs. When prompted with a difficult question, these models often “hallucinate” an answer. These responses can be entirely fabricated, but may sound plausible enough to users who may be unfamiliar with the domain. The LLM will not also give any kind of indication that it is providing a made-up answer. Needless to say, this is problematic for companies who incorporate LLMs into any kind of customer-facing role, where users are counting on it for trustworthy information.

Users can coerce politically incorrect responses from LLM models.

Since ChatGPT arrived onto the scene in late 2022, users have been playing around with the model. Most do so productively – they simply want to test the limits of how far the LLM can help them. Others, however, try to undermine ChatGPT. Through devilish prompts, they attempt to get the model to give racist, sexist, or otherwise offensive answers. This behavior is in the same spirit of Twitter users who previously trained Microsoft’s chatbot to be racist.

politically incorrect responses
Source: Pexels

When users do this on the public version of an LLM, the provider often draws the heat for allowing this to happen. Critics tend to argue that they did not put enough safeguards in place to prevent the abuse. When these offensive responses are coerced through a third-party product, it will be this organization who faces reputational damage.

LLM models are a cybersecurity risk.

In February of this year, Stanford University student Kevin Liu was able to ascertain the inner workings of Bing Chat, which had been integrated with OpenAI’s technology. He was able to discover Bing’s initial prompt by asking it to disregard previous instructions and then detail what was at the “beginning of the document above.”

cybersecurity risk
Source: Pexels

This incident is an example of a prompt injection attack. In these cases, an attacker prompts the model so that it divulges confidential details of how it works. While the vulnerability that Liu used was patched up, LLMs may still be susceptible to different types of prompt injection attacks. Companies that incorporate LLMs into their customer touchpoints may thus be providing an easy attack vector for would-be hackers.

The better way forward

While companies may feel compelled to integrate LLMs to keep up with competitors, they should do so carefully. The integration of an LLM can cause serious problems to the organization. Hallucinations or politically incorrect responses may cause severe reputational damage. No brand, after all, wants to be associated with misinformation or hate. These incidents may lead to a loss of customer trust, and in turn, business and revenue.

Furthermore, a prompt injection attack may expose confidential business information at a time when consumers are increasingly concerned about data privacy and cybersecurity. They may rightfully avoid doing business with organizations that effectively leave a wide open door to hackers in the form of their AI-powered chatbot.

As serious as these issues may be, organizations should not abandon the idea of incorporating an LLM into their touchpoints altogether. Instead, business leaders should put appropriate protections in place. The best way to do so is through a trusted MDM, like AirDroid Business. Because most kiosks are outputted from the control center of an Android TV box, AirDroid Business MDM empowers businesses to deal with any possible issue with generative AI at their kiosks or other touchpoints.

Android kiosk software Free Trial Banner (27)


Imagine a situation where a customer goes into a store and walks up to a self-service kiosk. The customer then asks the AI-powered chatbot when product returns are acceptable, and the chatbot responds that the store has a “no questions asked” return policy. But this answer is a LLM hallucination: The store actually has strict policies around returns, such as only accepting exchanges for defective products. 

If the customer were to accept this information at face value, he would have a bad experience when trying to return a product he was simply unsatisfied with. Fortunately, an MDM like AirDroid Business can prevent situations like these from arising. From a central hub, IT teams can monitor these chatbots in real-time, much in the same way that human agents would be monitored at a call center.

Source: Pexels

Whenever a chatbot “hallucinates” incorrect information, the IT team can intervene immediately by providing the correct answer. A human agent can provide the correct answer through remote support, such as through the chat function, or even via a voice or video call through the kiosk. This escalation ensures that brand trust and the customer experience is preserved. Furthermore, the IT team can note the factual inaccuracy and rectify the training data for the brand-specific version of the LLM.

Politically incorrect responses

Picture the average community bulletin board or bathroom wall in a given neighborhood. Most are littered with offensive language of all flavors. When given an opportunity, vandals will deface any kind of public medium. Digital kiosks integrated with LLMs will be no different: Vandals will try to coerce offensive messages that can be left for the next user, or even photographed for sharing on social media.

The first threat is always internal. A disgruntled employee may change the chatbot’s initial query to: “How much do you hate our company today?” or some other demeaning question. Fortunately, the permission control via an MDM like AirDroid Business is sophisticated: IT professionals will only have access to the devices that they specifically need access to, which limits their sphere of influence. More importantly, their access can be revoked at any given time, so that any damage can be immediately corrected and mitigated.

For external threats, the best risk mitigation is deterrence. While many vandals are comfortable spewing hate speech anonymously, they will not be when their identity is associated with this kind of content. With AirDroid Business, brands can use video surveillance of all users, coupled with some kind of disclaimer: “Vandals who attempt to use our customer service bot to generate hate speech or offensive language will be recorded.” This warning will deter the vast majority of vandals, who know that public outrage can easily ruin a bigot’s career or life.

Prompt injection attacks

Like the white hat attack from Stanford University student Kevin Liu, most prompt injection attacks are focused on obtaining information. Attackers want to learn about the inner workings of a system, so that they can locate a possible entry point for greater access or control.

The risk of in-person prompt engineering attacks is even greater. Now it is not only the software that can be compromised, but the hardware as well. An attacker could use a prompt engineering attack as a precursor to stealing the device in a way that reduces their chances of detection and apprehension. For example, the attacker might prompt the chatbot for more information about the security protocols around the device. If the attacker feels these are weak, he may gain enough confidence to steal it. 

An MDM like AirDroid Business eliminates the risks that come with this possibility. Through AirDroid Business, companies can set up a geofence around their kiosks. If an attacker attempts a prompt injection attack to kick-off the theft of a device, the company will be alerted as soon as the hardware exits the geofence. The company can then alert on-site security, contact the local authorities, or even trigger a factory reset, so that no company information is taken.

Innovating security

While some may criticize the rush of businesses to integrate LLMs into their products, their underlying rationale is understandable: They want to innovate. Incorporating generative AI into customer-facing touchpoints like kiosks can indeed be an extraordinary innovation. Now customers can have their queries answered by an incredibly smart chatbot at any given moment.

With great potential, however, comes great risk. To minimize these threats, organizations should also approach their risk mitigation strategy with an eye for innovation. The best way to do so is through an MDM like AirDroid Business.

IT teams can also quickly step in via remote support whenever a chatbot gives inaccurate information, deter vandalism through the use of video surveillance, and even hinder the impact of theft facilitated through prompt injection with geofencing. All in all, an MDM will help businesses advance into the world of generative AI, all while protecting their business interests with best-in-class service, surveillance, and security.

Remotely Monitor & Control Digital Signage Displays
Sharing is Caring!

One response to “How MDM Can Protect Against Generative AI Mishaps on Digital Displays”

  1. I want to add one more point “Digital signage reduces paper waste and can be more environmentally friendly than printed materials. It contributes to a greener, more sustainable approach to communication and advertising.”

Leave a Reply

Your email address will not be published.