Generative AI Business Use Cases: Navigating a New Frontier
It’s long been known that adopting AI tools saves time and money by automating repetitive tasks. Now, the public release of tools like ChatGPT and the generative AI technology behind them has opened a new era and a fresh floodgate of possibilities.
“Generative AI and tools like ChatGPT offer a path to reducing and eliminating repetitive, time-consuming — and ultimately soul-sucking — work,” says Joyce Mullen, CEO and president of Insight Enterprises. “This will then free us, and I mean really free us, to focus on work that drives innovation.”
Insight architects quickly started developing use cases for generative AI. By leveraging ChatGPT’s API, our experts created an internal-use instance dubbed InsightGPT.
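The article doesn’t share how InsightGPT is built, but a wrapper on the API behind ChatGPT typically starts with a single chat-completion call. Below is a minimal sketch using the openai Python library; the model name, prompts and assistant behavior are illustrative placeholders, not Insight’s actual configuration.

```python
from openai import OpenAI

# Minimal sketch of calling the API behind ChatGPT (openai Python library, v1+).
# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are an internal assistant for teammates."},
        {"role": "user", "content": "Turn these rough notes into a short, shareable memo: ..."},
    ],
)

print(response.choices[0].message.content)
```

An internal tool like InsightGPT would wrap calls like this behind company authentication and its own interface.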
We’ve since loaded this tool with internal data, and our teams have been busy finding new uses. We generate reports, educate each other on our capabilities, run application tests, inspire each other through our communications and more.
As a writer, I’ve personally used InsightGPT to help me generate blog ideas, identify search engine optimization keywords and organize my messy notes into readable and shareable memos. By streamlining these sometimes tedious tasks, I’ve freed myself to focus on backlog projects and help other teams with their workloads.
Despite its ease of use, generative AI doesn’t simply solve problems and deliver immediate gains out of the box, Mullen explains.
And I couldn’t agree more.
Human involvement and skill are still necessary. She points out that we must augment the tool with our own innovative thinking to drive real results.
Integrating generative AI into everyday processes — from marketing, sales and customer service to more technical roles like software development — will have huge benefits, according to Mullen.
“If a business has a strict budget and a large backlog for any type of business improvement, whether it’s for internal IT systems or customer service, the organization can work through that and accomplish their goals a lot faster when they’re using generative AI,” Mullen says.
It’s not just Insight leaders who are excited about generative AI.
Insight recently commissioned The Harris Poll to survey full-time directors and above at organizations with 1,000+ employees. These business leaders revealed whether they’d adopt generative AI technology and how it might impact their business. The survey found:
81% report their organization has already established or is working to implement a policy around using generative AI.
72% plan to adopt generative AI in the next three years to improve employee productivity across the organization.
90% see that a wide range of roles — from data analyst and software developer to financial operations and communications — will be supported by generative AI.
The results are eye-opening, revealing the impact this technology will have on the workforce and enterprise. One critical question remains: Just how does a business adopt and implement this technology effectively (and safely)?
Generative AI became popular in fall 2022 after OpenAI released ChatGPT — which stands for Chat Generative Pre-Trained Transformer. Since then, Google and Microsoft have released generative AI applications, Bard and Bing Chat, respectively.
The common thread is that these tools use generative AI, a form of artificial intelligence that gathers information, then outputs something newly created that’s on par with human-created content. These outputs can take the shape of text, images, graphs and just about everything in between. In simple terms, generative AI uses large troves of data to create unique content.
This contrasts with traditional AI approaches like predictive analytics and scripted chatbots, where outputs are limited to pre-defined text and responses. Your average website chat tool uses these forms of AI.
Generative AI, on the other hand, provides outputs that are completely unique. These outputs are the result of a Large Language Model (LLM), a neural network trained on vast amounts of unlabeled text to learn the meaning of, and relationships between, words and phrases.
As an example, ChatGPT takes your input, or instructions, then processes and interprets what is meant. It then creates an output based on what you request. This is all done in seconds. There are many possible outputs, from the fanciful — like asking ChatGPT to write a short story about a flower that goes to a coffee shop — to the more business-oriented, like having it write a blog post about your business or write code for your application.
The technology is so sophisticated that Microsoft researchers believe it’s showing signs of human reasoning.2 OpenAI hopes that its technology serves as a stepping stone to create what is called artificial general intelligence — meaning AI that is as smart as a human.3
It can be tempting to pursue generative AI at full speed. But organizations must be cautious in their adoption to ensure success.
Experts across industries have many privacy questions about generative AI and ChatGPT. What does it mean for this kind of tool to access vast amounts of corporate and personal data?
Potential for misuse is abundant and includes plagiarism, mishandling of private information and intellectual property and — ultimately — fraud.
The concern is so large that the Federal Trade Commission recently issued a warning about malicious use of these tools as well, and Italy even outright banned ChatGPT.1
Then there are the standard business concerns around adopting any new technology. What technology needs to be in place first? Who needs to be involved? And how do you get end users and teammates to use the technology effectively?
While there is no one right way, the team here at Insight has gained a wealth of expertise around effective adoption of this technology.
As early buzz was building for generative AI, many folks at Insight, including Jason Rader, vice president and chief information security officer, were equally amazed and concerned about the technology. Stories of misuse quickly made the news — especially ones where businesses and individuals provided confidential information to the public version of ChatGPT.
“Providing private data to an uncontrolled model is a recipe for disaster, but never giving teammates access isn’t the way to go,” says Rader.
The public version of ChatGPT uses any information provided as an input to help further improve the model. In short, your private data isn’t really private.
Although you can turn that setting off, there’s a better option than policing employee use of the public version. An internal version of ChatGPT gives you control of your data, which is the path Insight took by creating InsightGPT. These internal versions connect to a cloud-hosted instance of the model through an API.
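For a private instance along these lines — such as a deployment in the Microsoft Azure cloud, which is how Insight describes hosting InsightGPT below — the main difference is pointing the client at your own endpoint and deployment rather than the public service. A hedged sketch; the endpoint, deployment name and API version are placeholders, not Insight’s setup.

```python
import os

from openai import AzureOpenAI

# Connect to a privately hosted model deployment instead of the public ChatGPT service.
client = AzureOpenAI(
    azure_endpoint="https://your-resource.openai.azure.com/",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",  # example API version
)

response = client.chat.completions.create(
    model="your-gpt-deployment",  # the deployment name created in your Azure resource
    messages=[{"role": "user", "content": "Draft a status update from these bullet points: ..."}],
)

print(response.choices[0].message.content)
```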
“We believe that private versions of this technology, hosted in the Microsoft Azure cloud and supported by our incredible technology teams, are the right solutions for Insight and our clients,” says Suma Nallapati, Insight’s chief information officer. “This will ensure we are always prioritizing the safety and privacy of our teams and clients.”
When using a private instance of the tool, an organization can mitigate privacy and security concerns while opening the door to innovation. You can use the baseline of information that a tool like ChatGPT is built from — which is a vast trove of data from the internet. Then, you augment with your private corporate data, which stays safe within the internal systems that you control.
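The article doesn’t spell out the mechanism, but one common way to augment a model with private corporate data, without handing that data over for training, is retrieval-augmented generation: fetch the relevant internal documents at question time and include them in the prompt. Here is a toy sketch, assuming a `client` configured as in the earlier examples; the sample documents and the naive keyword matching stand in for a real, access-controlled search index.

```python
# Toy retrieval-augmented generation; all data and names here are illustrative.
internal_docs = [
    "Travel expenses over $500 require manager approval before booking.",
    "New teammates receive their laptop and accounts within five business days.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap; a real system would use a search or vector index."""
    words = set(question.lower().split())
    return sorted(
        internal_docs,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )[:k]

def answer(client, question: str) -> str:
    """Build a prompt that pairs the user's question with retrieved internal context."""
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # or your private deployment name
        messages=[
            {"role": "system", "content": "Answer using only the provided company context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Because the documents are only ever sent to an instance you control, the corporate data stays inside your own environment.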
But while many risks are negated, simply plugging the API into your system and calling it a day isn’t enough.
“People might still use generative AI at home with a private account, so we created something similar to an old school social media policy,” Rader says. “We’ve now been able to experiment safely, starting small and growing from there. Our teams across the business are finding many new ways to save time, whether it’s from sending communications between teammates or generating reports.”
With your use policy and adoption strategy outlined, it’s now time to assess your IT ecosystem and whether it can support generative AI.
“You’ll have to clean up your backyard. Get your data prepared by evaluating whether your data estate can perform to support your generative AI model,” explains Dan Kronstal, principal architect of solutions at Insight Canada.
He likens these steps to getting your fundamentals in order: reaching cloud maturity, with infrastructure, security, networks and governance well established. You should also have platform services shifted to the cloud — essentially, your IT should be “cloud native.”
When you’re running and managing an effective infrastructure environment, it will be easy to get value from generative AI, he adds.
With your house tidied, it’s time for adoption and employee use. A phased rollout across the organization is best. Remain mindful of what data the tool has access to and which end users are using the tool.
Kronstal notes that when you integrate a tool like ChatGPT into Microsoft Azure, you can have different security groups and retain existing data guardrails — simplifying your data management in this context.
“As long as your security policies maintain the right level of access, you can support innovation with this tool,” he says.
Insight recently released InsightGPT for use across the entire business. To start using the tool, we only have to open up Microsoft Teams and there it is — ready to give helpful outputs based on our detailed inputs.
Excitement around this technology is quickly turning into tangible results at Insight.
As The Harris Poll revealed, there are several job roles that can benefit from using generative AI. Many leaders identified productivity boosts across the organization as a top reason for adoption in the next three years.
“Generative AI comprehends, it understands the language I’m speaking, making it more capable of doing a wide variety of things,” Rader says.
There are several major use cases for generative AI — increasing an employee’s productivity, improving customer service, assisting with research and trend analysis, and automating software development and testing.
Code reviews and testing can be done at hyper speed when a skilled employee uses a generative AI model trained in organizational practices. Additionally, training staff can be done easily by loading a generative AI tool with corporate data pertaining to processes. Staff can use the tool as a reliable resource to find answers to customer questions and learn about organizational history.
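As a concrete illustration of the code review and testing use case (not Insight’s actual tooling), a short script can hand the model a diff and ask for review comments and suggested unit tests; a skilled engineer still decides what to act on. A minimal sketch, assuming the openai Python library and an illustrative model name:

```python
import subprocess

from openai import OpenAI

client = OpenAI()  # or a private Azure OpenAI client, as sketched earlier

# Capture the current working-tree changes to review.
diff = subprocess.run(["git", "diff"], capture_output=True, text=True).stdout

review = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful code reviewer. Follow our team's conventions."},
        {"role": "user", "content": f"Review this diff and suggest unit tests:\n\n{diff}"},
    ],
)

print(review.choices[0].message.content)
```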
Insight is actively working with many organizations across different industries to develop practical generative AI uses and solutions. Yet, it’s no silver bullet for existing problems, and you’ll have to be ready and remain flexible.
Insight CEO Mullen compares generative AI to a junior associate just out of college.
“We have to treat generative AI tools like they’re junior associates. They need coaching, training, oversight and intervention. If everyone, from developers and marketers to HR professionals and lawyers, thinks about generative AI tools this way, we can increase our productivity and do more with less.”
Generative AI use without the right guardrails in place can have negative consequences. The Harris Poll reports that many business leaders are concerned about quality control (51%), safety and security (49%) and ethics (29%).
We’re all familiar with science fiction’s many cautionary tales about AI, whether it’s Kurt Vonnegut’s “Player Piano” or Ridley Scott’s “Blade Runner.”
While the machines aren’t making humanity useless or conquering the world, organizations must be cautious about real concerns around biased results, privacy exposure and more. After all, with great power comes great responsibility.
Be cautious when adopting and integrating generative AI in your workplace. This technology can be powerful, but it can also be harmful if not used correctly. Take a well-planned approach to integrating it into your IT systems to prevent harm and misuse while optimizing its benefits.
An AI Center of Excellence (CoE) is a smart way to develop best practices and policies around execution, and it will be instrumental in any generative AI project. Following this model will help your business remove team silos and accelerate AI’s impact.
Your CoE should include expertise across your business, including:
Data science and DevOps to establish requirements and develop models.
FinOps to maintain the project’s budget.
Platform engineering and DevX to ensure your cloud and infrastructure platforms work alongside the generative AI.
Legal to keep your project compliant with industry regulations and established laws.
Business stakeholders who will help define use cases.
Finding the right mix of input into these projects will give your generative AI solution a boost to meet business needs for today and tomorrow.
Getting input from various voices will be equally beneficial for using generative AI responsibly. You’ll want to be aware of potential biases for any given use case. Biases and stereotypes can creep in easily, so guarding against them is key.
“With these models, there’s always going to be some bias,” says Matt Jackson, Insight’s global chief technology officer and solutions portfolio senior vice president.
“Being aware of those biases and making sure to avoid potential dangers around implementing generative AI and automating processes will be vital to your success.”
The responsibility rests on businesses to avoid mistakes with generative AI.
Jackson notes that large language models are trained on both good and bad examples when it comes to diversity, inclusion, race and gender discrimination, and more. So, your business will ultimately need to be cautious with its use cases and test for potential issues.
“Be ready to navigate those pitfalls,” Jackson explains. “And make sure that a human is always involved in any decision-making that an AI participates in.”
Ensure effective generative AI adoption. Insight helps you move your generative AI further, faster.
Jesse A. Millard is a Phoenix-based writer at Insight. Previously, he was a writer at several Phoenix-based publications where he primarily covered technology.
1 Perez, S. (2023, April 18). FTC Warns That AI Technology Like ChatGPT Could ‘Turbocharge’ Fraud. TechCrunch.
2 Metz, C. (2023, May 16). Microsoft Says New A.I. Shows Signs of Human Reasoning. The New York Times.
3 OpenAI. (2023). Pioneering Research on the Path to AGI.