Where We Are and Where We’re Going With Generative AI
Insights From a Leading Technologist
Software development is present in practically every industry. Most development hours are spent figuring out what kind of code to write and which snippets to produce, and that can be time-consuming.
Generative AI can help by suggesting code as developers write. Its usefulness goes beyond that, though, extending to other technical and soft skills, such as learning account-based marketing or coaching an HR team to communicate better.
It can also generate training plans and programs for learning nearly any skill, making it a helpful tool for software developers and anyone else looking to build new capabilities.
Prompt engineering will be one of the most important skillsets to emerge in the next few years. It’s really about getting the most out of your large language models (LLMs): if you don’t know how to speak to them, they won’t know how to answer.
You want to coach your employees and your customers on the best way to interact with these models, because you’re ultimately building a data set for the models to take in. When it comes to machine learning, the key concept is garbage in, garbage out: if your data set isn’t great, the outcome won’t be great either.
Prompt engineering is the act of providing instructions to the model with tailored inputs, which can mean stringing several instructions together to reach a desired output, or feeding past outputs back in as inputs with some edits. The output from generative AI is only as good as the instructions and data it’s given. Organizations will need to focus on growing this skill across all generative AI end users.
So really think about your prompt engineering practice as a data quality practice.
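To make the chaining idea above concrete, here is a minimal sketch of a two-step prompt chain in Python. The call_llm helper, the prompts and the function names are illustrative assumptions standing in for whatever model API an organization actually uses; they are not part of any specific product.

```python
# A minimal sketch of prompt chaining: each step's output becomes part of the
# next step's input. `call_llm` is a hypothetical stand-in for whatever LLM
# API your organization actually uses.

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a hosted LLM endpoint)."""
    raise NotImplementedError("Wire this up to your model provider.")

def summarize_then_draft(source_text: str) -> str:
    # Step 1: a tightly scoped instruction produces an intermediate output.
    summary = call_llm(
        "Summarize the following internal knowledge-base article "
        "in five bullet points:\n\n" + source_text
    )
    # Step 2: the previous output is fed back in as input, with a new instruction.
    draft = call_llm(
        "Using only these bullet points, draft a customer-facing FAQ entry "
        "in a friendly, concise tone:\n\n" + summary
    )
    return draft
```

The second step can also carry corrections or edits to the first output, which is the "past outputs as inputs" pattern described above.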
Insight’s approach to adopting generative AI began with healthy conservatism and a focus on responsible AI practices. This involved ensuring that our environment was secure and that employees were trained on how to use generative AI technology.
In fact, before deploying our own secure InsightGPT model, we took steps to educate our employees on prompt engineering.
This skillset is critical to interacting with large language models effectively, efficiently and cost-effectively. We have prompt engineering courses that we’re rolling out internally, which lets our employees query these systems a little more easily and manage expectations around what they’re going to get out of these AI systems.
We coach our clients to look at generative AI in three areas:
1. Develop a community of practice. This community is composed of AI leaders from every area of the business who can devise acceptable use policies and identify high-value use cases.
2. Isolate a team (like a two-pizza team) that can develop these use cases in a secure and controlled environment.
3. Create a space where you (or your developers) can experiment with the technology and then work through use case backlogs.
At Insight, we’ve been using generative AI technology for a while, equipping our employees with the best tools via Azure and OpenAI’s ChatGPT. We’ve been able to deploy the secure InsightGPT tool and monitor its prompt and response history while tracking costs. We can see what’s working and what isn’t.
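As one illustration of what such monitoring could look like, the sketch below logs each prompt/response pair with a rough token count and estimated cost. The record structure, the per-token price and the helper names are assumptions made for this example, not details of the actual InsightGPT deployment.

```python
import json
import time
from dataclasses import dataclass, asdict

# A minimal sketch of prompt/response logging with rough cost tracking.
# The price per 1K tokens and the whitespace-based token estimate are
# placeholder assumptions; a real deployment would use the provider's
# tokenizer and published rates.
PRICE_PER_1K_TOKENS = 0.002  # illustrative only

@dataclass
class LlmInteraction:
    timestamp: float
    user: str
    prompt: str
    response: str
    est_tokens: int
    est_cost_usd: float

def log_interaction(user: str, prompt: str, response: str,
                    logfile: str = "llm_audit.jsonl") -> LlmInteraction:
    # Crude token estimate based on whitespace splitting.
    est_tokens = len(prompt.split()) + len(response.split())
    record = LlmInteraction(
        timestamp=time.time(),
        user=user,
        prompt=prompt,
        response=response,
        est_tokens=est_tokens,
        est_cost_usd=est_tokens / 1000 * PRICE_PER_1K_TOKENS,
    )
    # Append as one JSON line so usage and spend can be reviewed later.
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

A log like this is what makes it possible to see which prompts are working, which aren’t, and what the interactions are costing over time.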
It’s really important for organizations to get their data estate aligned to high-value use cases. If you have knowledge bases that your employees or customers frequently pull from to do your operational business, that may be a great place to start.
It’s also important for business leaders to understand the lifecycle of generative AI. We’re used to a traditional machine learning lifecycle.
This begins with understanding the business problem, then moves through data engineering, feature engineering, model training and model deployment, and ends with business integration. That cycle is mostly the same for generative AI, but a couple of things must shift.
Now we’re working with large language models that are trained elsewhere. Maybe we’re borrowing from them or using them in our environments, and we need to pay special attention to where our data is going. Is it going back to those systems for training? Or are we keeping all our data secured in one place, in our own environment?
AI products have historically involved only a few players in the project: project management, AI engineers and perhaps one or two data scientists. With such a team in place, we could set up a small proof of concept (POC), convert it to a minimum viable product (MVP) and have a great pilot going.
However, this has now changed. AI is being used by people across all departments in an organization, meaning that we have to scale.
This scaling process requires coordination and cooperation between the PMO, the product strategy team, the cybersecurity team, the AI engineering teams and, most importantly, the data estate. The data estate must align with the strategy for AI because it is the fuel that keeps the engine going.
The challenge right now is that there’s not an out-of-the-box way to really solve bias. When we evaluate a data estate, we evaluate data quality, data quantity, the trustworthiness of data and the amount of bias that may sit in that data set. You want to make sure the data isn’t going to replicate bad habits.
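As a toy illustration of one such check, the sketch below compares positive-outcome rates across groups in a tabular training set and flags large gaps. The column names, sample data and 10-point threshold are assumptions for the example; evaluating bias in a real data estate is much broader than any single metric.

```python
import pandas as pd

# A toy illustration of one bias check: compare positive-outcome rates across
# groups in a training set. Column names ("group", "label") and the 10-point
# threshold are assumptions for this example, not a general standard.

def outcome_rate_by_group(df: pd.DataFrame,
                          group_col: str = "group",
                          label_col: str = "label") -> pd.Series:
    """Return the share of positive labels per group, as percentages."""
    return df.groupby(group_col)[label_col].mean().mul(100).round(1)

def flag_large_gaps(rates: pd.Series, max_gap_pct: float = 10.0) -> bool:
    """Flag the data set if any two groups differ by more than max_gap_pct points."""
    return (rates.max() - rates.min()) > max_gap_pct

if __name__ == "__main__":
    sample = pd.DataFrame({
        "group": ["A", "A", "B", "B", "B", "C"],
        "label": [1, 0, 1, 1, 1, 0],
    })
    rates = outcome_rate_by_group(sample)
    print(rates)
    print("Review for bias:", flag_large_gaps(rates))
```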
My advice to business leaders who are adopting generative AI is to proceed with care, but don’t hold back from allowing your engineering teams to experiment. We really do need a place to discover new use cases; we can’t always dream up the best ones without a platform or playground to experiment in.
There have been many questions regarding whether we’re approaching hyper-automation and what the future looks like in three to five years. Will our large language models tell our machines how to write new applications, thus automating all of our business processes?
It’s a lot to expect from the technology, but all the pieces are there to make it work: we have large language models and excellent DevOps and platform engineering capabilities. What’s still lacking, at present, is universal trust in these models and confidence in their reliability in production.
As technologists, it’s not our job to persuade or convince users to trust AI. Instead, we should work to continually improve the technology, making it more reliable and trustworthy over time.
The work should speak for itself, such that AI leaders can see with their own eyes that the technology is reliable. Let’s continue improving AI so we can embrace its capabilities more fully in the future.
Go further, faster with generative AI. We’ll help you prove the value of this technology to your organization and accelerate a fast, secure adoption.
Learn more