Ask a Distinguished Engineer:
Generative AI, Cloud Controversies and What’s Really Halting Progress
As new trends and innovations emerge, they promise to reshape the way we work and live. Staying ahead of the curve is crucial for technical talent to succeed in today’s digital age. Insight’s team of Distinguished Engineers understand the stakes and are up to the challenge.
In this featured section of Tech Journal, our Distinguished Engineers answer real questions from other IT professionals and clarify technology misconceptions.
At Insight, a Distinguished Engineer (DE) is an industry-recognised IT professional who's earned the highest levels of thought leadership, mentorship and industry influence. DEs are highly tenured, well-respected and sought-after veterans of the industry. Their perspectives on key trends and evolving technologies give business leaders new ways to approach digital transformation.
Multicloud is largely controversial amongst purists and perhaps some providers, but not in many of our clients’ organisations. In fact, most of them are adopting a hybrid multicloud approach where some workloads are spread amongst the different cloud providers, while others remain on-premises.
Prevailing wisdom from three to four years ago suggested that all workloads should be moved to the cloud. Organisations have since realised that some workloads are purpose-built for the cloud, while others require significant effort to transform before being moved.
For these, it may make more sense to keep the workloads on-premises for the transformational period — or indefinitely — as the cost benefit may not outweigh the effort of moving them to the cloud.
In contrast to a hybrid methodology, multicloud involves consuming resources from multiple cloud providers and attempting to homogenise the capabilities across all of them, which is extremely difficult. Most organisations lack the necessary capabilities and likely do not want to pursue this approach. For example, object storage in AWS works very differently from its native equivalents in Azure or Google Cloud Platform (GCP). And that's just one set of APIs and one service. The public clouds are made up of thousands of services — each similar in purpose, but incompatible across providers.
Although there are some common layers that can help with this, it is still generally very difficult. If an application was written assuming one cloud provider's persistence model and you later want to run that workload on another provider, you would need to rewrite the entire persistence layer. That is an incredibly challenging task and likely not worth the effort for many workloads, even though many organisations will probably attempt it eventually.
There is work being done in the Independent Software Vendor (ISV) community to create a shim or abstraction layer that sits above the provider-specific services — but that work is very much in its infancy. Alternatively, organisations can build their own platforms that expose capabilities to their developers by abstracting the underlying layers. This requires a significant investment but can pay huge dividends. Where we see this done effectively, it's typically part of a Platform Engineering, Developer Experience (DevEx) and/or Internal Developer Platform (IDP) effort.
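As a rough illustration of that abstraction idea, here is a minimal sketch in Python: a provider-neutral object-storage interface with AWS and GCP implementations behind it. The class and method names are hypothetical, and the sketch assumes the boto3 and google-cloud-storage client libraries with credentials already available in the environment.

```python
# Minimal sketch of an object-storage abstraction layer (hypothetical interface).
# Assumes boto3 and google-cloud-storage are installed and credentials are set.
from abc import ABC, abstractmethod

import boto3
from google.cloud import storage as gcs


class ObjectStore(ABC):
    """Provider-neutral interface the application codes against."""

    @abstractmethod
    def put(self, bucket: str, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, bucket: str, key: str) -> bytes: ...


class S3Store(ObjectStore):
    def __init__(self):
        self.client = boto3.client("s3")

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=bucket, Key=key, Body=data)

    def get(self, bucket: str, key: str) -> bytes:
        return self.client.get_object(Bucket=bucket, Key=key)["Body"].read()


class GCSStore(ObjectStore):
    def __init__(self):
        self.client = gcs.Client()

    def put(self, bucket: str, key: str, data: bytes) -> None:
        self.client.bucket(bucket).blob(key).upload_from_string(data)

    def get(self, bucket: str, key: str) -> bytes:
        return self.client.bucket(bucket).blob(key).download_as_bytes()


# Application code depends only on ObjectStore, so moving a workload between
# providers means swapping the implementation, not rewriting persistence logic.
def save_report(store: ObjectStore, report: bytes) -> None:
    store.put("reports", "latest.pdf", report)
```

The point of the sketch is the dependency direction: application code talks to the interface, and the provider-specific details stay behind it, which is exactly the kind of investment a platform team makes once on behalf of many development teams.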
We are starting to see significant work in this area, particularly with platforms like Backstage, which supports a platform engineering approach: a set of capabilities is offered to developers, curated by peers within the organisation. This lets developers maintain their velocity and still meet the requirements of their role, while also providing cost controls, availability and mobility across hybrid cloud and hybrid multicloud environments. Private clouds, however, do not operate in the same way as public clouds, which can make the multi-hybrid cloud approach challenging.
Juan Orlandini
Chief Technology Officer, Insight North America and Distinguished Engineer
The most senior of our Distinguished Engineers with more than 30 years as a technologist, Juan has demonstrated expertise in everything from networking and IT strategy to server architecture and much more. As chief technology officer, he’s seen this market evolve and transform, leading every step of the way as a client-obsessed thought leader and dedicated mentor.
Essentially, there are three approaches you can use to derive more value from an LLM for your organisation: prompt engineering, fine-tuning and original model training.
Prompt engineering (sometimes called prompt injection, and practised through zero-shot or multi-shot prompting) is the process of sending supplemental context to the model. With this method, you can give the model additional knowledge through a series of inputs, helping you achieve a more desirable result.
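As a minimal sketch of that idea, the snippet below assembles a few-shot prompt with the OpenAI Python client; the model name, system instruction and example pairs are illustrative assumptions rather than a recommended configuration.

```python
# Minimal few-shot prompting sketch. Assumes the openai>=1.0 Python client and
# an OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    # Supplemental context the base model would not otherwise have.
    {"role": "system",
     "content": "You write product blurbs in our house style: short, plain, no superlatives."},
    # A few worked examples ("shots") steer the output toward the desired result.
    {"role": "user", "content": "Product: 24-port PoE switch"},
    {"role": "assistant", "content": "Powers and connects up to 24 devices from one box."},
    {"role": "user", "content": "Product: 4TB NVMe SSD"},
    {"role": "assistant", "content": "Fast local storage for large builds and datasets."},
    # The actual request.
    {"role": "user", "content": "Product: 65W USB-C dock"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

Nothing about the model changes here; all of the added knowledge lives in the prompt, which is what makes this the lightest-weight of the three approaches.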
Fine-tuning involves taking the existing model and modifying its parameters to train it for a specific task. (Parameters are the learned weights that encode the knowledge stored in the model itself.)
When a model is trained for the first time, its parameters are learned from large volumes of data drawn from sources like the web. Fine-tuning then adjusts those parameters using your own, task-specific data. If the LLM doesn't know how to perform a specific task, such as creating marketing material for your organisation, you can train it to take advantage of your expertise in this area.
However, there are some notable ramifications to this method. When you fine-tune or train a model, it can sometimes forget capabilities it previously had because you're overwriting parameters it learned earlier. Therefore, it's important not to overtrain the model, as it may forget how to perform certain tasks.
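To make that concrete, here is a hedged sketch of supervised fine-tuning with the Hugging Face transformers Trainer. The base model, dataset file and hyperparameters are placeholders; a single epoch and a low learning rate are shown as one common way of limiting the forgetting described above.

```python
# Minimal supervised fine-tuning sketch using Hugging Face transformers.
# Model name, dataset path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"  # assumed; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Your task-specific examples, e.g. marketing copy in your organisation's voice.
dataset = load_dataset("json", data_files="marketing_examples.jsonl")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llm-finetuned",
        num_train_epochs=1,          # few epochs and a low learning rate
        learning_rate=2e-5,          # help limit catastrophic forgetting
        per_device_train_batch_size=2,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```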
Finally, if you're looking to have greater control over your model's inputs (and overall output), it might be worth evaluating the development of a model based on your own data. This involves taking an open model architecture, such as Llama 2, and using the same structure and training process used to build that model. This means you can use your own data from the beginning to create an LLM tailored to your specific needs instead of relying on information provided by third parties.
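For a sense of what starting from scratch means in practice, the sketch below initialises a small Llama-style architecture with random weights, so every parameter is subsequently learned from your own corpus. The layer sizes and tokenizer repository are illustrative assumptions, and a production effort would involve far more data, compute and evaluation.

```python
# Sketch of starting a model from scratch: a Llama-style architecture with
# randomly initialised weights, so all knowledge comes from your own corpus.
# Sizes and the tokenizer repository are illustrative assumptions.
from transformers import LlamaConfig, LlamaForCausalLM, LlamaTokenizerFast

# Reuse an existing tokenizer for brevity; a real effort would train its own.
tokenizer = LlamaTokenizerFast.from_pretrained("hf-internal-testing/llama-tokenizer")

config = LlamaConfig(
    vocab_size=tokenizer.vocab_size,
    hidden_size=512,             # far smaller than Llama 2, for illustration
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1376,
)
model = LlamaForCausalLM(config)  # random weights; no third-party pretraining

# From here, pretraining on your own corpus proceeds much like the fine-tuning
# sketch above, only with far more data, compute and evaluation.
print(f"{model.num_parameters():,} parameters to train from scratch")
```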
This approach is becoming more popular among organisations with special-purpose models, as prompt engineering and fine-tuning may not be sufficient to achieve the desired results. So, if you have enough information or data within your particular segment, training a model on your own data alone could be a more valuable strategy than prompt engineering or fine-tuning.
Carmen (Carm) Taglienti
Chief Data Officer and Distinguished Engineer, Insight
Carm brings more than 25 years of experience to the table with deep expertise in cloud computing, data science, data analytics, cybersecurity and organizational innovation. He is also skilled at building and driving Centers of Excellence, which he’s done for IBM Cloud and AWS Cloud.
Organizations often focus too much on the technology and forget about the people and process components. Changing people's habits and muscle memory is difficult and takes time, and organizations need to realize that the personal adoption of new technologies can often be the hardest part.
When implementing a new technology or solution, there is an expectation from leaders that it will solve their problems and/or add value. Although this is likely to eventually hold true, the organization may not experience these benefits immediately.
As with any form of change, especially in the workplace, it takes time to see the intended results. And if it’s a larger organization or enterprise, the path to innovation may be even longer or more complex.
I like to use the analogy of a small speedboat and a large cruise ship. A speedboat can pivot instantly, while the cruise ship requires more time and planning to execute a turn. The larger the ship, the more planning and time it takes, and any unexpected events that occur during the turn can be harder to correct. And it is a far more complex operation to execute successfully than it seems.
These patterns appear in every industry and client, including the IT environment, and it's important to keep them in mind when considering the adoption of new technology or making other significant changes. Without that awareness, it becomes harder to set realistic expectations for the business, to determine the total level of effort and resources required, and to calculate an ROI and Total Cost of Ownership (TCO) that encompass more than just the technology.
Overall, organizational leaders need to be cognizant of the human factor when it comes to innovation and change.
They need to take into account the cognitive load and fatigue that change can cause in individuals, be sympathetic to the fact that change is difficult and can be emotionally taxing for their teams, and understand that this challenge grows with the number of people experiencing change in their organization. By doing so, they can more effectively clear the path to innovation, unlock the value of their people and bring about more meaningful change (like gen AI solutions) faster in the future.
Jeff Bozic
Principal Architect and Distinguished Engineer, Insight
Jeff has 18 years of industry experience in systems administration, operations and architecture. He brings a multidisciplinary understanding of IT, its interdependencies, and the upstream and downstream effects of change across technology, people, skills and operations to his approach to purposeful transformation. He's passionate about helping clients prioritise and structure their journey to deliver value to the business quickly, often and continuously.
When you start developing a cloud strategy, one of the first things you need to prioritise is a prescriptive way to approach your use of cloud technologies. Just saying you’re in the cloud or executing a boilerplate cloud-related approach is not sufficient. Instead, you need to have a clear pathway, methodology and framework to guide your cloud adoption journey.
A key aspect of optimising and enhancing cloud strategies is selecting the right cloud for the right use case. You should choose a primary cloud provider based on factors including, but not limited to, your technology stack, development community, cost, industry, model selection and business needs. Choosing primary, secondary and tertiary providers allows organisations to accelerate decision-making and generate velocity.
For instance, if your organisation develops a large percentage of C#/.NET applications, Microsoft Azure is a natural fit because the cloud and most of the necessary elements are baked into the developers' IDE. On the other hand, if your organisation has a proclivity toward open source software development, you'll likely gravitate toward AWS.
Designing for resilient systems and exercising for failure is also essential for successful cloud adoption.
Cloud providers build in most of the tools needed for resiliency in the face of natural disasters, human errors, cyberattacks and many other events that impact system reliability. However, organisations need to use these tools and test them to ensure they work in real-world scenarios. You should follow a prescribed pathway and build elements as code to place you on the "golden path" to unlocking the inherent value found in cloud platforms.
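One lightweight way to exercise for failure is to inject faults in automated tests, so resilience logic is proven before a real outage proves it for you. The sketch below is a hypothetical example in plain Python with pytest: a simple retry-with-backoff wrapper exercised against a deliberately flaky dependency.

```python
# Hypothetical fault-injection test: prove the retry logic works before an
# outage does it for you. Plain Python plus pytest; names are illustrative.
import time

import pytest


class TransientError(Exception):
    """Stands in for a throttling error or availability-zone blip."""


def call_with_retries(fn, attempts=3, backoff_seconds=0.1):
    """Retry a flaky call with simple exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds * 2 ** (attempt - 1))


def test_recovers_from_injected_failures():
    calls = {"count": 0}

    def flaky_dependency():
        calls["count"] += 1
        if calls["count"] < 3:          # fail twice, then succeed
            raise TransientError("injected outage")
        return "ok"

    assert call_with_retries(flaky_dependency) == "ok"
    assert calls["count"] == 3


def test_gives_up_after_max_attempts():
    def always_down():
        raise TransientError("region unavailable")

    with pytest.raises(TransientError):
        call_with_retries(always_down, attempts=2, backoff_seconds=0)
```

The same principle scales up to full game days and chaos experiments; the point is that failure handling is tested deliberately rather than discovered in production.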
To achieve the full benefits of cloud computing, consider the transition from Cloud 1.0 to Cloud 3.0. Cloud 1.0 is where organisations use cloud as a basic data centre; Cloud 2.0 is where organisations optimise for cloud and become more efficient; and Cloud 3.0 is where organisations take advantage of the richness of native services in the cloud. Organisations should aim for Cloud 3.0 to unlock that full value.
Michael Nardone
Director, Cloud Solutions and Distinguished Engineer, Insight
Michael is obsessed with helping clients achieve business outcomes and deliver value through modern cloud platforms and accelerated software development. In his 22 years of experience with enterprise technology, Michael has held various roles across deep technical administration, engineering, architecture and strategy, with a focus on leadership and equipping teams for change. He enjoys the learning journey and rapid pace that technology brings and strives to build high-performing organisations that embrace those principles.
We want to hear from you: TechJournal@insight.com