Ask a Distinguished Engineer:
Rising roles in tech
As technology continues to evolve, so do the roles and responsibilities of IT teams. It’s important to stay updated on the latest acronyms and industry jargon to remain competitive. Here, our distinguished engineers share their expertise on the changing landscape of technology, breaking down industry acronyms and explaining how (and why) some IT functions are on the rise.
What is a Distinguished Engineer?
An Insight Distinguished Engineer (DE) is an industry-recognized IT professional who’s earned the highest levels of thought leadership, mentorship and industry influence. DEs are highly tenured, well-respected and sought-after veterans of the industry. Their perspectives on key trends and evolving technologies give business leaders new ways to approach digital transformation.
What is FinOps?
FinOps is an emerging practice — equal parts art and science — in many organizations. It’s a sister discipline to DevOps and DevSecOps. DevOps has been around for a long time, and we’ve seen firsthand the value it brings to organizations.
DevOps teams and practitioners are not typically financially driven. Although they might have the best intentions — creating products, releasing new features and capabilities — they’re not trained or tasked to do so in a fiscally responsible way.
This is where FinOps comes in. It’s the art of undertaking the DevOps journey in a fiscally responsible way, working in tandem with DevOps as its enabler and partner.
FinOps teams must be highly competent financial planners. They must understand financing, costs and measures. They also have to be excellent at reporting, because that’s the crux of the matter.
The most senior of our Distinguished Engineers, with more than 30 years as a technologist, Juan has demonstrated expertise in everything from networking and IT strategy to server architecture and much more. As chief architect, he’s seen this market evolve and transform, leading every step of the way as a client-obsessed thought leader and dedicated mentor.
Furthermore, they must be technically savvy since they need to know whether investing in a particular capability is worth the cost.
For example, if they’re deploying on a particular cloud, using this or that instance, and leveraging other capabilities, it could potentially accelerate the flywheel, but it could also cost a lot of money. Therefore, they must have an outward-looking perspective on Return on Investment (ROI) and an inward-looking perspective on features and capability.
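To make those two perspectives concrete, a back-of-the-envelope FinOps calculation might weigh a capability’s monthly cloud spend against the revenue it enables. The sketch below is illustrative only — the hourly rates, instance counts and revenue figures are invented assumptions, not real cloud pricing.

```python
# Illustrative FinOps ROI sketch; all prices and revenue figures are
# invented assumptions, not real cloud pricing.

HOURS_PER_MONTH = 730  # average hours in a calendar month

def monthly_cost(hourly_rate: float, instance_count: int) -> float:
    """Inward-looking view: what the capability costs to run."""
    return hourly_rate * instance_count * HOURS_PER_MONTH

def roi(monthly_revenue: float, monthly_spend: float) -> float:
    """Outward-looking view: return per dollar invested."""
    return (monthly_revenue - monthly_spend) / monthly_spend

# Hypothetical choice: a larger instance may accelerate the flywheel,
# but it also costs a lot more to run.
small = monthly_cost(hourly_rate=0.10, instance_count=4)
large = monthly_cost(hourly_rate=0.40, instance_count=4)

print(f"small fleet: ${small:,.2f}/mo, ROI {roi(1500.0, small):.2f}")
print(f"large fleet: ${large:,.2f}/mo, ROI {roi(2400.0, large):.2f}")
```

Even a toy model like this forces the question FinOps exists to answer: does the extra spend on the bigger instance actually buy enough return?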
One of the biggest challenges with FinOps is that it demands a complex mix of unique skills, which makes it a poorly understood discipline.
Cloud billing, data center billing, edge resource consumption and numerous other factors contribute to the complexity of FinOps. Different kinds of bills, meter types, acquisition models and return on investment models all come into play. This can make obtaining data quite a challenge.
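One common way tooling tames this complexity is to normalize every bill — whatever its source, meter type or acquisition model — into a single record format before reporting. The schema and sample line items below are illustrative assumptions, not any vendor’s actual billing format.

```python
# Sketch of normalizing heterogeneous billing data into one schema.
# Field names and sample records are illustrative, not a real vendor format.
from dataclasses import dataclass

@dataclass
class CostRecord:
    source: str      # "cloud", "data_center", "edge", ...
    item: str
    usd: float

def from_cloud_bill(line: dict) -> CostRecord:
    # Cloud bills often meter usage as quantity x unit price.
    return CostRecord("cloud", line["service"], line["qty"] * line["unit_price"])

def from_dc_bill(line: dict) -> CostRecord:
    # Data-center charges may arrive as flat monthly amounts instead.
    return CostRecord("data_center", line["asset"], line["monthly_charge"])

records = [
    from_cloud_bill({"service": "object-storage", "qty": 500, "unit_price": 0.02}),
    from_dc_bill({"asset": "rack-a7", "monthly_charge": 1200.0}),
]

total = sum(r.usd for r in records)
print(f"total spend: ${total:,.2f}")  # $1,210.00
```

Once every source lands in the same shape, the reporting that FinOps practitioners depend on becomes a straightforward aggregation rather than a reconciliation exercise.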
However, there is good news. A lot of work is being done to refine the art and science of developing tools that provide FinOps practitioners with the appropriate data and systems they need to do their job.
What is platform engineering?
Platform engineering is an emerging best practice that many organizations, especially larger ones, are adopting.
What many organizations have come to realize is that when they built their first DevOps practices, they probably put some of their best and brightest people onto those initial projects. Those people were probably amazingly talented at writing code, running infrastructure operations and so on. And because it was so successful, the organization went ahead and spun up more of these DevOps projects.
The problem is that not everyone who participates in things like feature releases likes to handle the operational side, is a security practitioner or understands infrastructure. With on-premises models in particular, few DevOps practitioners understand the security implications across a global infrastructure, which makes the work challenging.
This is where the concept of platform engineering comes in, though it goes by different names, such as Internal Developer Platform (IDP) and Developer Experience (DevX). The core idea of platform engineering is that DevOps teams work with a line of business to release features. The platform engineering team is responsible for building and identifying the tooling necessary to accelerate the ability of DevOps teams to do that.
They also identify where the product will run — such as on-premises or hybrid — and the APIs required for persistence, functions as a service and anything else being leveraged by DevOps teams. There are also site reliability engineers, who have a slightly different charter, but they all work in conjunction with each other.
It’s an exciting time for platform engineering because it’s accelerating the process of building a common substrate, which allows platforms, developers and site reliability engineers to work together.
Platform engineers provide value by taking some of the choices away and accelerating the ability of developers to release new features by removing the worry of operational details.
It’s essential to work in cooperation with DevOps teams when building a developer journey. It’s not effective to create a platform engineering group, ask them to build a developer journey and then tell developers to go use it.
The best approach is to identify a common set of systems, work with developers to find the commonality and allow developers to participate in that journey.
Another essential attribute is self-discovery. Here, developers can use APIs to self-discover how to operate using the platform without having to communicate with platform engineers each time. Multiple tools are available, including those from Microsoft, Red Hat and VMware, that provide discovery and automation.
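In practice, self-discovery often takes the form of a machine-readable catalog that developers can query without filing a ticket with the platform team. The capability names, kinds and endpoint paths below are hypothetical — a toy stand-in for what the discovery tools mentioned above actually expose.

```python
# Toy model of a platform's self-discovery catalog.
# Capability names, kinds and endpoints are hypothetical illustrations.

CATALOG = {
    "postgres": {"kind": "persistence", "api": "/v1/databases"},
    "faas":     {"kind": "functions",   "api": "/v1/functions"},
    "queue":    {"kind": "messaging",   "api": "/v1/queues"},
}

def discover(kind: str) -> list[str]:
    """Let a developer find capabilities by kind — no platform-team ticket needed."""
    return sorted(name for name, meta in CATALOG.items() if meta["kind"] == kind)

print(discover("persistence"))  # ['postgres']
print(discover("messaging"))   # ['queue']
```

The design point is that the catalog, not a person, answers the “what does the platform offer and how do I use it?” question — which is exactly the acceleration platform engineering promises.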
Distinguished Engineer and Portfolio Director
Carm brings more than 25 years of experience to the table with deep expertise in cloud computing, data science, data analytics, cybersecurity and organizational innovation. He is also skilled at building and driving Centers of Excellence, which he’s done for IBM Cloud and AWS Cloud.
What is a chief data officer?
The role of a chief data officer has become critical for many organizations, especially with the shift toward generative AI and the use of data assets. It’s a relatively new role, emerging over the past 5-10 years, and it has come to the forefront with the movement toward analytics, AI and deriving value from data assets.
At the end of the day, it’s all about the data. While we may think we can use these capabilities for good purposes, they are only as good as the data we feed them.
For instance, ChatGPT’s training data was collected from the public domain, such as Wikipedia or books available online. This means that it’s only as good as that data. The same is true for organizations. To provide these capabilities within an organization, you need to understand your data assets — how to curate and govern them and provide value to the organization. That’s where a chief data officer comes in.
A chief data officer is focused on collecting, curating, managing, governing, creating stewardship models, securing, encrypting and understanding the data’s usage in the organization.
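Concretely, those responsibilities often surface as governance metadata attached to every data asset. The record below is an invented example of what a stewardship entry might capture — the field names and the training-gate rule are illustrative assumptions, not a specific governance product’s schema.

```python
# Illustrative stewardship record for a governed data asset.
# Field names and the gating rule are invented for this example only.
from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    steward: str            # accountable owner for curation questions
    classification: str     # e.g., "public", "internal", "restricted"
    encrypted_at_rest: bool
    approved_uses: tuple    # where the data may legitimately flow

def may_train_model(asset: DataAsset) -> bool:
    """Governance gate: only encrypted, explicitly approved assets feed AI training."""
    return asset.encrypted_at_rest and "ai_training" in asset.approved_uses

orders = DataAsset("orders", "jane.doe", "internal", True, ("reporting", "ai_training"))
print(may_train_model(orders))  # True
```

A check like this is what turns “understanding the data’s usage” from a policy document into something the organization can actually enforce before data reaches a model.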
As the chief data officer for Insight, I’m responsible for making sure our clients understand how to develop responsible AI within their organizations, how to leverage these capabilities to the best use and how to comply with regulations that may be coming.