Business Knowledge Beats Technical Skill

Most of the value of an IT developer comes from their knowledge of the business. Their knowledge of specific programming languages or tools comes a distant second. With AI-supported development tools like Copilot, this balance tilts even further towards business skills.

That’s why I’m appalled every time I see yet another company replacing hundreds of skilled IT professionals. I’ll grant you that some organizations have too many people and might need to trim their headcount. But often, organizations are trying to kickstart a digital transformation by replacing old hands with a crowd of bright-eyed young things with the latest buzzwords on their CV.

Getting a new crew with better tools and techniques means you can build software faster. But by getting rid of the experienced people, you lose your ability to build the right software. Moving slowly in the right direction beats running fast in the wrong direction.

Show It, Don’t Just Talk About It

Do you still remember the world before ChatGPT? That was one year ago. It grew to one million users just five days after its launch on November 30th, 2022, and became the fastest-growing consumer product in history.

The advances in Large Language Models had been discussed by researchers for some time, but the general public didn’t understand the implications. Until the WTF epiphany everyone had when they interacted with the product for the first time.

To get buy-in for new products or digitalization projects, you must give your audience and decision-makers a functioning prototype product to generate enthusiasm. The spreadsheet showing a solid business case only appeals to the brain’s left hemisphere. But the prototype Minimum Viable Product can engage emotions in the right side of the brain. Positive feelings and enthusiasm get complex new projects started and get them past the inevitable hiccups along the way.

You cannot build these MVPs quickly if you don’t have a Rapid Application Development tool in your toolbox. That leaves you only with spreadsheets and the annual budgeting process to get new things off the ground. Organizations that can build rapid prototypes will be able to seize opportunities and will overtake those who can’t.

You Don’t Want a Sam Altman

You don’t want a Sam Altman in your organization. If you have one, you’re not running an IT organization. You are just administering a cult.

I’m all for having brilliant and charismatic performers in the organization. However, having individuals perceived internally and externally as indispensable is not good. Mr. Altman admitted as much back in June when he said, “No one person should be trusted here. The board can fire me, I think that’s important.”

It turns out that the board couldn’t fire him. He had carefully maneuvered himself into a position where investors and almost everyone on the team believed that OpenAI would come crashing down around their ears if he left, costing them the billions of dollars of profit and stock options they were looking forward to.

Make a short list of your organization’s three to five star performers. For each of them, ask yourself what would happen if they were let go or decided to leave. If any of them are in a Sam Altman-like position, you have a serious risk to mitigate.

On-Premise Culture

The boss wants you back in the office. He has a point.

The point is that unless your organization was born fully remote, it is stuck with an on-premise culture. You can try to fight it. But remember what happened the last time a new strategy initiative was launched? Your organizational culture ground down the new ideas until you were back to doing things the way you had always done them. That is what management guru Peter Drucker meant when he said that “culture eats strategy for breakfast.”

In an on-premise culture, relationships are built through in-person interactions. The exciting projects, the conference trips, and the promotions go to the people seen in the organization. You can argue that’s not fair, but all the leaders in your organization grew up in an on-premise culture.

In an on-premise culture, new ideas germinate from chance encounters. The two Nobel Prize winners in medicine this year met at the copy machine. Both were frustrated that nobody took their ideas about mRNA seriously. They started working together, and their work enabled the coronavirus vaccine.

The fully remote organization is a technologically enabled deviation from how humans have organized themselves for thousands of years. Building the culture that makes such an organization work takes deliberate, conscious decisions that go into its DNA from the founding. You cannot retrofit fully remote onto an on-premise culture.

The ROI on AI Projects is Still Negative

Unless you are Microsoft, your IT solutions are expected to provide a positive return on investment. You might have heard that Microsoft loses $20 a month for every GitHub Copilot customer. That’s after the customer pays $10 for the product. If you are a heavy user of Copilot, you might be costing Microsoft as much as $80 every month.

Some organizations are rich enough to afford unprofitable products like this, but they typically have to spend their own money. VCs seem to have soured on the idea that “we lose money on every customer, but we make up for it in volume.”

If you are running an AI project right now, you should be clear that it will not pay for itself. Outside a very narrow range of applications, typically image recognition, AI is still experimental. If you have approved an AI project based on a business case showing a positive ROI, question the assumptions behind it. The AI failures are piling up, and even the largest, best-run, and most experienced organizations in the world cannot make money implementing AI yet. You probably can’t, either. Unless you have money to burn, let someone else figure out how to get AI to pay for itself.

AI is not Coming for Your Job

Unless you write corporate mission statements, AI is not coming for your job. Generative AI like ChatGPT works by continually adding the most likely next word. That ensures that an AI-written text is a bland average of all the texts it has read. It is unlikely to be thought-provoking or even useful.
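
To make that mechanism concrete, here is a toy sketch of the “most likely next word” loop in Python. It uses a trivial bigram counter over a made-up corpus; a real LLM uses a neural network over subword tokens, so this illustrates only the greedy decoding idea described above, not how ChatGPT is actually built.

```python
from collections import Counter, defaultdict

# A made-up corpus; any text would do for this illustration.
corpus = (
    "the board can fire me the board can replace me "
    "the board should be trusted the board can fire me"
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        # Always append the single most likely next word. This is what pulls
        # the output towards a bland average of the text the model has seen.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # drones on predictably, never surprisingly
```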

I was reminded of how useless AI-generated text is when LinkedIn invited me to participate in a “collaborative article.” The AI generates a text on a subject, and I am supposed to add a real-life story or lesson next to it. Unfortunately, the AI text is a collection of trivial platitudes. LinkedIn asked me to rate the article, and I immediately clicked “It’s not so great” (because there was no lower rating). Sadly, the feedback options did not include “Your AI text adds no value.”

The striking writers in Hollywood want guarantees from the studios that they won’t be replaced with AI. They need not worry. A script written by AI will be mind-numbingly boring. What AI might do for the film and TV industry is to take over boring housekeeping tasks like ensuring continuity – was the blood on his left or right jacket sleeve? But it won’t write the next hit show or movie.

The right way to use AI in its current state is to use it deductively – to analyze stuff. Programmers who inherit a huge pile of undocumented code benefit from having ChatGPT or its siblings explain the code. Using AI inductively to generate text might be fun, but it doesn’t create any value.
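
For the “explain this inherited code” use case, a minimal sketch could look like the following. It assumes the OpenAI Python SDK (version 1.x) and an OPENAI_API_KEY in the environment; the model name and the legacy snippet are placeholders, and whatever you paste in should of course comply with your organization’s AI guidelines.

```python
# Minimal sketch: asking an LLM to explain a piece of inherited, undocumented code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_snippet = """
def calc(p, r, n):
    return p * (1 + r / 12) ** (12 * n)
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You explain legacy code to maintenance programmers."},
        {"role": "user", "content": f"Explain what this function does and suggest a better name:\n{legacy_snippet}"},
    ],
)

print(response.choices[0].message.content)
```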

The Guard Rail Pattern

There is a simple way to prevent many IT disasters, and it is sadly underused. It’s not on the standard lists of design patterns, but I call it the “Guard Rail” pattern.

It would have prevented the IT disaster that dominates the news cycle in Denmark these days. Techno-optimists have forced a new digital property valuation system on the long-suffering Danes, and it is an unmitigated catastrophe. The point is to replace the professional appraisers who determine the value of a property for tax purposes with a computer system, and many of the results from the computer are way off. Implementing a Guard Rail pattern would mean comparing the output from the new system to the old one and routing valuations that come out, for example, three times higher to manual processing instead of publishing them automatically.
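
As a sketch of what that could look like, the snippet below compares a new valuation against the old system’s figure and routes outliers to manual processing. The 3x threshold and the field names are illustrative assumptions; the real rule would come from the appraisers.

```python
from dataclasses import dataclass

@dataclass
class Valuation:
    property_id: str
    amount_dkk: float

def passes_guard_rail(new: Valuation, old: Valuation, max_ratio: float = 3.0) -> bool:
    """Return True if the new valuation may be published without human review."""
    if old.amount_dkk <= 0:
        return False  # no usable baseline, send to a human appraiser
    ratio = new.amount_dkk / old.amount_dkk
    return 1 / max_ratio <= ratio <= max_ratio

new_valuation = Valuation("DK-123", 4_800_000)
old_valuation = Valuation("DK-123", 1_200_000)

if passes_guard_rail(new_valuation, old_valuation):
    print("Publish automatically")
else:
    print("Stop and route to manual processing")  # this one is 4x the old value
```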

A colleague just shared a video of the latest iteration of the Tesla Full Self Driving mode. This version seems to be fully based on Machine Learning. Previous versions used ML to detect objects and traditional algorithmic programming to determine how to drive. As always infatuated with his own cleverness, Elon Musk does not seem to think that guard rails are necessary. Never mind that the FSD Tesla would have run a red light had the driver not stopped it. Implementing the Guard Rail pattern would mean that a completely separate system gets to evaluate the output from the ML driver before it gets passed to the steering, accelerator, and brakes.

When I attach a computer to my (traditional) car to read the log, I can see many “unreasonable value from sensor” warnings. This indicates that traditional car manufacturers are implementing the Guard Rail pattern, doing a reasonableness check on sensor inputs before passing the values to the adaptive cruise control, lane assist, and other systems. But the Boeing 737 MAX 8 flight control software was missing a crucial Guard Rail, allowing the computer to override the pilot and fly two aircraft into the ground.
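
An input-side Guard Rail like the sensor check in my car might look like this sketch: readings outside a plausible range are logged and rejected instead of being acted on. The sensor names and ranges are invented for illustration, not taken from any real vehicle.

```python
PLAUSIBLE_RANGES = {
    "speed_kmh": (0.0, 300.0),
    "distance_to_lead_car_m": (0.0, 500.0),
}

def accept_reading(sensor: str, value: float) -> bool:
    low, high = PLAUSIBLE_RANGES[sensor]
    if low <= value <= high:
        return True
    # Log the anomaly; the caller falls back to the last known-good value
    # instead of feeding garbage to cruise control or lane assist.
    print(f"WARNING: unreasonable value from sensor {sensor}: {value}")
    return False

accept_reading("distance_to_lead_car_m", -12.4)  # triggers the warning
```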

In your IT organization, discuss where it makes sense to implement the Guard Rail pattern. Your experienced developers can probably remember several examples where Guard Rails would have saved you from embarrassing failures. There is no need to keep making these mistakes when there is an easy fix.

Another Large IT Project Failure – and How it Could Have Been Avoided

The City of Birmingham can be added to the long list of organizations that went bankrupt trying to replace their ERP system. They were running a heavily customized SAP system and tried to implement Oracle Fusion. As often happens in this kind of project, the costs exploded from the initial estimate of $25 million to $125 million by the last count. They are not done yet, and since they’ve stopped paying their bills, they might never be.

When you are faced with a legacy system no longer fit for purpose, don’t fall prey to the dangerous illusion that you can run one large project to replace it. A project is a collaborative enterprise intended to reach a well-defined goal. But for a large IT project, the project duration alone (four years and counting in Birmingham) ensures that the goalposts will have moved several times before you are done. Your Program Manager is not likely to be among the few hundred people in the world with the exceptional project and change management skills needed to pull off such a project.

A series of smaller projects to carve out and replace functionality in smaller chunks does not promise to solve all your problems in one fell swoop. But it has a much higher chance of success.

Would You Notice the Quality of Your AI Dropping?

You know that ChatGPT is getting more politically correct. But did you know that it is also getting dumber? Researchers have repeatedly asked it to perform tasks like generating code to solve math problems. In March, GPT-4 could generate functioning code 50% of the time. By June, that ability had dropped to 10%. If you’re not paying, you are stuck with GPT-3.5. That version managed 20% correct code in March but was down to approximately zero in June 2023.

This phenomenon is known to AI researchers as “drift.” It happens when you don’t like the answers the machine gives and take the shortcut of tweaking the parameters instead of expensively re-training your model on a more appropriate data set. Twisting the arm of an AI to generate more socially acceptable answers has been shown to have unpredictable and sometimes negative consequences.

If you are using any AI-based services, do you know what engine is behind the solution? If you ask, and your vendor is willing to tell you, you will find that most SaaS AI solutions today are running ChatGPT with a thin veneer of fine-tuning. Unless you continually test your AI solution with a suite of standard tests, you will never notice that its quality has gone down the drain because OpenAI engineers are pursuing the goal of not offending anyone.
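
A standing drift check does not need to be elaborate. The sketch below runs a fixed suite of prompts and tracks the pass rate over time; ask_model is a stand-in for whatever API your vendor exposes, and the toy test cases would be replaced with prompts from your own domain.

```python
from datetime import date

# Fixed test cases with unambiguous expected answers.
TEST_SUITE = [
    {"prompt": "What is 17 * 23? Answer with the number only.", "expected": "391"},
    {"prompt": "Is 97 a prime number? Answer yes or no.", "expected": "yes"},
]

def ask_model(prompt: str) -> str:
    # Stand-in: call your AI vendor's API here.
    raise NotImplementedError

def run_drift_check() -> float:
    passed = sum(
        1 for case in TEST_SUITE
        if ask_model(case["prompt"]).strip().lower() == case["expected"]
    )
    score = passed / len(TEST_SUITE)
    print(f"{date.today()}: {passed}/{len(TEST_SUITE)} tests passed")
    return score  # store the score and alert when it drops between runs
```

Run it on a schedule, keep the scores, and you will notice a quality drop like the one the researchers measured long before your users complain.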

Do Your Employees Follow Your AI Guidelines?

Unless you override it, your organization’s policy for AI-driven tools is “anything goes.” That’s because your developers want to get their job done as quickly as possible. If that involves having GitHub Copilot write part of the code or copying a code block into ChatGPT for debugging help, so be it.

If you don’t have secrets, maybe that’s fine with you. But even though OpenAI is not training ChatGPT on user prompts, they have not been very diligent about keeping them safe. You should assume that everything your developers paste into ChatGPT will eventually leak.

That includes your data. AI tools are very good at data cleaning and visualization. Your Data Scientists are surely pasting data into ChatGPT and getting back fully functional Python code to run in a Jupyter Notebook. Unless you tell them not to.

If I asked one of your developers or Data Scientists about your policy on AI tools, would they know it? And would they follow the rules, or would they take the 10x or 100x productivity boost?