A Teachable Moment

We remember stories. And the massive Windows outage caused by CrowdStrike is a good story.

If you work in Delta Air Lines IT, you won’t forget this story anytime soon. With millions of passengers stranded and separated from their luggage, you will probably see your CEO hauled in front of Congress for a public shaming.

If you are responsible for some of the roughly 8.5 million Windows computers that CrowdStrike, in their incompetence, managed to bring down, you are also likely to remember.

But if you dodged the bullet this time, the whole debacle will become just another tech story in your news feed, quickly forgotten.

However, there are lessons to be learned about canary deployment, robustness against poisoned data, and undocumented software dependencies. To ensure your organization makes the most of this opportunity, have someone read the CrowdStrike Preliminary Post Incident Review and tell the story at your next department meeting. Have them tell everyone why it happened and why it couldn’t happen to you. Or why it could have happened to you, but for the grace of God.
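
If someone on the team wants to make “canary deployment” concrete, a sketch like the one below can anchor the discussion. Everything in it is hypothetical – the validation rule, ring sizes, and telemetry hooks are all invented – but it shows the two safeguards this kind of incident argues for: sanity-check content before it ships, and roll it out in stages rather than to every machine at once.

```python
# Hypothetical sketch of a staged ("canary") rollout with pre-flight
# validation. All names, sizes, and thresholds are invented.
import time

BAKE_TIME_SECONDS = 1  # stand-in; real rollouts watch telemetry for hours

def validate(content: bytes) -> bool:
    """Sanity-check content before it ships (e.g. reject empty
    or all-zero files)."""
    return len(content) > 0 and any(content)

def deploy_to(ring: list[str]) -> None:
    print(f"deploying to {len(ring)} hosts")  # stand-in for real deployment

def crash_rate(ring: list[str]) -> float:
    return 0.0  # stand-in for real crash telemetry

def rollout(content: bytes, hosts: list[str]) -> None:
    if not validate(content):
        raise ValueError("content failed validation; rollout aborted")
    # Ship to progressively larger rings, pausing between steps,
    # instead of pushing to every machine on the planet at once.
    for fraction in (0.01, 0.10, 0.50, 1.00):
        ring = hosts[: max(1, int(len(hosts) * fraction))]
        deploy_to(ring)
        time.sleep(BAKE_TIME_SECONDS)
        if crash_rate(ring) > 0.001:
            raise RuntimeError("canary ring unhealthy; rollout stopped")

rollout(b"\x01signature-update", [f"host-{i}" for i in range(1000)])
```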

A continually learning organization needs a way to make knowledge stick in its people’s brains. Storytelling is an excellent way to do that. Always be on the lookout for good stories.

Avoiding Project Failure, the Frank Gehry Way

Projects by famous architect Frank Gehry are always completed on time and on budget. That’s not because he only does small and easy things – for example, he designed the Guggenheim Museum Bilbao.

But what he does do is prepare carefully. It might take several years for Mr. Gehry to plan, build scale models, and solve engineering challenges. That all happens cheaply, before the construction team moves in with thousands of people and heavy machinery. Sometimes, this preparation means a project is never built, because Gehry discovers in advance that the project as envisioned cannot be completed within the time and budget available.

Here in Denmark, we have wasted $10 million of taxpayer money several years in a row because nobody works like Frank Gehry. The politicians allocated money for “AI signature projects,” and nothing came of them in 2020. So they allocated another $10 million in 2021. Same result. In 2022, another $10 million was wasted.

The money would not have been wasted if these projects generated new knowledge. But they didn’t. They spent money on data scientists and programmers only to discover afterward either that they did not have the data they needed to train their AIs or that their use of AI violated existing legislation and citizens’ rights.

That could have been discovered cheaply before the programmers started coding. But everybody wanted to run the project. When you are considering a project in your organization, especially in a fashionable technology like AI, you need an independent outsider to review your business case. That’s one of the things I do for my customers. Get in touch to hear more.

Blocking AI is an Unwinnable Battle

Using AI is not cheating. It is a way to become more productive. You pay your employees because they perform tasks that create value for the organization. So it makes sense to let them use the best tools available to do their jobs.

Just like some schools are trying to prevent students from using AI, some companies are trying to outlaw AI. It won’t work. Research shows that 47% of people who used AI tools experienced increased job satisfaction, and 78% were more productive. You can’t fight such dramatic numbers with a blanket prohibition. If you try, your employees will use AI on their phones or in an incognito browser session while working from home.

By all means, create rules about how and where employees can use AI, and explain them thoroughly. But trying to ban AI outright is futile.

Review of “The Collapse of Complex Societies”

The Collapse of Complex Societies by Joseph A. Tainter is a well-researched and erudite account of why ancient complex societies collapsed. Tainter makes a solid point that increasing complexity is a natural and rational response as a society grows. The author refreshingly argues that collapse is not an unmitigated disaster but a rational reversion to a society of lower complexity and smaller units when the overhead of the complex society no longer offers a benefit larger than its cost.

Tainter correctly identifies the mechanism of diminishing returns as the main reason why a society runs up against its limits. However, instead of growth and complexity leveling off as the marginal benefits decrease, he argues that societies continue to increase in complexity even when the cost exceeds the benefit. He rightly debunks insufficient leadership as a reason for collapse but, at the same time, assumes that a society would sleepwalk into increasing bureaucracy beyond what makes economic sense.

He makes a much stronger argument about societal fragility. A society with a resource surplus can respond to various calamities, but as complexity increases, it will eventually consume all available resources, leaving no buffer to handle situations that were previously manageable.

Unless you are a scholar of archeology, you do not have to read the entire book. John Danaher has written a great summary here.

Business Knowledge Beats Technical Skill

Most of the value of an IT developer comes from their knowledge of the business. Knowledge of specific programming languages or tools comes a distant second. With AI-supported development tools like Copilot, the balance tips even further towards business skills.

That’s why I’m appalled every time I see yet another company replacing hundreds of skilled IT professionals. I’ll grant you that some organizations have too many people and might need to trim their headcount. But often, organizations are trying to kickstart a digital transformation by replacing old hands with a crowd of bright-eyed young things with the latest buzzwords on their CV.

Getting a new crew with better tools and techniques means you can build software faster. But by getting rid of the experienced people, you lose your ability to build the right software. Moving slowly in the right direction beats running fast in the wrong direction.

Show It, Don’t Just Talk About It

Do you still remember the world before ChatGPT? That was only one year ago. ChatGPT reached one million users just five days after its launch on November 30th, 2022, making it the fastest-growing consumer product in history.

Researchers had been discussing the advances in Large Language Models for some time, but the general public didn’t understand the implications – until the WTF epiphany everyone had when they interacted with the product for the first time.

To get buy-in for new products or digitalization projects, you must give your audience and decision-makers a functioning prototype to generate enthusiasm. A spreadsheet showing a solid business case only appeals to the brain’s left hemisphere. But a prototype – a Minimum Viable Product – can engage the emotions of the right side of the brain. Positive feelings and enthusiasm get complex new projects started and carry them past the inevitable hiccups along the way.

You cannot build these MVPs quickly if you don’t have a Rapid Application Development tool in your toolbox. That leaves you only with spreadsheets and the annual budgeting process to get new things off the ground. Organizations that can build rapid prototypes will be able to seize opportunities and will overtake those who can’t.

You Don’t Want a Sam Altman

You don’t want a Sam Altman in your organization. If you have one, you’re not running an IT organization. You are just administering a cult.

I’m all for having brilliant and charismatic performers in the organization. However, having individuals perceived internally and externally as indispensable is not good. Mr. Altman admitted as much back in June when he said, “No one person should be trusted here. The board can fire me, I think that’s important.”

It turns out that the board couldn’t fire him. He had carefully maneuvered himself into a position where investors and almost everyone on the team believed that OpenAI would come crashing down around their ears if he left, costing them the billions of dollars of profit and stock options they were looking forward to.

Make a short list of your organization’s 3-5 star performers. For each of them, ask yourself what would happen if they were let go or decided to leave. If any of them are in a Sam Altman-like position, you have a serious risk to mitigate.

On-Premise Culture

The boss wants you back in the office. He has a point.

The point is that unless your organization was born fully remote, it is stuck with an on-premise culture. You can try to fight it. But remember what happened the last time a new strategy initiative was launched? Your organizational culture completely dominated the new ideas until you were back to doing things the way you had always done them. That is what management guru Peter Drucker meant when he said that “culture eats strategy for breakfast.”

In an on-premise culture, relationships are built through in-person interactions. The exciting projects, the conference trips, and the promotions go to the people seen in the organization. You can argue that’s not fair, but all the leaders in your organization grew up in an on-premise culture.

In an on-premise culture, new ideas germinate from chance encounters. This year’s two Nobel Prize winners in medicine, Katalin Karikó and Drew Weissman, met at the copy machine. Both were frustrated that nobody took their ideas about mRNA seriously. They started working together, and their work enabled the coronavirus vaccines.

The fully remote organization is a technologically enabled deviation from how humans have organized themselves for thousands of years. Building the culture that makes such an organization work takes precise and conscious decisions, baked into its DNA from the founding. You cannot retrofit fully remote onto an on-premise culture.

The ROI on AI Projects is Still Negative

Unless you are Microsoft, your IT solutions are expected to provide a positive return on investment. You might have heard that Microsoft loses $20 a month for every GitHub Copilot customer – and that is after collecting the customer’s $10 monthly fee. If you are a heavy user of Copilot, you might be costing Microsoft up to $80 every month.

Some organizations are rich enough to afford unprofitable products like this, but they typically have to spend their own money. VCs seem to have soured on the idea that “we lose money on every customer, but we make up for it in volume.”

If you are running an AI project right now, you should be clear that it will not pay for itself. Outside a very narrow range of applications, typically image recognition, AI is still experimental. If you have approved an AI project based on a business case showing a positive ROI, question the assumptions behind it. The AI failures are piling up, and even the largest, best-run, and most experienced organizations in the world cannot make money implementing AI yet. You probably can’t, either. Unless you have money to burn, let someone else figure out how to get AI to pay for itself.

AI is not Coming for Your Job

Unless you write corporate mission statements, AI is not coming for your job. Generative AI like ChatGPT works by continually adding the most likely next word. That ensures that an AI-written text is a bland average of all the texts it has read. It is unlikely to be thought-provoking or even useful.
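
To see why, consider this toy sketch of the generation loop. It is not how a real model works under the hood – real LLMs use a neural network over tokens rather than a hand-made word table, and the table below is invented – but the loop itself is the same idea: look at the last word, append the most likely successor, repeat.

```python
# Toy illustration of next-word generation (not a real LLM).
# A hypothetical hand-made probability table stands in for the
# neural network that scores every possible next token.
bigram_probs = {
    "the":     {"company": 0.40, "team": 0.35, "synergy": 0.25},
    "company": {"will": 0.60, "values": 0.40},
    "will":    {"leverage": 0.70, "deliver": 0.30},
}

def generate(start: str, max_words: int = 10) -> str:
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        # Greedy decoding: always pick the single most likely next word.
        # This is what makes the output a bland average of the training
        # text; the surprising word is, by definition, never chosen.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # -> "the company will leverage"
```

Real systems add some randomness (“temperature”) so the output doesn’t repeat itself, but the pull towards the statistical average remains.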

I was reminded of how useless AI-generated text is when LinkedIn invited me to participate in a “collaborative article.” The AI generates a text on a subject, and I am supposed to add a real-life story or lesson next to it. Unfortunately, the AI text is a collection of trivial platitudes. LinkedIn asked me to rate the article, and I immediately clicked “It’s not so great” (because there was no lower rating). The feedback options, sadly, did not include “Your AI text adds no value.”

The striking writers in Hollywood want guarantees from the studios that they won’t be replaced with AI. They need not worry. A script written by AI will be mind-numbingly boring. What AI might do for the film and TV industry is to take over boring housekeeping tasks like ensuring continuity – was the blood on his left or right jacket sleeve? But it won’t write the next hit show or movie.

The right way to use AI in its current state is to use it deductively – to analyze stuff. Programmers who inherit a huge pile of undocumented code benefit from having ChatGPT or its siblings explain the code. Using AI inductively to generate text might be fun, but it doesn’t create any value.
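
As a sketch of that deductive use, here is roughly what asking a model to explain an inherited function looks like with OpenAI’s Python library. The model name and the legacy function are placeholder examples, and the client interface shown is the v1.x one, so check the library’s documentation for current details.

```python
# Sketch: asking an LLM to explain a piece of inherited, undocumented
# code. Assumes the openai package (v1.x interface) is installed and
# an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# A hypothetical undocumented function someone left behind.
legacy_code = """
def p(d, t):
    return sum(x[1] for x in d if x[0] == t) / max(len(d), 1)
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example name; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You are a senior developer documenting legacy code."},
        {"role": "user",
         "content": "Explain what this function does, then suggest a "
                    "better name and a docstring:\n" + legacy_code},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works with any chat-capable model; the point is that the AI is analyzing text that already exists rather than inventing new text from scratch.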