Excel Addiction

It’s not your data, it’s the company’s data. That’s why it belongs in a database or some other kind of managed data store, not in your personal Excel files. But it turns out to be very difficult to break a 40-year habit of circumventing Central IT and hacking something together with a few macros.

There is no shortage of well-documented disasters caused by excessive Excel use, including during the Coronavirus pandemic, when UK health authorities tracked infections in the legacy .xls format, which silently caps a sheet at 65,536 rows. It took days before anybody noticed that thousands of positive cases had been dropped.
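
You can at least catch that particular silent failure in code. A minimal sketch, assuming the data is ingested with pandas; the file name is hypothetical:

    # A minimal sketch of a truncation guard for spreadsheet ingestion.
    # Assumes pandas is available; the file name is hypothetical.
    import pandas as pd

    XLS_ROW_LIMIT = 65_536  # 2**16, the hard row cap of the legacy .xls format

    def load_cases(path: str) -> pd.DataFrame:
        df = pd.read_excel(path)
        # A sheet at the cap (minus one header row) was almost certainly
        # truncated, silently dropping every row beyond it.
        if len(df) >= XLS_ROW_LIMIT - 1:
            raise ValueError(f"{path} hit the .xls row limit - data was likely truncated")
        return df

    cases = load_cases("daily_infections.xls")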

Everybody is talking about having an AI policy. You need one. But you also need a data policy, and part of that policy is placing limits on Excel.

Digital Sovereignty

You need to think about Digital Sovereignty. Unless you are in the U.S., of course. For everybody else, this is a very salient topic. Especially for us in Denmark these days.

This doesn’t mean that you have to free yourself from every American cloud provider. But it does mean there is a new item in your risk evaluation: Ending up on the Office of Foreign Assets Control (OFAC) blocklist.

On a standard 5×5 risk matrix, Likelihood is Rare (1) for almost everybody. But if Impact is Catastrophic (5), you end up with a medium risk: Mitigate if cost-effective.
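
If you want the arithmetic explicit, here is a minimal sketch of such a 5×5 risk matrix; the band thresholds are illustrative assumptions, not any particular standard:

    # A minimal sketch of a 5x5 likelihood/impact risk matrix.
    # The band thresholds are illustrative; use your own risk framework.
    def risk_level(likelihood: int, impact: int) -> str:
        score = likelihood * impact  # both on a 1-5 scale
        if score >= 15:
            return "High: mitigate now"
        if score >= 5:
            return "Medium: mitigate if cost-effective"
        return "Low: accept"

    print(risk_level(1, 5))  # Rare x Catastrophic -> Medium: mitigate if cost-effective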

Switching costs almost always make it not cost-effective to transition a running system. But when you are building anything new, you don’t have switching costs. And an effective mitigation is to avoid using U.S. providers.

Another Place Not to Use AI Chatbots

Alaska wanted an AI chatbot to give legal advice. Anybody care to guess how that went?

Yes, not well. They are now 15 months into a 3-month project, but expect it to go live this month.

I’ll make a prediction: This project will end up in the bucket of IT projects that sink without a trace, leaving only cost and no business benefit whatsoever. Sadly, one in four IT projects still ends in that bucket. My hunch is that this number is increasing as AI is incorrectly applied to more and more use cases.

If you are going to implement AI with an LLM, don’t do it in a critical application like giving legal advice to citizens. There is no way to stop an LLM from hallucinating. The only way to automate advice and be sure it is correct is with old-school expert systems, where every answer can be traced back to an explicit rule.
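
To make the contrast concrete, here is a minimal sketch of the expert-system approach. The rules are invented for illustration; the point is that every answer traces back to an explicit, auditable rule:

    # A minimal sketch of a rule-based expert system for eligibility advice.
    # The rules and thresholds are invented for illustration only.
    RULES = [
        (lambda case: case["age"] < 18,
         "Minors must apply through a parent or guardian."),
        (lambda case: case["income"] <= 30_000,
         "You appear to qualify for free legal aid."),
    ]

    def advise(case: dict) -> str:
        for applies, advice in RULES:
            if applies(case):
                return advice  # every answer traces back to one explicit rule
        return "No rule applies - please see a human case worker."

    print(advise({"age": 42, "income": 25_000}))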

Good AI Advice

Who advises you on AI? Don’t say ChatGPT. Also, don’t take advice from random blowhards on LinkedIn. You need advice from someone who has a realistic view of your situation and your business.

As a consultant, I’m all for having external advisers to bring you an outside perspective. But it is equally important to have a well-founded inside perspective.

I recommend establishing an internal AI advisory board within IT. Appoint some people who are interested in AI and give them a modest time budget to keep up to date with what is happening in the field. Make the board as diverse as possible – juniors, seniors, developers, sysadmins. If you are fortunate enough to have gender and ethnic diversity in your IT organization, use that too. Have your AI board meet with the CIO/IT leader regularly, and have them present at department meetings.

Inside people are much more invested in finding AI tools that can truly help in your specific situation. They are also the ones who will suffer if you implement bad AI, which makes them well placed to see through exaggerated vendor claims.

The Right and the Wrong Way to Use LLMs

There are two ways to use Large Language Models. One works well, the other much less well. Over the holidays, I’ve been talking to a lot of family and friends about AI, and it turns out that many people conflate these two approaches.

The way that works well is to use LLMs deductively. That means starting with a lot of text and distilling some essence or knowledge from it. Because you are only asking the AI to work from text you have given it, it has much less room to run off on a tangent and make things up. At the same time, it can show off its superhuman ability to process large amounts of text. In an IT context, this is when you give it dozens of interlinked files and ask it to explain them or to find inconsistencies and bugs.

The way that doesn’t work well is using LLMs inductively. That is when you ask it to produce text based on a short prompt and its large statistical model. This allows it to confabulate freely, and the results are hit-or-miss. In an IT context, this is where you ask it to write code. Sometimes it works, often it doesn’t.
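
In code, the difference comes down to what you put in the prompt. A minimal sketch, where complete() stands in for whatever LLM API you use – an assumption, not a real library call:

    # A minimal sketch of the two usage patterns. complete() stands in for
    # whatever LLM API you use; it is an assumption, not a real library call.
    def complete(prompt: str) -> str:
        raise NotImplementedError  # wire up your own LLM client here

    def deductive(documents: list[str], question: str) -> str:
        # Deductive: distill an answer from text you supply - little room to invent.
        context = "\n\n".join(documents)
        return complete(f"Using only the text below, {question}\n\n{context}")

    def inductive(request: str) -> str:
        # Inductive: a short prompt plus the statistical model - free to confabulate.
        return complete(request)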

Whenever you discuss LLMs with someone, set the stage by defining the inductive/deductive difference. If people already know, no harm done. If they don’t have this frame of reference, establishing it makes for much better conversations.

Mainframe Mindset

Several dozen Danish banks were down for five hours yesterday. Due to incompetence, not Russian hackers.

They were running on robust mainframe systems, chosen because mainframes have spent decades earning a reputation for never going down. But it turns out that running critical systems takes both hardware and skill. And the skill was lacking.

The reason mainframes have historically had very high availability is that they are really well engineered and have been run by really competent people. But those people have reached retirement age, and their jobs are gradually being taken over by people with a different mindset. That’s how the mainframe hosting provider managed to run a poorly tested capacity management system that accidentally deallocated resources from all of its customers.

There is mainframe hardware, and there is the mainframe mindset. The “this can never, ever, be allowed to fail” mindset. Which is retiring.

Are you sure you are transferring not only skills but also attitude when training new people to take over your critical systems?

The First Thing That Comes to Mind

We’re also going to ban social media for young people here in Denmark. It won’t work here either.

There are two possible approaches to a hard problem.

One is to spend time gathering data, defining the real problem, identifying several possible solutions, implementing the most promising one, and checking the result.

The other is to bombastically announce the first solution that comes to mind. That is what politicians and some business leaders do. That’s how we get social media bans, EU proposals for backdoors on every encrypted service, and the recently proposed ban on VPNs in Denmark. These are poorly thought-out solutions that will cause harm without addressing the underlying problem.

Our brains have a strong availability bias, leading us to jump on the first solution that comes to mind. In order to make good decisions, we need to use a framework. Design Thinking is an example of a method that forces us to use the first approach. Don’t just run with a random first idea.

Learning From People, Not From Documents

Implementing AI comes with a critical and often-overlooked problem, one that Raman Shah just reminded me of in another discussion: the AI can only learn what is documented.

When you teach an AI to perform a process in your organization, you train it on all the internal documents you have. But these describe the RAW – Rules As Written. They do not describe the RAU – Rules As Used.

It takes a lot of care to do process discovery properly. You need to have a human being interview the people doing the work to find out how business is actually done.

Work-to-rule is a classic form of industrial action where workers do exactly what they’re told in order to stop production without going on strike. If you train an AI on nothing but your documents, you are in effect asking it to work to rule.

How Could That Happen?

How could that happen? We always ask that question after a scandal or disaster, because everything that went wrong seems so obvious in hindsight.

Here in Denmark, one of the news stories today is about a sperm donor who turned out to have a potentially cancer-causing mutation. Firstly, it should have been detected before his sperm was accepted. Secondly, one person should never have been allowed to father 197 children across Europe. But the system to limit harm was implemented piecemeal, and apparently nobody verified that sperm banks adhered to national laws or their own rules.

When you implement an IT system, things can go wrong. But the people building the system cannot see where. All experience shows that builders are unable to see beyond the “happy path” in which the system delivers the benefits it was designed for. We try to compensate for that with separate testers who did not write a line of code. But that only covers programming errors. Most significant failures involve the processes and people around the IT system.

Do you have an imaginative Red Team that will challenge both the system and the processes around it?

Business Knowledge Beats Technical Skill

Business knowledge is more valuable than technical skill. I see again and again that organizations get rid of experienced IT people because they don’t have the latest buzzwords on their CVs. They are replaced with offshore resources or eager young things who tick all the boxes and cost less.

That is a misguided strategy. Business knowledge takes a long time to accumulate because it is not, and cannot be, taught. Someone who has been in the organization for years knows how the business works. That gives them the context to interpret requirements and build software that matches reality. A new hire without that knowledge can only build what is written in the spec, which rarely matches what the business needs.

Your technology changes much faster than your business. If you keep hiring new people every time you decide to switch to the latest and greatest technology (AI, anyone?), your people will never have more than 2 or 3 years of business knowledge.

If you need to change technology, it is a much better approach to hire one expert on the new tech and have that person teach your experienced employees. Don’t throw away decades of experience. You’ll miss it when it’s gone.