Blocking AI is an Unwinnable Battle

Using AI is not cheating. It is a way to become more productive. You pay your employees because they perform tasks that create value for the organization. So it makes sense to let them use the best tools available to do their jobs.

Just like some schools are trying to prevent students from using AI, some companies are trying to outlaw AI. It won’t work. Research shows that 47% of people who used AI tools experienced increased job satisfaction, and 78% were more productive. You can’t fight such dramatic numbers with a blanket prohibition. If you try, your employees will use AI on their phones or in an incognito browser session while working from home.

By all means create rules about how and where employees can use AI, and explain them thoroughly. But trying to ban AI is futile.

Business Knowledge Beats Technical Skill

Most of the value of an IT developer comes from their knowledge of the business. Their knowledge of specific programming languages or tools comes a distant second. With AI-supported development tools like Copilot, the balance skews even further towards business skills.

That’s why I’m appalled every time I see yet another company replacing hundreds of skilled IT professionals. I’ll grant you that some organizations have too many people and might need to trim their headcount. But often, organizations are trying to kickstart a digital transformation by replacing old hands with a crowd of bright-eyed young things with the latest buzzwords on their CV.

Getting a new crew with better tools and techniques means you can build software faster. But by getting rid of the experienced people, you lose your ability to build the right software. Moving slowly in the right direction beats running fast in the wrong direction.

Show It, Don’t Just Talk About It

Do you still remember the world before ChatGPT? That was one year ago. It reached one million users just five days after its launch on November 30, 2022, and became the fastest-growing consumer application in history.

The advances in Large Language Models had been discussed by researchers for some time, but the general public didn’t understand the implications. That changed with the WTF epiphany everyone had when they interacted with the product for the first time.

To get buy-in for new products or digitalization projects, you must give your audience and decision-makers a functioning prototype to generate enthusiasm. A spreadsheet showing a solid business case only appeals to the brain’s left hemisphere. But a prototype or Minimum Viable Product (MVP) engages the emotions in the right side of the brain. Positive feelings and enthusiasm get complex new projects started and carry them past the inevitable hiccups along the way.

You cannot build such MVPs quickly without a Rapid Application Development tool in your toolbox. Without one, you are left with only spreadsheets and the annual budgeting process to get new things off the ground. Organizations that can build rapid prototypes will seize opportunities and overtake those that can’t.

AI is Not Coming for Your Job

Unless you write corporate mission statements, AI is not coming for your job. Generative AI like ChatGPT works by repeatedly appending the most likely next word. That makes an AI-written text a bland average of all the texts the model was trained on. It is unlikely to be thought-provoking or even useful.
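To see why the output gravitates toward the average, here is a minimal sketch of the mechanism in Python. The vocabulary and probabilities are invented for illustration; a real LLM predicts over tens of thousands of tokens with a neural network, but the pick-the-most-likely loop is the same idea.

```python
# Toy sketch of greedy next-word generation.
# A hand-made bigram table stands in for the neural network.
bigram_probs = {
    "the":      {"company": 0.40, "team": 0.35, "robot": 0.25},
    "company":  {"delivers": 0.60, "innovates": 0.40},
    "delivers": {"value": 0.90, "pizza": 0.10},
    "value":    {},
}

def generate(start: str, max_words: int = 10) -> str:
    words = [start]
    for _ in range(max_words):
        candidates = bigram_probs.get(words[-1], {})
        if not candidates:
            break
        # Always append the most likely next word -- hence the bland average.
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(generate("the"))  # -> "the company delivers value"
```

Every run takes the safest branch at every step, which is exactly why the result reads like a corporate mission statement.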

I was reminded of how useless an AI-generated text is when LinkedIn invited me to participate in a “collaborative article.” The AI generates a text on a subject, and I am supposed to add a real-life story or lesson next to it. Unfortunately, the AI text is a collection of trivial platitudes. LinkedIn asked me to rate the article, and I immediately clicked “It’s not so great” (because there was no lower rating). Sadly, the feedback options did not include “Your AI text adds no value.”

The striking writers in Hollywood want guarantees from the studios that they won’t be replaced with AI. They need not worry. A script written by AI will be mind-numbingly boring. What AI might do for the film and TV industry is take over tedious housekeeping tasks like ensuring continuity – was the blood on his left or right jacket sleeve? But it won’t write the next hit show or movie.

The right way to use AI in its current state is to use it deductively – to analyze stuff. Programmers who inherit a huge pile of undocumented code benefit from having ChatGPT or its siblings explain the code. Using AI inductively to generate text might be fun, but it doesn’t create any value.
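As a sketch of that deductive workflow, here is how you might ask a model to explain an inherited function. This assumes the `openai` Python package (v1 interface) and an `OPENAI_API_KEY` environment variable; the model name and the legacy snippet are placeholders.

```python
# Sketch: have an LLM explain a piece of undocumented legacy code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

legacy_code = """
def f(xs, rs):
    return [x for x in xs if not any(x % r == 0 for r in rs)]
"""

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system",
         "content": "You are a senior developer documenting legacy code."},
        {"role": "user",
         "content": f"Explain what this function does:\n{legacy_code}"},
    ],
)
print(response.choices[0].message.content)
```

Remember, though, that whatever you paste may leak (more on that below), so check proprietary code against your AI policy first.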

Would You Notice the Quality of Your AI Dropping?

You know that ChatGPT is getting more politically correct. But did you know that it is also getting dumber? Researchers have repeatedly asked it to perform the same tasks, like generating code to solve math problems. In March 2023, GPT-4 could generate functioning code 50% of the time. By June, that ability had dropped to 10%. If you’re not paying, you are stuck with GPT-3.5. That version managed 20% correct code in March but was down to approximately zero in June 2023.

This phenomenon is known to AI researchers as “drift.” It happens when you don’t like the answers the machine gives and take the shortcut of tweaking the parameters instead of expensively re-training your model on a more appropriate data set. Twisting the arm of an AI to generate more socially acceptable answers has been shown to have unpredictable and sometimes negative consequences.

If you are using any AI-based services, do you know what the engine behind the solution is? If you ask, and your vendor is willing to tell you, you will find that most SaaS AI solutions today are running ChatGPT with a thin veneer of fine-tuning. Unless you continually test your AI solution with a suite of standard tests, you will never notice that the quality of your AI solution has gone down the drain because OpenAI engineers are pursuing the goal of not offending anyone.
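Such a test suite does not have to be elaborate. Here is a minimal sketch: run the same fixed prompts on a schedule and track the pass rate over time. The `ask_model` function is a placeholder for whatever API your vendor exposes, and the prompts and checks are merely illustrative.

```python
# Drift-detection sketch: fixed prompts, deterministic checks, a pass rate.
from datetime import date

def ask_model(prompt: str) -> str:
    # Placeholder: call your AI vendor's API here.
    raise NotImplementedError

# Each test pairs a fixed prompt with a cheap, deterministic check.
TESTS = [
    ("What is 17 * 23? Answer with the number only.",
     lambda answer: "391" in answer),
    ("Is 17077 a prime number? Answer yes or no.",
     lambda answer: "yes" in answer.lower()),
]

def run_suite() -> float:
    passed = sum(check(ask_model(prompt)) for prompt, check in TESTS)
    rate = passed / len(TESTS)
    print(f"{date.today()}: {passed}/{len(TESTS)} passed ({rate:.0%})")
    return rate
```

Run it daily from cron or your CI pipeline, store the rates, and alert when they fall below the baseline you measured when the solution was accepted.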

Do Your Employees Follow Your AI Guidelines?

Unless you override it, your organization’s policy for AI-driven tools is “anything goes.” That’s because your developers want to get their job done as quickly as possible. If that involves having GitHub Copilot write part of the code or copying a code block into ChatGPT for debugging help, so be it.

If you don’t have secrets, maybe that’s fine with you. But even though OpenAI is not training ChatGPT on user prompts, they have not been very diligent about keeping them safe. You should assume that everything your developers paste into ChatGPT will eventually leak.

That includes your data. AI tools are very good at data cleaning and visualization. Your Data Scientists are surely pasting data into ChatGPT and getting back fully functional Python code to run in a Jupyter Notebook. Unless you tell them not to.
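One workable guideline is “describe the data, never paste it.” A small helper makes that easy to follow; this is a sketch with invented column names, not a complete anonymization solution.

```python
# Sketch: share a DataFrame's shape with an LLM without sharing its rows.
import pandas as pd

def describe_for_llm(df: pd.DataFrame) -> str:
    """Build a prompt-safe description: row count and schema, no values."""
    lines = [f"A pandas DataFrame with {len(df)} rows and these columns:"]
    for name, dtype in df.dtypes.items():
        lines.append(f"- {name} ({dtype})")
    return "\n".join(lines)

# Invented example data
df = pd.DataFrame({"customer_id": [101, 102], "revenue": [9.5, 12.0]})
print(describe_for_llm(df))
# Paste the description into ChatGPT and ask for cleaning code;
# the model never sees a single actual customer record.
```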

If I asked one of your developers or Data Scientists about your policy on AI tools, would they know it? And would they follow the rules, or would they take the 10x or 100x productivity boost?

AI Will Not Destroy Humanity

AI doesn’t pose an extinction risk. But it has already created brand-new jobs in the catastrophizing industry.

The only reason AI industry leaders like Sam Altman and Demis Hassabis jump on that bandwagon is to encourage more government red tape. If you are a powerful incumbent, asking for as many constraints on your industry as possible makes sense. The EU, ever happy to regulate industries originating elsewhere, is delighted to oblige. Massive incumbents with compliance departments numbering in the thousands can handle any amount of regulation thrown at them. But a lean startup will get regulated out of business.

The most fascinating part of AI is local, small-scale AI. We currently have massive, centralized AI running in enormous data centers. But since LLaMA escaped from the Facebook lab, tinkerers and hobbyists have been running Large Language Models on their own computers. No wonder OpenAI, Microsoft, and Google would like such small competitors to be regulated away.

Did You Hear the One About the Gullible Lawyer?

You need the best arguments to win a discussion, get a project approved, or win a court case. But if you are short of preparation time, you might take a shortcut like the New York lawyer who asked ChatGPT for help.

Ever willing to help, ChatGPT offered six cases supporting the lawyer’s argument. Unfortunately, they were entirely made up. That might pass in a marketing blog post, but it does not hold up in court. The gullible lawyer claims he did not know that ChatGPT might hallucinate but is, of course, facing sanctions for lying to the court.

IT professionals know that ChatGPT cannot be trusted to answer truthfully. It is not much of a problem for a programmer because the compiler or the unit tests will catch defective answers. But the rest of the world doesn’t know.

Now is the time to remind everyone in the organization of your company policy on using ChatGPT and its ilk (you do have such a policy, right?). Tell the story of the gullible New York lawyer to make the point clear.

Are You Afraid Robots Will Take Your Job?

Robots are not taking our jobs. It’s a good story to create eye-catching headlines and generate clicks, but the numbers do not support it in any way. Michael Handel of the U.S. Bureau of Labor Statistics has published a paper in which he carefully analyzes job losses across many professions. He finds that job losses follow long-term trends, with no hint of the dramatic changes predicted by people who make a living from forecasting that the sky will shortly fall.

That matches what I see in the organizations I work with. Traditional IT projects regularly fail, and AI projects have an even higher failure rate. They might deliver something, but too often, it turns out to be impossible to move an AI experiment out of the lab and into productive use.

Additionally, in the cases where AI does provide real business benefits, it handles one specific task and not a whole job. All of our AI today is very narrowly trained for one task. That frees up workers to do more useful things with their time, making them more productive.

For example, the illustration for this post was made by me and the Midjourney AI. I told it to illustrate “the robots are not taking our jobs,” and we ran a few iterations where I selected the best of its suggestions until we arrived at this image.

Are You Monitoring Your Automated Systems?

It is hard to anticipate the real world. I’m sure the wet concrete on the road in Japan looked just like solid ground to the delivery robot. Consequently, it happily trundled into the urban swamp and got stuck. The story does not report whether the delivery company managed to get their robot out before the concrete hardened…

This is why you need careful monitoring of all the fully automated systems you deploy. The first line of defense is automated metrics with defined normal ranges. For a delivery robot, the distance covered in a minute should be greater than zero and less than 270 meters (if you have limited the robot to e.g. 10 mph, which is about 268 meters per minute). The second line of defense consists of humans who evaluate the alarms and take appropriate action. The third line of defense is the developers who fix the software and tune the alarms.
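As a sketch, the first two lines of defense for the delivery-robot example might look like this in Python; the thresholds come from the 10 mph example above, and `page_operator` is a placeholder for your alerting system.

```python
# First line of defense: check a metric against its normal range.
# A robot limited to 10 mph covers at most ~268 meters per minute,
# and zero movement means it is stuck.
MIN_METERS_PER_MINUTE = 0    # exclusive: no movement at all means stuck
MAX_METERS_PER_MINUTE = 270  # just above the 10 mph physical limit

def page_operator(robot_id: str, value: float) -> None:
    # Placeholder: wire this to your real paging/alerting system.
    print(f"ALERT: robot {robot_id} moved {value} m in the last minute")

def check_robot(robot_id: str, meters_last_minute: float) -> None:
    if not MIN_METERS_PER_MINUTE < meters_last_minute < MAX_METERS_PER_MINUTE:
        # Second line of defense: a human evaluates the alarm and acts.
        page_operator(robot_id, meters_last_minute)

check_robot("robot-42", 0.0)  # stuck in wet concrete -> fires an alert
```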

Too many automated systems are simply unleashed, depending on customers to detect that something is wrong and complain. You want to discover that you have a problem before the image of your robot encased in concrete starts trending on Twitter.