AI Will Not Destroy Humanity

AI doesn’t pose an extinction risk. And it has already created brand new jobs in the catastrophizing industry.

The only reason AI industry leaders like Sam Altman and Demis Hassabis jump on that bandwagon is to encourage more government red tape. If you are a powerful incumbent, it makes sense to ask for as many constraints on your industry as possible. The EU, ever happy to regulate industries originating elsewhere, is delighted to oblige. With compliance departments numbering in the thousands, these massive organizations can handle any amount of regulation thrown at them. But a lean startup will get regulated out of business.

The most fascinating part of AI is local, small-scale AI. Today's massive, centralized AI runs in enormous data centers. But since LLaMA escaped from the Facebook lab, tinkerers and hobbyists have been running Large Language Models on their local computers. OpenAI, Microsoft, and Google, of course, would like such small competitors to be regulated away.

Did You Hear the One About the Gullible Lawyer?

You need the best arguments to win a discussion, get a project approved, or prevail in a court case. But if you are short of preparation time, you might take a shortcut like the New York lawyer who asked ChatGPT for help.

Ever willing to help, ChatGPT offered six cases supporting the lawyer's argument. Unfortunately, they were entirely made up. That might work in a marketing blog post, but it does not hold up in court. The gullible lawyer claims he did not know that ChatGPT might hallucinate but is, of course, facing sanctions for lying to the court.

IT professionals know that ChatGPT cannot be trusted to answer truthfully. It is not much of a problem for a programmer because the compiler or the unit tests will catch defective answers. But the rest of the world doesn’t know.

Now is the time to remind everyone in the organization of your company policy on using ChatGPT and its ilk (you do have such a policy, right?). Tell the story of the gullible New York lawyer to make the point clear.

Are You Afraid Robots Will Take Your Job?

Robots are not taking our jobs. It's a good story for eye-catching headlines and clicks, but the numbers do not support it in any way. Michael Handel of the U.S. Bureau of Labor Statistics has published a paper in which he carefully analyzes job losses across many professions. He finds that job losses follow long-term trends, with no hint of the dramatic changes predicted by people who make a living from forecasting that the sky will shortly fall.

That matches what I see in the organizations I work with. Traditional IT projects regularly fail, and AI projects have an even higher failure rate. They might deliver something, but too often, it turns out to be impossible to move an AI experiment out of the lab and into productive use.

Additionally, in the cases where AI does provide real business benefits, it handles one specific task and not a whole job. All of our AI today is very narrowly trained for one task. That frees up workers to do more useful things with their time, making them more productive.

For example, the illustration for this post was made by me and the Midjourney AI. I told it to illustrate "the robots are not taking our jobs," and we ran a few iterations, with me selecting the best of its suggestions, until we arrived at this image.

Are You Monitoring Your Automated Systems?

It is hard to anticipate the real world. I’m sure the wet concrete on the road in Japan looked just like solid ground to the delivery robot. Consequently, it happily trundled into the urban swamp and got stuck. The story does not report whether the delivery company managed to get their robot out before the concrete hardened…

This is why you need careful monitoring of all the fully automated systems you deploy. The first line of defense is automated metrics with defined normal intervals. For a delivery robot, the distance covered in a minute should be greater than zero and less than 270 meters (if you have limited the robot to e.g. 10 mph). The second line of defense is the humans who evaluate the alarms and take appropriate action. The third line of defense is the developers who fix the software and the alarms.
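As a minimal sketch of that first line of defense, a metric check might look like this. The thresholds and the function name are invented for illustration; 270 meters per minute corresponds roughly to 10 mph.

```python
# First line of defense: compare a metric against its normal interval.
# The threshold below is a hypothetical example, not a value from any
# real delivery company.

MAX_METERS_PER_MINUTE = 270  # roughly 10 mph, expressed per minute


def distance_alarm(meters_last_minute):
    """Return an alarm message if the metric leaves its normal interval,
    or None when everything looks fine."""
    if meters_last_minute <= 0:
        return "ALARM: robot is not moving - it may be stuck"
    if meters_last_minute > MAX_METERS_PER_MINUTE:
        return "ALARM: robot exceeds its speed limit - sensor error?"
    return None  # within the normal interval; no human needs to look
```

The second line of defense would route these messages to a human on call rather than merely logging them.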

Too many automated systems are simply unleashed and depend on customers to detect that something is wrong and complain. You want to figure out you have a problem before the image of your robot encased in concrete starts trending on Twitter.

Are Your AI Projects Legal?

Because the IT industry has failed to agree on any meaningful guidelines for AI usage, regulators are now stepping in. To get the attention of the global giants, the proposed EU regulation threatens GDPR-style fines of up to 6% of global sales. The rules outlaw some uses, like real-time facial recognition, and place strict limits on others. For "high-risk" use by police and courts, companies must provide a risk assessment and documentation of how the system arrives at its recommendations.

In the US, the Federal Trade Commission has also just weighed in. In a blog post, they clarified that selling or using biased AI might constitute “unfair or deceptive practice” and be subject to fines.

As a CIO or CTO, check who is responsible for ensuring your AI projects adhere to all relevant regulations. Individual projects cannot be expected to keep up with rapidly developing global regulations. If you have not appointed someone to keep watch over your AI projects, the blame will end up on your desk when your organization is found to violate AI regulations you weren't even aware of.

Missing AI Results

It turns out AI was not about to cure cancer. There was no shortage of hyperbole when IBM’s Watson AI beat the best humans at Jeopardy, but IBM has been unable to create a viable business from their AI prowess. Now their AI-powered health department is for sale if anybody wants a slightly used AI with one careful owner.

AI has proven its worth in many places, including healthcare. But the successes have come in narrow, well-defined areas like examining X-rays or flagging possibly fraudulent insurance claims. Simply throwing a bunch of data scientists and an AI at a problem does not work.

If you have AI projects like Watson that have not delivered the results they promised, you can re-scope them to try to harvest some value from solving a smaller, more well-defined problem. Or you can shut them down. The age of unquestioned spending on AI is over.

Use Real Intelligence Instead of the Artificial Kind

If you can leverage real user intelligence in your systems instead of the artificial kind, you get a better result with less effort. But it takes some intelligent thinking by your developers to get to that point.

The new Microsoft Edge (version 88) that rolls out soon has crowdsourced the difficult decision of which browser notifications to allow. Users are tired of constant "Allow this website to send you notifications?" prompts, but simply making all of them less obtrusive didn't work. Microsoft tried that first with "quiet" notification requests, but that meant many users missed notifications they did want. Instead, the upcoming version will use the decisions of all Edge users to decide which notification requests to show. If everybody else has refused notifications from a specific website, the Edge infrastructure learns that and defaults to not showing notification requests from that site.
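Microsoft has not published how Edge aggregates those decisions, but the idea can be sketched roughly like this. All names and thresholds below are assumptions for illustration, not Edge's actual implementation.

```python
# Hypothetical sketch of crowdsourcing notification-prompt decisions:
# if nearly everyone refuses a site's prompts, stop showing them.
from collections import defaultdict


class NotificationCrowd:
    def __init__(self, min_votes=100, block_ratio=0.95):
        self.min_votes = min_votes      # don't judge a site on too little data
        self.block_ratio = block_ratio  # refusal fraction that triggers quiet mode
        self.votes = defaultdict(lambda: [0, 0])  # site -> [allowed, refused]

    def record(self, site, allowed):
        """Record one user's decision on a site's notification prompt."""
        self.votes[site][0 if allowed else 1] += 1

    def should_prompt(self, site):
        """Decide whether to show the prompt to the next user."""
        allowed, refused = self.votes[site]
        total = allowed + refused
        if total < self.min_votes:
            return True  # not enough data yet: keep showing the prompt
        return refused / total < self.block_ratio
```

The design choice here is the same one the article describes: the hard per-site decision is replaced by a simple aggregate of decisions real users have already made.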

Do you have ways to harvest the decisions your users are already making and use that data to improve your systems? Put your data scientists to work on the challenge of using human intelligence instead of continuing to try to train AIs.

Who Gets to be in the Office?

What happens if more people want to be in the office than can safely be accommodated? With coronavirus distancing rules, you can use less of your space. As employees get work-at-home jitters and want to come into the office to get away from the kids and congregate at the coffee machine, you might run out of space.

A New York startup, faced with some of the most expensive office space in the world, had this problem. There are many considerations to balance: Do teams need to work together? Do you want people from different parts of the company to meet? Do you need to give everyone equal visibility in the office?

They decided to build an AI-based algorithm to select who gets one of their coveted office spots. How do you decide who gets to be in the office? That is a leadership decision and not something that should be left to chance.