AI Will Not Destroy Humanity

AI doesn’t pose an extinction risk. And it has already created brand new jobs in the catastrophizing industry.

The only reason AI industry leaders like Sam Altman and Demis Hassabis jump on that bandwagon is to encourage more government red tape. If you are a powerful incumbent, asking for as many constraints on your industry as possible makes sense. The EU, ever happy to regulate industries originating elsewhere, is delighted to oblige. With compliance departments numbering in the thousands, these massive organizations can handle any amount of regulation thrown at them. But a lean startup will get regulated out of business.

The most fascinating part of AI is local, small-scale AI. We currently have massive, centralized AI running in enormous data centers. But since LLaMA escaped from the Facebook lab, tinkerers and hobbyists have been running Large Language Models on their own computers. OpenAI, Microsoft, and Google, of course, would like such small competitors to be regulated away.

Did You Hear the One About the Gullible Lawyer?

You need the best arguments to win a discussion, get a project approved, or win a court case. But if you are short of preparation time, you might take a shortcut, like the New York lawyer who asked ChatGPT for help.

Ever willing to help, ChatGPT offered six cases supporting the lawyer’s argument. Unfortunately, they were entirely made up. That might pass in a marketing blog post, but it does not hold up in court. The gullible lawyer claims he did not know that ChatGPT might be hallucinating but is, of course, facing sanctions for misleading the court.

IT professionals know that ChatGPT cannot be trusted to answer truthfully. It is not much of a problem for a programmer because the compiler or the unit tests will catch defective answers. But the rest of the world doesn’t know.

Now is the time to remind everyone in the organization of your company policy on using ChatGPT and its ilk (you do have such a policy, right?). Tell the story of the gullible New York lawyer to make the point clear.

A Value-Destroying Technical Innovation

The important part is not the technology itself. It is how it interacts with its surroundings.

The big Ethereum upgrade (aka “The Merge”) appears to have been successful from a technical standpoint. But the Ethereum community seems to have focused on the enormous technical challenge of merging the existing Ethereum blockchain with the new proof-of-stake Beacon Chain without stopping either. The problem is that the change from proof-of-work to proof-of-stake may have turned Ether tokens from a currency into a security. When you “stake” your Ether, you earn interest. And suddenly, the Ethereum ecosystem is subject to U.S. Securities and Exchange Commission (SEC) scrutiny. Consequently, Ether is down 26% this week.

You can implement highly advanced technology with enough skill, time, and money. But unless you have someone skeptical think through how your tech will interact with its environment, all the tech wizardry might go unused. It might even destroy value, as the Ethereum Merge did.

You Need an Agile End Result More Than an Agile Process

An agile development process is not important. An agile end result is. Whatever benefit your organization realizes from an agile methodology only helps you during the relatively short development process. But if you build something that can easily be reconfigured and changed, that benefits you for the years or decades that you run the system.

You would think that a digital billboard would be agile. The whole point is that it can display whatever you want. But German advertisers and shops have just discovered that their display screens are far less agile than they look. A new law requires these energy-guzzling billboards to be switched off at night to save electricity. It turns out that the devices were all built on the assumption that they would always be on, and they do not take kindly to store employees simply yanking the power cord on their way home.

To achieve agility in your products and systems, you need to avoid hard-wiring your assumptions into them. The only thing you can safely assume is that everything will eventually have to be changed.
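The difference between a hard-wired assumption and a changeable one can be a few lines of code. A minimal sketch (the configuration keys and schedule values are invented for illustration): instead of baking “the display is always on” into the firmware, the operating window lives in configuration, so a new rule like a nighttime shutoff becomes a settings change rather than a product recall.

```python
from datetime import time

# Hypothetical configuration: the operating window is data, not code,
# so a regulator-imposed curfew is a one-line settings change.
config = {"on_at": time(6, 0), "off_at": time(22, 0)}

def display_should_be_on(now, cfg=config):
    """Return True if the billboard should be lit at wall-clock time `now`."""
    return cfg["on_at"] <= now < cfg["off_at"]

print(display_should_be_on(time(12, 0)))   # midday -> True
print(display_should_be_on(time(23, 30)))  # night  -> False
```

The German billboards hard-wired the always-on assumption in hardware and firmware, which is why a simple legal change turned into an engineering problem.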

The Regulators Are Coming

The Chinese are willing to bring the hammer down. The Americans and the Europeans, not so much. Draconian fines are theoretically possible for data privacy violations in the EU, California, and elsewhere in the West, but they are rarely imposed. In China, on the other hand, ride-hailing giant DiDi was hit with a $1.2 billion fine, close to the cap of 5% of annual revenue. Not that DiDi didn’t deserve it: regulators identified 64 BILLION separate data collection violations.

Are you taking comfort from the puny fines handed out to everyone who is not a vilified American tech giant? Sooner or later, the regulators will start using their full powers. You might as well get on top of any problematic data collection habits now.

Pay Attention to the Rules

It’s probably time to start paying attention to the rules. Inspired by the Silicon Valley ethos of moving fast and breaking things, many organizations have been rolling out technology without much concern for existing rules and regulations.

Uber, Airbnb, and the myriad e-scooter startups are on the back foot all over Europe as the state reasserts its authority. Even in the U.S., regulators have started to put their foot down. Tesla is having to reprogram nearly 54,000 vehicles that were intentionally programmed to roll through stop signs. If the car was driving slowly and could not detect anyone else around an intersection, it would ignore the stop sign and continue through. That is illegal, but humans do it all the time. It turns out the authorities were less than thrilled to see bad human behavior programmed into Tesla’s cars.

We have rules for a reason. Some of them are ridiculous (like the ubiquitous cookie consent banners), but good citizenship includes adhering to the rules until you can persuade the rule-maker to change them. Don’t be like Tesla.

Are your AI Projects Legal?

Because the IT industry has failed to agree on any meaningful guidelines for AI usage, regulators are now stepping in. To get the attention of the global giants, the proposed EU regulation threatens GDPR-style fines of up to 6% of global sales. The rules outlaw some uses, like real-time facial recognition, and place strict limits on others. For “high-risk” uses, such as by police and courts, companies must provide a risk assessment and documentation of how the system arrives at its recommendations.

In the US, the Federal Trade Commission has also weighed in. In a blog post, it clarified that selling or using biased AI might constitute an “unfair or deceptive practice” and be subject to fines.

As a CIO or CTO, check who is responsible for ensuring your AI projects adhere to all relevant regulations. Individual project teams cannot be expected to keep up with rapidly developing global regulations. If you have not appointed someone to keep watch over your AI projects, the blame will end up on your desk when your organization is found to violate AI rules you weren’t even aware of.