Gaming the Metrics

Beneficial Intelligence is out. This week: Gaming the metrics. You need measures to manage, but when measures are used to praise or blame, employees will optimize for them. You need carefully paired metrics in order to avoid people gaming them.

Amazon tried and failed. They have an app that records driving behavior, but they also require a lot of packages delivered in a short time. Delivery companies are instructing their drivers to game the metrics by driving carefully at first, then switching off the app and driving like the devil.

As an IT leader, getting your measurements right is one of the most important parts of managing your IT organization. If you do not carefully establish paired metrics, you can be sure your metrics are being gamed.
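To make the idea concrete, here is a minimal sketch (my own invented example, not from the episode) of a paired metric: a speed number such as closed tickets is only ever reported next to a quality number such as the reopen rate, so gaming one shows up in the other.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    assignee: str
    closed: bool
    reopened: bool  # closed once, then reopened by an unhappy user

def paired_metrics(tickets: list[Ticket]) -> dict[str, tuple[int, float]]:
    """Report closed-ticket count together with the reopen rate per assignee."""
    result = {}
    for person in {t.assignee for t in tickets}:
        closed = [t for t in tickets if t.assignee == person and t.closed]
        reopened = [t for t in closed if t.reopened]
        reopen_rate = len(reopened) / len(closed) if closed else 0.0
        result[person] = (len(closed), reopen_rate)
    return result

# Publishing the pair makes it visible when someone boosts the speed
# metric by closing tickets that were never actually resolved.
```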

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.

Accidental Publication

Beneficial Intelligence is out. This week: Accidental publication. Some data leaks are IT’s own fault. We should be able to prevent developers and users from leaking our data through unsecured cloud storage. We should not roll out systems that leak data if the user edits the URL or views the web page source. Are you sure every system your organization rolls out has been subject to a security review? If not, you might be the next organization to find that you have accidentally published confidential data.
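One common way a system leaks data when the user edits the URL is a missing server-side authorization check on the record ID. Here is a minimal sketch, assuming a hypothetical Flask-style endpoint and an invented in-memory document store, of the check a security review should look for:

```python
from dataclasses import dataclass
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # needed for session support

@dataclass
class Document:
    id: int
    owner_id: int
    title: str

# Hypothetical in-memory store standing in for a real database.
DOCUMENTS = {
    1: Document(id=1, owner_id=42, title="Quarterly report"),
    2: Document(id=2, owner_id=7, title="Salary review"),
}

@app.get("/documents/<int:doc_id>")
def get_document(doc_id: int):
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        abort(404)
    # The crucial server-side check: editing the number in the URL
    # must not return another user's document.
    if doc.owner_id != session.get("user_id"):
        abort(403)
    return jsonify({"id": doc.id, "title": doc.title})
```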

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.

Irrational Optimism

Beneficial Intelligence is out. This week: Irrational optimism. IT people are too optimistic about schedules and business cases. It is a natural consequence of our ability to build something from nothing. As a CIO or CTO, you need to make sure you have some pragmatic pessimists to point out the things that might go wrong. Because IT is too optimistic and Legal & Compliance is too cautious, IT organizations often turn to outsiders like me.

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.

Risk Aversion

In this episode of Beneficial Intelligence, I discuss risk aversion. The U.S. has stopped distributing the Johnson & Johnson vaccine. It has been given to more than 7 million people, and there have been six reported cases of blood clotting. That is not risk management, that is risk aversion.

There is a classic short story from 1911 by Stephen Leacock called “The Man in Asbestos.” In it, the narrator travels to the future to find a drab and risk-averse society where aging has been eliminated together with all disease. People can only die from accidents, which is why everybody wears fire-resistant asbestos clothes, railroads and cars are outlawed, and society becomes completely stagnant.

We are moving in that direction. Large organizations have departments of innovation prevention, often called compliance, risk management, or QA. It takes leadership to look at the larger benefit and overrule their objections. Smaller organizations can instead spend their leadership time on innovation and growth.

As an IT leader, it is your job to make sure your organization doesn’t get paralyzed by risk aversion.

User Experience Disasters

This week’s episode of my podcast Beneficial Intelligence is about User Experience disasters. Danes consistently rank among the happiest people in the world, but I can tell you for sure that it is not the public sector IT we use that makes us happy. We have a very expensive welfare state financed with very high taxes, but all that money does not buy us a good user experience.

Good User Experience (UX) is not expensive, but it does require that you can put yourself in the user’s place and that you talk to users. That is a separate IT specialty, and many teams try to do without it. It doesn’t end well. Systems with bad UX do not deliver the expected business value, and sometimes are not used at all. A system that is functionally OK but that the users can’t or won’t use is known as a user experience disaster.

We have a web application for booking coronavirus testing here in Denmark. First you choose a site, then you choose a date, and then you are told there are no times available at that site on that date. If a UX professional had been involved, the site would simply show the first available time at all the testing centers near you. We now also have a coronavirus vaccination booking site. It is just as bad.
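To make that concrete, here is a minimal sketch (an invented data model, not the actual Danish booking system) of the query a UX professional would ask for: search every nearby center and show the earliest free slot, instead of making the user guess a site and a date.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Slot:
    center: str
    time: datetime
    free: bool

def earliest_available(slots: list[Slot], nearby: set[str]) -> Optional[Slot]:
    """Return the first free slot at any nearby center, or None if fully booked."""
    candidates = [s for s in slots if s.free and s.center in nearby]
    return min(candidates, key=lambda s: s.time) if candidates else None

# The user only says where they are; the system does the searching.
slots = [
    Slot("Copenhagen North", datetime(2021, 4, 20, 14, 0), free=False),
    Slot("Copenhagen South", datetime(2021, 4, 20, 9, 30), free=True),
    Slot("Frederiksberg", datetime(2021, 4, 21, 8, 0), free=True),
]
print(earliest_available(slots, {"Copenhagen South", "Frederiksberg"}))
```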

As CIO or CTO, you are responsible for some systems that offer users a bad experience. To find these, look at usage statistics. If you are not gathering usage data, you need to start doing so. If systems are under-utilized, the cause is most often a UX issue. Sometimes it is easy to fix. Sometimes it is hard to fix. But IT systems that are not used provide zero business value.
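As a sketch of what gathering usage statistics can look like at its simplest (invented event format and thresholds), counting distinct users per system and comparing that to the number of licensed users is enough to flag candidates for a UX review:

```python
from collections import Counter

# Hypothetical usage events, e.g. extracted from access logs: (system, user).
events = [
    ("expense-app", "u1"), ("expense-app", "u2"), ("expense-app", "u1"),
    ("travel-booking", "u3"),
]
licensed_users = {"expense-app": 500, "travel-booking": 450}

def underutilized(events, licensed, threshold=0.2):
    """Flag systems where fewer than `threshold` of licensed users ever show up."""
    active = Counter(system for system, user in set(events))  # distinct users per system
    return [s for s, total in licensed.items() if active[s] / total < threshold]

print(underutilized(events, licensed_users))  # both systems are flagged here
```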

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.

Who is Listening?

Clubhouse is apparently fairly leaky. It bills itself as an exclusive new form of social media and is iPhone-only and invitation-only. However, that doesn’t mean outsiders can’t listen in. A hacker just proved as much by accessing several supposedly private audio streams. Additionally, all of their back-end infrastructure is located in China, letting Chinese authorities listen in as well.

There are very few services that are actually secure. We used to assume that our conversations were private, but that assumption rarely holds. A US school board was bad-mouthing parents on a Zoom call they thought was private, but the recording was public. They have now all resigned.

If you have confidential information that will be valuable to an adversary, talk about it in a meeting room in the office. And leave your phones outside.

Contingency Plans

Last week’s episode of my podcast Beneficial Intelligence was about contingency plans. Texas was not prepared for the cold, and millions lost power. The disaster could have been avoided had the suggestions from previous outages been implemented. But because it rarely gets very cold in Texas, everybody decided to save money by not preparing their gear for winter. At the same time, Texans have decided to go it alone and not connect their grid to any neighbors.

In all systems, including your IT systems, you can handle risks in two ways: You can reduce the probability of the event occurring, or you can reduce the impact when it occurs. For IT systems, we reduce the probability with redundancy, but we run into Texas-style problems when we believe the claims of vendors and fail to prepare for the scenario when our redundant systems do fail. 

Texas did not reduce the probability, and was not prepared for the impact. Don’t be like Texas.

Contingency Plans

This week’s episode of my podcast Beneficial Intelligence is about contingency plans. Texas was not prepared for the cold, and millions lost power. Amid furious finger-pointing, it turns out that none of the recommendations from the report after the last power outage have been implemented, and suggestions from the report after the outage in 1989 were not implemented either.

As millions of Texans turned up the heat in their uninsulated homes, demand surged. At the same time, wind turbines froze. Then the natural gas wells and pipelines froze. Then the rivers that the nuclear power plants take cooling water from froze. And finally the generators at the coal-powered plants froze. They could burn coal, but not generate electricity. You can build wind turbines that will run in the cold, and you can winterize other equipment with insulation and special winter-capable lubricants. But that is more expensive, and Texas decided to save that money.

The problem could have been solved if Texas could get energy from its neighbors, but it can’t. The US power grid is divided into three parts: Eastern, Western, and Texas. Texas decided to go it alone but apparently chose to ignore the risk.

In all systems, including your IT systems, you can handle risks in two ways: You can reduce the probability of the event occurring, or you can reduce the impact when it occurs. For IT systems, we reduce the probability with redundancy. We have multiple power supplies, multiple internet connections, multiple servers, replicated databases, and mirrored disk drives. But we run into Texas-style problems when we believe the claims of vendors that their ingenious solutions have completely eliminated the risk. That leads to complacency where we do not create contingency plans for what to do if the event does happen.
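As a minimal sketch of that principle (invented function names, not any particular product): redundancy reduces the probability by trying each replica in turn, and the contingency plan is the explicit, pre-decided action that runs when every replica has failed.

```python
def fetch_with_redundancy(replicas, fetch, contingency):
    """Try each redundant replica; if all of them fail, run the contingency plan."""
    for replica in replicas:
        try:
            return fetch(replica)   # reduce probability: more than one source
        except ConnectionError:
            continue                # this replica is down, try the next one
    return contingency()            # reduce impact: a planned degraded mode

def fetch(host):
    raise ConnectionError(f"{host} is unreachable")  # simulate a total outage

# Example: serve yesterday's cached report when both database replicas are down.
print(fetch_with_redundancy(
    ["db-primary", "db-standby"],
    fetch=fetch,
    contingency=lambda: "cached report from the last successful run",
))
```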

Texas did not reduce the probability, and was not prepared for the impact. Don’t be like Texas.

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.

Risk and Reward

Last week’s episode of my podcast Beneficial Intelligence was about risk and reward. Humans are very good at calculating risk and reward. That means we will do what is best for us, even if it is not the best for the company.

It is easy to create incentives for being fast and cheap, but hard to create good incentives for quality. That’s why we use incentives for speed and cost, but rely on QA procedures to ensure quality.

Incentives almost always win over procedures. As CIO, you need to make sure there are also incentives for quality. If not, you can be sure that your procedures will be circumvented, and corners will be cut.

Risk and Reward

This week’s episode of my podcast Beneficial Intelligence is about risks and rewards. Humans are a successful species because we are good at calculating risks and rewards. Similarly, organizations are successful if they are good at calculating the risks they face and the rewards they can gain.

Different people have different risk profiles, and companies also have different appetites for risk. Industries like aerospace and pharmaceuticals face large consequences if something goes wrong and have a low risk tolerance. Hedge funds, on the other hand, take big risks to reap large rewards.

It is easy to create incentives for building things fast and cheap, but it is harder to create incentives that reward quality. Most organizations don’t bother with quality incentives and try to ensure quality through QA processes instead. As Boeing found out, even a strong safety culture does not protect against misaligned incentives.

As an IT leader at any level, it is your job to consider the impact of your incentive structure. If you can figure out a way to incentivize user friendliness, robustness and other quality metrics, you can create a successful IT organization. If you depend on QA processes to counterbalance powerful incentives to ship software, corners will be cut.

Listen here or find “Beneficial Intelligence” wherever you get your podcasts.