You might remember that when Google was formed, it pledged not to be evil, an attempt to showcase how different it would be from Microsoft. Now, two decades later, governments across the world are looking at Google as if it is evil, and Microsoft isn’t on the list of targeted companies, which includes Facebook and Apple. This week Meredith Whittaker followed Claire Stapleton out of the company, claiming abuse and retaliation for calling attention to Google’s bad practices and for helping organize a 20,000-person walkout at the firm.
Now, I doubt anyone organizing a mass walkout would survive in their job at any major company, but this showcases a firm focused on eliminating those who speak out against bad behavior, not those committing it. If you read the book Brotopia, Google was prominently called out as one of the Silicon Valley firms abusing women at scale, and these departures certainly suggest that those problems, despite Google’s rhetoric, aren’t going away any time soon.
Google’s ethics problem exists to some degree in every company and can become pronounced with power, particularly as people with low empathy rise to the top. So, understanding the problem is important not only if you plan to do business with ethically challenged firms like Google but also if you want to avoid becoming one. And, given that ethically challenged firms and executives often end badly, identifying and addressing this problem early could be very important to both your career and your stock-option-funded retirement.
Let’s explore that this week.
The Problem with Ethics
Ethics aren’t absolute; they are the “moral principles that govern a person’s behavior” or, in this case, a company’s behavior. That is open-ended, and your moral principles would likely be very different from mine. Ethics go to the core of right and wrong, good and evil, and they are tied pretty tightly to your frame of reference.
Given there is no universal measure for what a company’s moral principles should be, we tend to fall back on laws and regulations to govern corporate behavior. But these too are imperfect, and they vary so much between states, countries, and governments, both in written law and in practice, that there is no true gold standard for corporate behavior.
I think this is why so many firms end up in trouble: they tend to confuse what they have the power to do with what they should do, often putting the firm at high but avoidable risk. Some of this is due to decisions being delegated down to people who don’t understand, or just can’t see, the consequences of those decisions; some has to do with executives who have a really poor perspective on what constitutes an acceptable risk.
Efforts to assure ethical behavior have had mixed results over time. Internal audit, which once functioned as an internal police force ensuring rule compliance, has been gutted over time and largely relegated to creating the impression of compliance. Risk managers were a disaster because they had responsibility but no authority to drive better decisions (“scapegoat” was their well-earned nickname).
We’ve had whistleblowers, but these people tend to lose their careers, and the penalties for being a whistleblower almost always overwhelm the benefits. Even a monetary reward often comes in exchange for a lost career.
At the heart of this is what we call confirmation bias: we favor information that agrees with a position already taken, which means we tend to get rid of those who point out problems or otherwise disagree with something we want to do.
In effect, what is needed is a tool that speaks the truth, provides a reasonable and mostly accurate assessment of the balanced benefits and likely consequences of an action, and can place the result within a board-approved ethical framework.
AI To The Rescue
In theory, this is what an AI could do. It can be given secure access to the wealth of information the company holds, including the historical outcomes of decisions made across a broad, multi-company ecosystem. From that information it can mathematically derive both probabilities and potential outcomes, providing a far more accurate risk assessment than the “gut” approach most executives seem to use.
It isn’t worried about its job or career, so it won’t be biased against providing the answers it derives. It can be designed to update itself as new information arrives over time and, unless programmed in, it doesn’t have confirmation bias, so it can change its conclusions rapidly as new information becomes available.
Oh, and based on what would be a rather extensive database of decisions, it likely could also recommend a different decision with a similar outcome but without the risk associated with the questionable decision an executive was contemplating.
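To make this concrete, here is a toy sketch of the idea described above: estimating the failure probability and expected value of a contemplated decision from historical outcomes, and suggesting an alternative with comparable benefit but a lower observed failure rate. All names, the frequency-based probability model, and the 0.8 benefit-ratio threshold are my own illustrative assumptions, not anything Google has built.

```python
from dataclasses import dataclass

@dataclass
class Precedent:
    """A historical decision: its estimated upside, its downside if it
    went wrong, and whether it actually ended badly. (Hypothetical schema.)"""
    action: str
    benefit: float      # estimated upside, e.g. revenue in $M
    penalty: float      # downside if it goes wrong: fines, lost trust
    went_wrong: bool    # did this instance end badly?

def risk_assessment(action, precedents):
    """Frequency-based estimate of P(bad outcome) among similar past
    decisions, plus the probability-weighted expected value."""
    similar = [p for p in precedents if p.action == action]
    if not similar:
        return None  # no data: the tool should say "unknown", not guess
    p_bad = sum(p.went_wrong for p in similar) / len(similar)
    avg_benefit = sum(p.benefit for p in similar) / len(similar)
    avg_penalty = sum(p.penalty for p in similar) / len(similar)
    return {
        "p_bad": p_bad,
        "avg_benefit": avg_benefit,
        "expected_value": avg_benefit - p_bad * avg_penalty,
    }

def safer_alternative(action, precedents, min_benefit_ratio=0.8):
    """Suggest a different action with comparable historical benefit
    (within min_benefit_ratio of the contemplated one) but a lower
    observed failure rate; pick the best expected value among those."""
    current = risk_assessment(action, precedents)
    if current is None:
        return None
    candidates = []
    for alt in {p.action for p in precedents} - {action}:
        a = risk_assessment(alt, precedents)
        if (a["p_bad"] < current["p_bad"]
                and a["avg_benefit"] >= min_benefit_ratio * current["avg_benefit"]):
            candidates.append((a["expected_value"], alt))
    return max(candidates)[1] if candidates else None
```

A real system would replace the frequency counts with a model that generalizes across decisions rather than requiring exact matches, but even this sketch shows the key property: the answer is derived from recorded outcomes, not from anyone's career incentives.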
Now the only remaining problem would be making use of the AI mandatory, because the one thing it can’t fix is a decision maker’s tendency to avoid tools that disagree with the decision he or she wants to make.
I should add that an executive who consistently gets results advising against the moves he or she is considering should probably be managed out of the company as too risky for it. That, in turn, suggests the AI would need protections to ensure an executive couldn’t intentionally corrupt it with bad data to avoid being fired.
At the heart of ethics problems is a lack of common standards for what is and is not ethical. Might often makes right from an operational standpoint but, nearly as often, results in catastrophic outcomes for firms once governments find out. Google is developing powerful AI that could help it navigate these issues and provide a decision matrix that should keep the firm from constantly aggravating governments until it is broken up.
It is ironic that it is within Google’s power to use AI to create both a better Google and a better world. But, in its apparent quest to be branded the most ironic company in history, I doubt Google will even realize this is something it needs to do, let alone do it. Others, however, can learn from Google’s bad example and realize that executives need an unbiased resource to help them consistently make decisions where the relative rewards exceed the relative risks. Just because you don’t understand a risk doesn’t mean it doesn’t exist, and AI could help assure that your decisions don’t become career-ending. There are plenty of times when all of us could use an unbiased source of advice while pondering a questionable move.