At the start of 2023, the internet was ablaze with the idea that Microsoft may have finally, finally found a way to dent Google’s search dominance. Using the power of AI, Microsoft hoped its conversational toolkit would let it leapfrog Google’s now-archaic search methodology, which simply presents users with a list of sources to choose from.
Indeed, web search has barely changed on the consumer side for the best part of twenty years. The internet is rammed with trillions of sources of information on any one topic, and despite that, all users ever really see is the first page of results. With tools like Windows Copilot and Bing Chat AI, users won’t even see that; instead, they’re offered a conversational summary of the “best sources” as determined by Google and Bing — sources whose information is stolen without any real compensation, but that’s a topic for another day.
Regardless of whether AI search assistance is the future, one thing I’m pretty confident about is that it won’t be Microsoft or Bing that mainstreams this technology with everyday consumers. And it’s for reasons long-time Microsoft watchers will be all too familiar with. Microsoft just doesn’t understand, or care to understand, human beings.
Microsoft doesn’t get human beings
The partnership with OpenAI on ChatGPT-powered tools like Windows Copilot and Bing Chat is by no means Microsoft’s first attempt at a feature like this. You may remember Microsoft’s ill-fated chatbot Tay, who, bereft of guardrails, was immediately taught how to be incredibly evil by the internet. It was a PR nightmare for Microsoft, with Taylor Swift threatening to sue over the use of her name, mainstream outlets like the BBC decrying Tay’s tirades, and even my tech-less parents asking me about it. So, it’s probably no surprise that Microsoft has been treading very carefully with ChatGPT.
When Bing’s ChatGPT-powered conversational search initially launched, once again, the internet set about trying to break it. The internet at large is a little more familiar with AI “hallucinations” than it used to be — a vague term describing when these machine-learning bots make glaring mistakes based on improper contextual understanding. They have improved by leaps and bounds since the early days, but the way Microsoft has handled Windows Copilot and Bing Chat’s guardrails is at odds with how human beings generally work.
Anyone who has used ChatGPT or Bing knows the response, “As an AI language model, I cannot answer this question for blah blah blah reasons.” That response underpins Microsoft’s self-destructive aversion to even the vaguest whiff of controversy. You see it at every level of the company, occasionally even in consumer-facing brands like Xbox. The notion that Microsoft once explored buying up barely-moderated social networks like TikTok and Discord is hilarious; they’re popular precisely because they’re barely moderated. On Xbox Live, you can be banned for all but the most sanitized discourse, which I would argue has contributed to the rise of platforms like Discord as an alternative.

Regular consumers don’t trust Microsoft with this kind of information anymore (or ever?), in a world where platforms like Skype have become notorious for sharing all of your message history with Microsoft HQ. Will regular consumers trust it with conversational search topics on their health? Personal details? Their opinions? Has Microsoft considered these things? Historically, Microsoft generally opts out of the debate entirely. Privacy absolutely has to be at the fore of discussions around this technology, yet Microsoft has opted for a “product first, questions later” approach that will probably harm its credibility if a more consumer-oriented megacorp takes some of these questions more seriously.
I feel in my gut that the mainstreaming of this technology among regular, everyday people will be led by social media, not “traditional” search engines. Today’s youth are already using YouTube and TikTok as their search tools, and Microsoft’s instincts for self-censorship will exclude it from joining the party. At the extreme end, social media can be incredibly messy. Freedom of expression comes with bad-faith actors, unfortunately, and Bing AI hasn’t really shown that it has what it takes to handle the murk of disinformation and conspiracy theories, instead opting out entirely.
I noticed that when I ask Bing Chat about certain sensitive, albeit historically factual topics, the responses are often blocked. I asked Bing, “Is the Holocaust real?” Obviously, it was very real, and very horrible — so horrible, in fact, that some people would rather believe a broad conspiracy theory that it didn’t happen at all than accept it. Bing struggles to navigate these kinds of “nuances,” opting to block outright instead of offering the facts. In testing in the United States, some of these types of questions would not be blocked. However, it’s easy to manipulate Bing using its Creative Mode to get around some of these apparent blocks, often at the cost of accuracy, which again gives rise to the potential for disinformation. The bigger issue so far, though, is how Microsoft has been implementing this technology.
Just throwing a Bing Chat tab into every product it has, without forethought for deeper platform integrations or social features, is par for the course. We’ve seen Microsoft do this before with Skype, when it knee-jerked Snapchat-like stories into the feed without rhyme or reason, only to remove the feature a short while later. Microsoft’s lack of expertise in handling “human” products will, in my view, inhibit its ability to take advantage of the tech at scale, at least with consumers. I admit this is not without a tablespoon of speculation, based largely on the historic mishandling of platforms like Mixer, Skype, and even Xbox. More on that shortly.
In any case, I’m sure the technology and the overall balance will improve over time, furthering accuracy with contextual subjectivity, but it will require Microsoft and its OpenAI partners to ask some seriously uncomfortable questions: questions of human bias, of the philosophical value of truth, and of the role (if any) bad-faith algorithm-baiting and sponsored political grift should play within machine learning. A truly human AI would most likely be something very unpleasant, while one too heavily lobotomized would be scarcely more useful than the tools we already have.
Someone far smarter than me will have to sift through the detritus to finally land on the holy grail of objective AI truth and consumer-grade usability — but history tells us that Microsoft is far more likely to abandon ship when the going gets tough.
Microsoft often doesn’t see things through
Microsoft is a prolific first-in, first-out technologist, creating mind-blowing technology only to abandon it rather than iterate. Over the years, we’ve seen Microsoft do this time and time again. There’s the Windows Phone OS, whose various innovations have since been co-opted by iOS and Android. HoloLens, whose room-scale inside-out holographic technology has now been “invented” by Apple and Meta, while Microsoft’s own augmented reality teams get laid off. The Surface Duo, among the first folding phone-tablets, is now abandoned, joining Microsoft’s broader mobile effort on the scrap heap. Mixer, a streaming platform that would probably be enjoying a renaissance right now as Twitch opts to kill itself, unchallenged, with a deluge of ads and self-destructive policy choices. Cortana, one of the first conversational assistants, and its doomed Invoke speaker line. And who can forget the staggering mismanagement of Skype — once a truly household name, now consigned to the bowels of Microsoft Teams?
What do all of these dead platforms have in common? Each would be absolutely ideal for showcasing Microsoft’s early efforts with OpenAI and ChatGPT, had Microsoft stuck it out and grown those platforms. Imagine an Amazon Echo-style speaker that could actually provide you with useful responses. Imagine conversational, context-aware AI enhancing your typing, emails, calendars, photographs, and more, baked directly into your smartphone at a hardware level. Imagine your video game streams on Mixer, elevated by AI-enhanced image upscaling and AI auto-moderators keeping chat feeds toxicity-free. Imagine superior hands-free navigation on HoloLens, keeping your hands on the keyboard for maximum productivity. If Skype hadn’t effectively been killed off, maybe more than 4 people would be using its half-hearted Bing AI integration.
Instead, Microsoft has handed the keys to its primary competitors: Apple, Google, and Amazon, all of which will stop at nothing to prevent Microsoft from becoming a dominant player in the space.
Microsoft has seen its Xbox Game Pass efforts on mobile squandered by Google and Apple in different ways, with the former blocking in-app purchases and the latter blocking the app entirely. It seems unlikely that Amazon would allow Microsoft to replace its onboard assistant with ChatGPT-powered Bing, despite Alexa previously hosting a lobotomized version of the now-very-dead Cortana assistant. Microsoft has virtually abandoned all of its mobile apps on Android, including SwiftKey, the Microsoft Launcher, and Microsoft Edge, since most users don’t switch out from the default software options — defaults controlled by Google. If only Microsoft had its own mobile platform it could control in some way… like, I don’t know, a Windows-based phone?!

The only meaningful feature upgrade any of these apps have had on mobile in recent months (years?) is the forced inclusion of a Bing Chat tab, in lieu of anything more interesting. Microsoft doesn’t seem to want to invest the capital necessary to grow adoption of these tools, because, once again, it doesn’t get humans. “Build it and they’ll come” doesn’t work unless you either offer better features or control the defaults. In a world where Google and Apple control the default apps, Microsoft should absolutely be trying to beat them on features, but it isn’t.
Even Microsoft’s first iteration of Windows Copilot, a supposed AI companion for your PC, is just a Microsoft Edge panel portal to Bing. It offers nothing to elevate your Windows experience and is essentially just a browser tab embedded in your desktop, likely gobbling up additional system resources for no reason. Microsoft doesn’t have the chess pieces in place to mainstream this technology, nor does it have the correct consumer-oriented mentality to do so.
Microsoft probably won’t be the ones to mainstream consumer AI
Consumers want quality, and they also want consistency. I use Microsoft services on my Samsung Galaxy S23, replacing the home screen with the Microsoft Launcher, the browser with Edge, and the keyboard with SwiftKey. My reward is that all of these apps are basically in maintenance mode, on the highway to abandonware.
Rather than compete, Microsoft all too often bails out or defers to more entrenched players like Google and Apple. The Goopple duopoly on mobile, the world’s biggest consumer computing platform, already presents Microsoft with a huge barrier. Google is already building competing tools, and doubtless, Apple will opt to continue its lucrative search partnership with Google for as long as it can.
Microsoft will likely excel where it has always excelled: with its excellent business-oriented tools like Microsoft Excel. Puns aside, I suspect its AI toolkits will play a bigger role in game development suites like PlayFab and Game Stack on Azure, as well as in GitHub and Visual Studio on the programming side. I can see AI playing a bigger role on Microsoft’s heavily sanitized social network LinkedIn as well. However, Bing Chat and Windows Copilot, its first attempts to capture consumers’ imaginations, have both largely fallen flat. Copilot is barely integrated into Windows, and Bing Chat has failed to meaningfully move the needle on search market share. Investors predicting the downfall of Google at the start of the year look rather silly now.
Still, this type of conversational, generative AI is in its infancy. Microsoft has to navigate a messy blockade of increasing regulatory scrutiny, growing anger from social and traditional media platforms whose content has essentially been stolen without compensation, and deeper philosophical questions on the value of Truth with a capital T. The symbiotic relationship between machine learning and human content creation is toxified right now as well: AIs cannot directly interact with the real world their users want information on, so they depend on human creators — creators who are not being compensated for their work. Putting content creators out of business en masse is a surefire way to make your AI dumber.
Does Microsoft have the will to navigate the inevitable deluge of regulatory pressure? Will it be able to compete with Google and Apple, which have essentially locked it out of mobile? Will Microsoft have the stomach to invest in growing its AI tools and advertising them to the consumers it hopes to reach? Will Microsoft innovate on new products that can reach users where they are, bypassing Apple and Google’s blockade?
Will it be Microsoft navigating all of this complexity to see AI through to its mainstream destiny? As with many new paradigms that Microsoft finds itself at the forefront of — I suspect the answer is very much no.