AI Gets Everything Right Now? The Quiet Shift in Copilot Culture

So, AI now gets everything right? Microsoft will be turning off AI disclaimers by default in Microsoft 365 Copilot Chat.

"Some users found the disclaimer too distracting" — much as some drivers find 35mph speed limit signs off-putting when they're doing 70.

But all is well, because if your organization chooses to turn it back on, it will now be in bold.

From my perspective, the concern shouldn't be whether these disclaimers are distracting, but whether they are effective in making AI use safer.

Microsoft has a fantastic research arm, WorkLab, that has delivered consistent, high-quality reporting on the efficacy of AI in the workplace. Nowhere have I seen it highlighted, in any data from there or anywhere else, that AI disclaimers are harmful to the responsible use of AI technology (though nor have I seen evidence that they are helpful).

There are any number of aspects of using platforms like Microsoft 365 that get in the way of 100% productivity but are friction points we accept for security and safety. I'd be willing to bet that more users find MFA annoying than AI disclaimers, yet no serious vendor is suggesting we turn that off by default.

Ultimately, there is probably a whole series of interventions that should sit higher on our priority list for safe and responsible use of AI than a disclaimer on every response. However, turning it off potentially says something about the evolving culture around this issue that will worry many of us familiar with the topic.

First posted on LinkedIn on 11/01/2025.
