Understand AI Hallucinations. How to SUCCEED with Microsoft 365 Copilot even if it's WRONG!

We've all heard horror stories about generative AI tools like ChatGPT, Gemini, or Copilot getting things spectacularly wrong. But do you understand why these tools sometimes get stuff so wrong when they ordinarily seem to know everything?

In this video we'll explore a little about how the large language models powering AI tools like GPT-4 work, how they are trained, and why they get stuff wrong. Why is this a particular issue for businesses? What is grounding (or retrieval augmented generation)? And if you're adopting AI, what should you do about it?
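To give a flavor of what grounding (retrieval-augmented generation) means before you watch, here's a rough, minimal sketch. The document names, the keyword-overlap "retriever", and the prompt wording are all invented for illustration; real systems like Microsoft 365 Copilot use a proper search index over your tenant's data rather than anything this simple.

```python
# Illustrative sketch only: grounding (retrieval-augmented generation) in miniature.
# The documents and the toy keyword-overlap retriever below are hypothetical stand-ins
# for a real enterprise search index.

documents = {
    "travel_policy.docx": "Employees may book economy flights for trips under 6 hours.",
    "expense_policy.docx": "Meals are reimbursed up to $50 per day with itemized receipts.",
}

def retrieve(question: str, docs: dict[str, str]) -> list[str]:
    """Return snippets whose words overlap the question (a toy retriever)."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(text.lower().split())), text) for text in docs.values()]
    return [text for score, text in sorted(scored, reverse=True) if score > 0]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved snippets so the model answers from your data, not just its training."""
    context = "\n".join(retrieve(question, documents)) or "No relevant documents found."
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("How much can I expense for meals per day?"))
```

The point of the sketch: instead of asking the model to answer from whatever it absorbed during training, you first fetch relevant business content and put it in front of the model, which is a big part of how tools like Microsoft 365 Copilot reduce (but don't eliminate) hallucinations.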

Whether your tool of choice is Copilot, Copilot for Microsoft 365 (Microsoft 365 Copilot), ChatGPT, Gemini, Claude, or something else, this information is equally applicable and will help you learn a little more about how AI works for you.

Nick DeCourcy

Nick DeCourcy is the owner and principal consultant at the Bright Ideas Agency. He has worked extensively in the education and non-profit sectors in areas including operations, facilities, and technology. He is passionate about getting technology implementation right, first time, by fully understanding how it impacts the employee and customer experience.
