Microsoft unveils AI Copilot for its 365 apps

Copilot can write a proposal for you in Word and then turn it into a PowerPoint presentation.

Microsoft has just unveiled Copilot, a new AI tool for its Microsoft 365 apps, including Word, Excel, and PowerPoint.

“Copilot marks a new era of computing that will fundamentally transform the way we work,” said Jared Spataro, corporate VP of modern work and business applications at Microsoft, during a virtual event dubbed “The Future of Work With AI.”

Clippy 2.0: Microsoft’s Copilot is based on large language models (LLMs), which are capable of responding to natural language — the kind we humans use to communicate with one another — and generating their own easy-to-understand text in response.

How Copilot uses these abilities depends on the Microsoft 365 application.

If you’re using Word, for example, you can ask Copilot to draft a proposal based on some notes and a list of products — these reference docs can be uploaded right in the prompt. When Copilot is done, you can give it a previous proposal and tell it to format the new one similarly.

In Excel, you might ask Copilot to identify and summarize three trends in a spreadsheet of your company’s sales, and in PowerPoint, you can ask it to create a 10-slide presentation based on the proposal you created in Word.

https://youtu.be/8_lXSmlwk1s

During the virtual event, Microsoft also unveiled Business Chat, a chatbot accessible in all of the 365 apps. It can draw on the data in your calendar, emails, chats, and more, which can be useful if you want a summary of everything that happened with a specific client that week.

Microsoft says it plans to announce pricing for Copilot “soon,” and for now, it is strictly limiting who has access to the AI.

“We are currently testing Microsoft 365 Copilot with 20 customers, including 8 in Fortune 500 enterprises,” wrote Colette Stallbaumer, general manager of Microsoft 365 and Future of Work.

Helper bots: A couple of phrases crop up repeatedly in the Copilot and Business Chat demos Microsoft shared: “Remember to check for accuracy” and “AI-generated content may be incorrect.”

That’s because while LLMs are adept at producing human-sounding language, they’re also prone to “hallucinating” — writing things that sound factual, but aren’t.

Microsoft says Copilot is built in part on GPT-4, an LLM developed by OpenAI, the AI research firm Microsoft invested $10 billion in back in January 2023. While GPT-4 is less likely to hallucinate than the model currently powering the popular ChatGPT, it isn’t immune to the problem.

“Sometimes, Copilot will get it right,” said Spataro. “Other times it will be usefully wrong, giving you an idea that’s not perfect, but still gives you a head start.”

There will likely also be times when Copilot is unhelpfully wrong, though, and dealing with those missteps is the tradeoff of using the AI: Copilot can help you at work, but only with proper supervision, like a green but eager intern.

