
Technology

Microsoft Copilot: how to use this powerful AI assistant

The Copilot logo
Microsoft

In the rapidly evolving landscape of artificial intelligence, Microsoft’s Copilot AI assistant is a powerful tool designed to streamline and enhance your professional productivity. Whether you’re new to AI or a seasoned pro, this guide will walk you through the essentials of Copilot, from understanding what it is and how to sign up, to mastering the art of effective prompts and creating stunning images.

Additionally, you’ll learn how to manage your Copilot account to ensure a seamless and efficient user experience. Dive in to unlock the full potential of Microsoft’s Copilot and transform the way you work.

What is Microsoft Copilot?

Copilot is Microsoft’s flagship AI assistant, built on an advanced large language model. It’s available on the web and through iOS and Android mobile apps, and it integrates with apps across the company’s Microsoft 365 suite, including Word, Excel, PowerPoint, and Outlook. The AI launched in February 2023 as a replacement for Cortana, Microsoft’s retired digital assistant. It was initially branded as Bing Chat and offered as a built-in feature of Bing and the Edge browser, then officially rebranded as Copilot in September 2023 and integrated into Windows 11 through an update in December of that same year.

Copilot runs on Microsoft’s Prometheus model, which is built from OpenAI’s GPT-4 foundation model. Microsoft has invested heavily in OpenAI, with commitments reported at roughly $13 billion as of 2023. You’ll see a lot of overlap in functionality between ChatGPT and Copilot because of this cozy working relationship between the two companies.

Copilot is available in two forms: as a Microsoft 365 integration, where it can access a user’s data to provide real-time summarization and analysis of documents, spreadsheets, presentations, and emails, as well as generate text and images; and as a web- and app-based chatbot. The chatbot is free to use, though Microsoft offers a variety of subscription packages that provide additional features and capabilities.


For example, free users are limited to a set number of chat interactions with the model in a given three-hour period (typically about 80 queries and responses), and they cannot integrate Copilot into Microsoft 365 applications. The $20/month Copilot Pro plan grants greater access to the model, drastically increases the number of chats users can have before timing out, and moves their text-based requests to the front of the inference queue; images even generate faster when using DALL-E and Designer.
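To see how a rolling quota like that behaves, here is a back-of-the-envelope sketch of client-side tracking. The numbers (80 chats per three hours) come from the article and are not a documented API contract; the `ChatQuota` class is purely illustrative, not part of any Microsoft SDK.

```python
from collections import deque

# Assumed free-tier limits, per the article; Microsoft may change them.
WINDOW_SECONDS = 3 * 60 * 60
FREE_TIER_LIMIT = 80

class ChatQuota:
    """Tracks chat timestamps and reports whether another chat is allowed."""

    def __init__(self, limit=FREE_TIER_LIMIT, window=WINDOW_SECONDS):
        self.limit = limit
        self.window = window
        self.timestamps = deque()

    def allow(self, now):
        # Drop timestamps that have aged out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False

quota = ChatQuota(limit=3, window=10)  # tiny limits for demonstration
results = [quota.allow(t) for t in (0, 1, 2, 3, 11)]
print(results)  # the 4th chat is rejected; the 5th succeeds once the window rolls
```

The same rolling-window logic applies at the real limits: the 81st chat within three hours would be refused until the oldest one ages out.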

In October 2024, Microsoft announced new Vision and Voice Interaction features for the Copilot experience on the Edge browser. Copilot Vision is similar to Microsoft’s ill-fated Recall feature, in that it looks over your shoulder while you surf the internet. It stands perpetually ready to answer questions, provide recommendations, and summarize whatever it is you’re currently looking at. To assuage user concerns, Microsoft has made the feature opt-in, meaning you’ll have to actively turn it on every time you use it, and it will display an onscreen indicator whenever it is active. It also won’t work on paywalled or sensitive content, so your browsing history will remain private and secure.

Voice Interactions, similarly, is Microsoft’s response to ChatGPT’s Advanced Voice Mode and Gemini Live. It enables the user to speak to the AI as though it were a regular conversation with another person, using their voice rather than text inputs. Copilot Voice is currently rolling out to Windows users in the U.S. and the U.K., as well as in Australia, Canada, and New Zealand, but only in English. Vision will be released in the U.S. “soon,” according to Microsoft, but will only be made available to Pro subscribers at launch. The company plans to eventually release the feature to more users, though there is no timetable set for that rollout yet.

How to sign up for Copilot

Go to the Copilot website and click the sign-in button in the top right (to the left of the three horizontally stacked lines and to the right of the Get the App! button). Choose whether you’re signing up with a personal account (like Gmail or Proton) or a work/school account (which will have its own domain). At the Microsoft sign-in screen, click the “No account? Create one!” link and follow the onscreen instructions.


If you want to upgrade to Pro, sign in (or create an account) using the method above. Once you’re on the Copilot home screen, click the “Try Copilot Pro” button at the bottom of the screen, just above the prompt box. Then follow the on-screen instructions, and be sure to have a credit card ready. Microsoft will give you the first month of service free but will auto-charge you $20 every month thereafter.

Getting started with Copilot

The Microsoft Copilot home screen
Microsoft

Once you’re logged in, the Copilot homepage offers a number of features and options. In the upper-left corner, you can switch between the Copilot and Notebook options — the former better used for asking questions and conversing directly with the AI, the latter more often used to collaborate with Copilot on longer written works.

The prompt box is located at the bottom of the screen; Copilot offers a 32k-token context window (that’s the total number of tokens that can be included in a prompt-and-response pair). You can upload documents and images for the chatbot to analyze by clicking the picture icon in the lower right of the prompt box. The microphone icon to its right allows you to simply speak your query rather than type it in. You can also select how rigid Copilot will be in its responses by selecting among the More Creative, More Balanced, and More Precise style options directly above the prompt box.
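To get a feel for what a 32k-token window means in practice, here is a rough budget check. Copilot doesn’t expose its tokenizer, so the four-characters-per-token rule of thumb below is only an approximation, and `fits_in_window` is a hypothetical helper, not part of any Copilot API.

```python
# A 32k-token context window caps the combined size of a prompt and its
# response. Real token counts depend on the model's tokenizer; the ~4
# characters-per-token heuristic below is a common rough estimate only.
CONTEXT_WINDOW = 32_000

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters of English per token."""
    return max(1, len(text) // 4)

def fits_in_window(prompt: str, reserved_for_response: int = 4_000) -> bool:
    """Check whether a prompt leaves room for a response of the given size."""
    return approx_tokens(prompt) + reserved_for_response <= CONTEXT_WINDOW

print(fits_in_window("Summarize this report."))  # True: tiny prompt
print(fits_in_window("x" * 200_000))             # False: ~50k tokens
```

In practice this is why very long uploaded documents get summarized in chunks: the whole file plus the response has to fit inside the window.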

In the top-right corner, you’ll see your username and avatar. Click on that to either sign out or switch to a different account. To the right of your avatar is the settings menu (the three horizontally stacked lines) where you can adjust various privacy, user, and appearance options.

All the way on the right of the screen are the Chats and Plugins menus. Chats is simply a list of your previous conversations with the AI. Check out the “How to manage your Copilot account” section below for more details. Plugins are essentially API calls that you can apply to a given Copilot chat (up to three at a time). They include Instacart, Kayak, and Klarna, which leverage those companies’ specialized product information to help Copilot better answer your queries about grocery shopping, travel accommodations, and price matching, respectively.


How to effectively use prompts in Copilot

Just because GPT-4 (and, by extension, Copilot) supports 26 different languages doesn’t mean you can say anything to it and receive an accurate response. Despite their advanced nature, these are still just computer programs, and as such, you need to spell out exactly what task you want them to perform.

For example, if you ask it to “write a poem about nature,” it’ll return the following:

Amidst the ancient trees, where sunlight weaves, The forest breathes—a symphony of leaves. Moss-clad stones cradle secrets untold, As ferns unfurl, their emerald stories unfold.

Beneath the canopy, a murmuring stream, Its liquid notes dance, a tranquil dream. Wildflowers sway, their colors ablaze, Painting the canvas of these wooded days.

And there, a squirrel scampers, nimble and free, Chasing sunbeams through the verdant sea. The breeze carries fragrant whispers afar, Nature’s lullaby beneath the evening star.


Feel free to wander deeper into the woods, where every rustle and rusted leaf holds a tale.

But if you want that poem to be in the style of a classic Greek epic, you need to actually specify that point to the system. Telling Copilot to “write a poem about nature in the style of Homer’s Iliad,” you’ll get the following, which is much more stylistically on point:

The Song of Aeolus

Sing, O Muse, of the tempests wild, Where Aeolus, the wind-king, held sway. His island fortress crowned by storm-clouds, And gales at his command danced and roared.

Upon the rocky cliffs, he stood resolute, His silver hair streaming like zephyrs. His eyes, deep as the ocean’s abyss, Surveyed the churning seas with ancient wisdom.


“O winds,” he cried, “obey my will! Sweep across the waves, raise them to fury!” And the tempests answered, their voices fierce, Lashing ships, tearing sails, and drowning men.

Brave Odysseus, wanderer of the wine-dark sea, Fought against Aeolus’s wrathful winds. His ship tossed like a leaf in the maelstrom, As Poseidon’s rage echoed through the abyss.

Yet Aeolus, not unkind, relented at last, Binding the winds once more in their leathern bags. “Go forth,” he said, “but beware your hubris, For even gods cannot tame the tempest forever.”

And so Odysseus sailed on, his heart aflame, Seeking Ithaca’s shores, his homeland’s embrace. The winds whispered secrets in his ears, Of sirens, cyclops, and the wrath of gods.


Sing, O Muse, of Aeolus’s ancient song, Of winds unbridled and heroes undaunted. For in nature’s fury, we find both peril and wonder, And the echoes of epic tales carried on the breeze.

In short, you want to include as much detail and context as you can in your query. If you have a specific tone, length, or style in mind, or if you have writing samples and background text, be sure to include those. The more information you can give the AI at the start, the more accurate its responses will be (which translates into less time spent refining and reiterating your idea).
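The advice above (pack the prompt with task, tone, length, style, and reference samples) can be sketched as a small prompt builder. The function and its field labels are my own illustration, not a Copilot feature; Copilot simply receives the resulting text as an ordinary chat message.

```python
def build_prompt(task, tone=None, length=None, style=None, samples=None):
    """Assemble a detail-rich prompt from the pieces the article recommends:
    the task itself plus optional tone, length, style, and reference samples."""
    parts = [task.strip()]
    if tone:
        parts.append(f"Tone: {tone}.")
    if length:
        parts.append(f"Length: {length}.")
    if style:
        parts.append(f"Style: {style}.")
    for sample in samples or []:
        parts.append(f"Reference sample:\n{sample}")
    return "\n".join(parts)

prompt = build_prompt(
    "Write a poem about nature",
    tone="reverent",
    length="four stanzas",
    style="Homer's Iliad",
)
print(prompt)
```

Templating prompts like this also makes it easy to swap the style or tone and rerun the same request, which is exactly the refine-and-compare loop the article describes.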

How to create images in Copilot

a frog playing violin in a packed concert hall, pixel art, AI generated
Microsoft

Having Copilot generate images is no different than having it generate written responses. Be sure to be clear in what you’re asking the system for and avoid using subjective terms like “good” or “pretty” (since the AI doesn’t actually understand what those words mean, just how likely they are to appear next in a text string).

So, rather than “draw a picture of a pretty frog playing music at a concert,” try “draw an image of a frog in a top hat and tuxedo playing the violin in a packed concert hall, photorealistic.” The AI will initially return four potential images. You can then pick any one of them to continue further refining (“make its bow tie red”) and reiterating (“remake the image as pixel art”) until you achieve the image output you’re looking for.
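The refine-and-reiterate loop above can be modeled as plain data: start from a detailed base prompt and append each follow-up instruction. The strings are illustrative; Copilot takes them as ordinary chat messages, not through any official “refinement” API.

```python
# Iterative image refinement, modeled as a growing request history.
base = ("draw an image of a frog in a top hat and tuxedo playing the violin "
        "in a packed concert hall, photorealistic")
refinements = ["make its bow tie red", "remake the image as pixel art"]

history = [base]
for step in refinements:
    # Each refinement builds on the previous request rather than starting over.
    history.append(f"{history[-1]}; then {step}")

final_request = history[-1]
print(final_request)
```

Keeping the history around is handy: if a refinement goes in the wrong direction, you can back up one step and branch from there instead of rebuilding the prompt from scratch.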

How to manage your Copilot account

Not all chats are worth archiving. To remove one or more chat sessions from your Recents list, hover your pointer over the chat in question, then click the trashcan icon to delete it.


If you need to wipe your history entirely, sign in to the privacy dashboard with your personal Microsoft account. You can access that by clicking the Settings icon > Privacy from the menu. Go to Browsing and Search > Manage your activity data. Open the Copilot activity history section, and select “Clear all Copilot activity history and search history.”

Copilot controversies

It should go without saying at this point that you shouldn’t implicitly trust what a chatbot tells you. AI models have a tendency to “hallucinate” facts and figures in their responses, and those hallucinations recently got one expert witness in seriously hot water with the courts. The witness, Charles Ranson, ran afoul of New York State Unified Court judge Jonathan Schopf for using Copilot to inaccurately verify his damage calculations in a dispute involving a Bahamas rental property held in the deceased owner’s trust for his son. The case depended, at least in part, on an accurate appraisal.

Schopf found that Ranson’s testimony was “entirely speculative” and did not take into account obvious factors like real estate taxes and the economic effects of the COVID-19 pandemic. “Ranson was adamant in his testimony that the use of Copilot or other artificial intelligence tools, for drafting expert reports is generally accepted in the field of fiduciary services and represents the future of analysis of fiduciary decisions,” Schopf wrote. “However, he could not name any publications regarding its use or any other sources to confirm that it is a generally accepted methodology.”




Google wants to put the consequences of its Epic antitrust ruling on pause during appeal


Google has formally filed a motion [PDF] asking the 9th Circuit Court of Appeals to put a pause on the order that forces the company to open the Play store to competitors. If you’ll recall, Google lost an antitrust lawsuit filed by Epic Games after a federal jury found that the company held an illegal monopoly on app distribution and in-app billing services for Android devices. Earlier this month, US District Judge James Donato ordered Google to allow third-party app stores access to the Google Play app catalog and to make those stores downloadable from its storefront. Now, Google is asking the court for a stay on that order while it’s appealing the Epic antitrust lawsuit decision, saying that it will expose 100 million Android users in the US to “substantial new security risks.”

The company called the order “harmful and unwarranted” and said that if it’s allowed to stand, it will threaten Google’s ability to “provide a safe and trusted user experience.” It argued that if it makes third-party app stores available for download from Google Play, people might think that the company is vouching for them, which could raise “real risks for [its] users.” Those app stores could have “less rigorous protections,” Google explained, that could expose users to harmful and malicious apps.

It also said that giving third-party stores access to the Play catalog could harm businesses that don’t want their products available alongside inappropriate or malicious content. Giving third-party stores access to its entire library could give “bad-intentioned” stores a “veneer of legitimacy.” Moreover, it argued that allowing developers to link out from their apps “creates significant risk of deceptive links,” since bad actors could use the feature for phishing attacks to compromise users’ devices and steal their data.

One of the court’s main ordered changes is to allow developers to remove Google Play billing as an option, letting them offer their apps to Android users without having to pay the company a commission. However, Google said that allowing developers to remove its billing system could “force an option that may not have the safeguards and features that users expect.”


In its filing, Google emphasized that the three weeks the court gave it to make these sweeping changes is too short for a “Herculean task.” It creates an “unacceptable risk of safety” that could lead to major issues affecting the functionality of users’ Android devices, it said. The company also questioned why the court sided with Epic in its antitrust lawsuit, whereas it sided with Apple in a similar case also filed by the video game company. “It is pause-inducing that Apple, which requires all apps go through its proprietary App Store, is not a monopolist, but Google — which built choice into the Android operating system so device makers can preinstall and users can download competing app stores — was condemned for monopolization.”



Nvidia just dropped a new AI model that crushes OpenAI’s GPT-4—no big launch, just big results

Nvidia quietly unveiled a new artificial intelligence model on Tuesday that outperforms offerings from industry leaders OpenAI and Anthropic, marking a significant shift in the company’s AI strategy and potentially reshaping the competitive landscape of the field.

The model, named Llama-3.1-Nemotron-70B-Instruct, appeared on the popular AI platform Hugging Face without fanfare, quickly drawing attention for its exceptional performance across multiple benchmark tests.

Nvidia reports that its new offering achieves top scores in key evaluations, including 85.0 on the Arena Hard benchmark, 57.6 on AlpacaEval 2 LC, and 8.98 on MT-Bench as judged by GPT-4-Turbo.


These scores surpass those of highly regarded models like OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, catapulting Nvidia to the forefront of AI language understanding and generation.

Nvidia’s AI gambit: From GPU powerhouse to language model pioneer

This release represents a pivotal moment for Nvidia. Known primarily as the dominant force in graphics processing units (GPUs) that power AI systems, the company now demonstrates its capability to develop sophisticated AI software. This move signals a strategic expansion that could alter the dynamics of the AI industry, challenging the traditional dominance of software-focused companies in large language model development.

Nvidia’s approach to creating Llama-3.1-Nemotron-70B-Instruct involved refining Meta’s open-source Llama 3.1 model using advanced training techniques, including Reinforcement Learning from Human Feedback (RLHF). This method allows the AI to learn from human preferences, potentially leading to more natural and contextually appropriate responses.
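The heart of RLHF-style reward modeling can be shown with the standard Bradley-Terry pairwise loss: the reward model should score the human-preferred (“chosen”) response higher than the rejected one. This toy function illustrates that objective in general; it is not Nvidia’s actual training code, which has not been described in that detail here.

```python
import math

def preference_loss(chosen_reward: float, rejected_reward: float) -> float:
    """Bradley-Terry pairwise loss: -log(sigmoid(chosen - rejected)).
    Small when the chosen response is scored well above the rejected one,
    large when the model gets the human preference backwards."""
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(preference_loss(2.0, -1.0))  # small loss: preference respected
print(preference_loss(-1.0, 2.0))  # large loss: preference violated
```

Minimizing this loss over many human-ranked response pairs is what teaches the reward model which outputs people prefer; the language model is then tuned against that reward signal.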

With its superior performance, the model has the potential to offer businesses a more capable and cost-efficient alternative to some of the most advanced models on the market.


The model’s ability to handle complex queries without additional prompting or specialized tokens is what sets it apart. In a demonstration, it correctly answered the question “How many r’s are in strawberry?” with a detailed and accurate response, showcasing a nuanced understanding of language and an ability to provide clear explanations.
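For contrast, the same question is trivial for ordinary string code, which operates on characters rather than the tokens a language model sees; that mismatch is exactly why the question trips up many chat models.

```python
# Counting letters is character-level work, which tokenized models
# struggle with but plain string code handles trivially.
count = "strawberry".count("r")
print(count)  # 3
```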

What makes these results particularly significant is the emphasis on “alignment,” a term in AI research that refers to how well a model’s output matches the needs and preferences of its users. For enterprises, this translates into fewer errors, more helpful responses, and ultimately, better customer satisfaction.

How Nvidia’s new model could reshape business and research

For businesses and organizations exploring AI solutions, Nvidia’s model presents a compelling new option. The company offers free hosted inference through its build.nvidia.com platform, complete with an OpenAI-compatible API interface.

This accessibility makes advanced AI technology more readily available, allowing a broader range of companies to experiment with and implement advanced language models.
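An OpenAI-compatible API means existing client code can target Nvidia’s hosted endpoint with only a URL and model-name change. The sketch below builds such a request with the standard library; the base URL and model identifier are assumptions based on Nvidia’s published catalog, so check the current build.nvidia.com documentation before relying on them, and supply your own API key.

```python
import json
import urllib.request

# Assumed endpoint and model name for Nvidia's hosted inference service;
# verify against the current docs before use.
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL = "nvidia/llama-3.1-nemotron-70b-instruct"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions POST request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.5,
        "max_tokens": 256,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("How many r's are in strawberry?", api_key="YOUR_KEY")
# urllib.request.urlopen(req) would actually send it; omitted here so the
# sketch stays offline.
print(req.get_full_url())
```

Because the request shape matches OpenAI’s chat-completions format, swapping a pilot from one provider to the other is mostly a configuration change rather than a rewrite.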


The release also highlights a growing shift in the AI landscape toward models that are not only powerful but also customizable. Enterprises today need AI that can be tailored to their specific needs, whether that’s handling customer service inquiries or generating complex reports. Nvidia’s model offers that flexibility, along with top-tier performance, making it a compelling option for businesses across industries.

However, with this power comes responsibility. Like any AI system, Llama-3.1-Nemotron-70B-Instruct is not immune to risks. Nvidia has cautioned that the model has not been tuned for specialized domains like math or legal reasoning, where accuracy is critical. Enterprises will need to ensure they are using the model appropriately and implementing safeguards to prevent errors or misuse.

The AI arms race heats up: Nvidia’s bold move challenges tech giants

Nvidia’s latest model release signals just how fast the AI landscape is shifting. While the long-term impact of Llama-3.1-Nemotron-70B-Instruct remains uncertain, its release marks a clear inflection point in the competition to build the most advanced AI systems.

By moving from hardware into high-performance AI software, Nvidia is forcing other players to reconsider their strategies and accelerate their own R&D. This comes on the heels of the company’s introduction of the NVLM 1.0 family of multimodal models, including the 72-billion-parameter NVLM-D-72B.


These recent releases, particularly the open-source NVLM project, have shown that Nvidia’s AI ambitions go beyond just competing—they are challenging the dominance of proprietary systems like GPT-4o in areas ranging from image interpretation to solving complex problems.

The rapid succession of these releases underscores Nvidia’s ambitious push into AI software development. By offering both multimodal and text-only models that compete with industry leaders, Nvidia is positioning itself as a full-service AI provider, combining its hardware expertise with accessible, high-performance software. This move could reshape the industry, pushing rivals to innovate faster and potentially sparking more open-source collaboration across the field.

As developers test Llama-3.1-Nemotron-70B-Instruct, we’re likely to see new applications emerge across sectors like healthcare, finance, education, and beyond. Its success will ultimately depend on whether it can turn impressive benchmark scores into real-world solutions.



Nvidia’s deeper dive into AI model development has intensified the competition. If this is the beginning of a new era in artificial intelligence, it’s one where fully integrated solutions may set the pace for future breakthroughs.




After selling Drift, ex-HubSpot exec launches AI for customer success managers


Elias Torres has achieved a lot for somebody who immigrated to the US from Nicaragua at 17 without knowing any English. He served as a VP of engineering at HubSpot before co-founding Drift, a company that sold to Vista Equity for about $1.2 billion in 2021.

“It’s very rare to get this far, but I’m not done,” Torres told TechCrunch.

About a year ago, Torres (pictured above) founded Agency, an AI-powered startup designed to automate tasks traditionally handled by customer success managers (CSMs). These professionals provide personalized support to users of complex B2B software, ranging from onboarding and training to upselling new features.

On Wednesday, Agency is coming out of stealth and announcing that it raised a $12 million seed round led by Sequoia and HubSpot Ventures.


The idea for Agency was born when Torres started consulting for OpenAI in early 2023. The ChatGPT maker asked Torres for help developing AI solutions for some of its enterprise customers, including the NBA and Live Nation. In the course of doing that, it occurred to Torres that companies could benefit from AI customer success managers.

He was encouraged to build a startup around this concept when he met with Brian Halligan, co-founder and executive chairman of HubSpot. “We worked together on the CRM at HubSpot, and he told me, ‘Let’s build something great together again,’” Torres said about his conversation with Halligan. (Halligan joined Agency’s board.)

Shortly after meeting with Halligan, Torres reached out to Sequoia partner Pat Grady, who had previously invested in Drift. Grady was instantly sold on the idea.

“It’s hard to hire great CSMs. It’s hard to scale great CSMs,” Grady told TechCrunch. “If you have a product that can do a lot of the work on their behalf, and you can scale your company without having to hire an army of CSMs. That’s pretty useful.”


Agency can free up time in the customer success manager’s workday by handling tasks such as scheduling, follow-ups, note-taking, customer onboarding, and meeting preparation.

Torres explained that Agency’s AI gains a deep understanding of each customer from emails, CRM data, chat messages, and phone conversations, which allows it to anticipate customer needs at any point.

“This is something that we’ve been dreaming about for a long time,” he said, adding that this was also the goal at HubSpot, where he helped build the CRM, and at Drift, which built personalized conversations for salespeople. “We didn’t have the technology to do this until now.”

The company’s product is currently being tested with companies, including HeyGen, and is available in an invite-only beta for customer success professionals.

While Agency seems to have no direct competitors at present, another function within the sales and marketing organization, sales development representative, is facing disruption from dozens of AI-powered solutions.


“I don’t know anybody else going after this market,” Sequoia’s Grady said. “Hopefully people won’t discover that for a while, and they’ll have a little bit of room to run.”



Google asks 9th Circuit for emergency stay, says Epic ruling ‘is dangerous’


The ruling, which Google has appealed, would force Google to distribute third-party app stores within Google Play, no longer require Google Play Billing for apps distributed via Google Play, and more, with many of those changes ordered to begin on November 1st — just over two weeks from today.

But echoing many of Google’s arguments during the district court case, which Judge Donato rejected as insufficient, the company now argues that the order “threatens Google Play’s ability to provide a safe and trusted user experience.”

“This wouldn’t just hurt Google – this would have negative consequences for Android users, developers and device manufacturers who have built thriving businesses on Android,” writes Google’s Lee-Anne Mulholland, VP of regulatory affairs, in a fact sheet distributed to journalists.

The fact sheet is bulleted into five different sections, and the section headers give you an idea of Google’s objections.


To get a sense of Google’s actual filing with the court, here’s how it begins:

At the request of a single competitor, Epic Games, the District Court ordered extensive redesigns to Play that will expose 100-million-plus U.S. users of Android devices to substantial new security risks and force fundamental changes to Google’s contractual and business relationships with hundreds of thousands of Google partners. The court gave Google just three weeks to make many of these sweeping changes—a Herculean task creating an unacceptable risk of safety and security failures within the Android ecosystem.




4 ways you can use ChatGPT’s Canvas mode to improve your daily life


ChatGPT became much more collaborative when OpenAI released Canvas mode for the AI chatbot earlier in October. Switching to Canvas mode provides a more flexible way to create and edit text, and through the AI’s code-writing ability, it enables more complex, long-term planning with visualization, spot-editing, and even automation.

Despite OpenAI’s bragging about how practical this approach to ChatGPT can be, you might stare at the prompt box and wonder how to use ChatGPT’s Canvas mode to enhance your daily life.



SpaceX to top the Super Heavy catch with another astonishing feat


SpaceX achieved a spectacular first on Sunday when it used a pair of giant mechanical arms to catch the 70-meter-tall Super Heavy booster just minutes after it deployed the Starship spacecraft to orbit in the vehicle’s fifth test flight.

But SpaceX isn’t stopping there. As part of its efforts to create a fully reusable spaceflight system for the Starship — comprising the first-stage Super Heavy booster and the upper-stage Starship spacecraft — SpaceX will attempt to catch not only the booster, but also the spacecraft.

SpaceX CEO Elon Musk confirmed the plan for the world’s most powerful rocket in a post on X (formerly Twitter) on Wednesday, saying, “Hopefully early next year, we will catch the ship too.”

Before then, SpaceX will want to carry out more test flights of the Starship in which it will continue to catch the Super Heavy, while the Starship will continue to come down in the ocean, as it did in Sunday’s test flight.

Catching the Starship back at the launch base will allow for a faster turnaround time between launches, with the spacecraft only needing to be checked, refurbished, and refueled before being lifted atop a Super Heavy for another flight.


SpaceX also has to perfect a landing system for the Starship that involves it touching down on the ground in a vertical position, as this is how it will arrive on other celestial bodies such as the moon and possibly Mars (at least until any launch and landing infrastructure can be built).

It actually achieved such landings in Earth-based tests several years ago, but those touchdowns involved shorter “hops” into the atmosphere rather than more complex orbital flights.

It’s certainly an exciting time for SpaceX engineers as they put much of their attention into the continued development of the Starship.

NASA is planning to use SpaceX’s spacecraft to put two astronauts on the lunar surface in the Artemis III mission, which is currently scheduled for 2026, so there is much work to be done.



Copyright © 2024 WordupNews.com