Microsoft’s Content Credentials vs Google’s Double Check
The advent of generative AI has benefited the technology sector, but it has also opened a can of worms around authenticity and trust. After all, how can you be certain that what Bard tells you is accurate, or that an image wasn’t generated by Bing AI? Concerns about trust in AI are not going away anytime soon, but companies like Microsoft and Google are working to address them even as they promote their own AI services.
This article examines how Microsoft and Google are implementing these strategies to give users more information about images and responses while maintaining trust and transparency.
Microsoft and Google have both embraced the idea of greater content transparency, but they have focused their efforts in different areas.
Text versus Image Identification
Microsoft has recently added a new Content Credentials feature that uses a digital watermark to mark images created by Bing AI, including those from the DALL-E-powered Bing Image Creator. The watermark is not visible on the image itself, but it is readily identifiable by compatible software, and it records the image’s origin and history in its metadata.
Bard, Google’s AI chatbot, lets you double-check its answers by clicking the ‘G’ icon at the bottom of a response. This icon, also known as the ‘Google it’ button, lists three topics related to your query and can help you judge whether an answer is reliable or questionable by comparing it with information readily available online.
What is the purpose of Content Credentials and Double Check?
The implementations of Bing AI’s image Content Credentials and Bard’s Double Check feature are substantially different.
Like Adobe, Intel, and Sony, Microsoft uses cryptographic methods to embed metadata in an image that establishes when, and by whom, it was created. These standards, set by the C2PA (Coalition for Content Provenance and Authenticity), are intended to make AI-generated images more transparent and bolster user confidence.
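To give a feel for the general idea behind cryptographically signed provenance, here is a minimal, illustrative sketch in Python. This is not Microsoft’s or C2PA’s actual implementation (real C2PA manifests use X.509 certificates and COSE signatures, not a shared-secret HMAC); it only shows the core concept: binding provenance metadata to the exact image bytes so that any tampering is detectable.

```python
import hashlib
import hmac
import json

# Illustrative stand-in for a signer's private key; a real system
# would use public-key certificates, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Build a provenance manifest whose signature covers the image hash."""
    manifest = {
        "claims": claims,  # e.g. which tool generated the image, and when
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the image bytes are unchanged."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and unsigned["image_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"\x89PNG...fake image bytes..."
m = sign_manifest(image, {"generator": "AI image model", "created": "2023-11-01"})
assert verify_manifest(image, m)             # untouched image verifies
assert not verify_manifest(image + b"x", m)  # any edit breaks verification
```

Because the signature covers both the metadata and a hash of the image itself, neither the claims nor the pixels can be altered without the verification failing, which is what makes this kind of provenance trustworthy.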
Support for Content Credentials will also be added to images created in Paint (via its new AI feature, Cocreator) and Microsoft Designer, both of which use AI to generate art and images.
Bard’s responses do not carry any form of digital watermark. But with the Double Check feature, you can see at a glance which portions of a response are supported by information from other reputable websites and which are not, helping you determine whether the chatbot is ‘hallucinating’ its answer.
Even though the feature is far from complete, it at least enables users to evaluate the credibility of answers and run Google searches to learn more.
A move toward improved content transparency and credibility
These two features build a level of trust directly into the content, making that verification available to anyone who encounters it.
Bard’s Double Check serves as a quality-assurance measure, although its usefulness varies with the query. Microsoft’s Content Credentials, on the other hand, will be stamped on every image generated by Bing AI, allowing anyone with access to the image to confirm that it was created by AI. With deepfakes and AI-based forgeries on the rise, Microsoft aims to curb fraud and ensure its platforms and services cannot be used for unethical purposes.
Although these measures may not be sufficient to make AI trustworthy and transparent on their own, when combined with modifications made to other platforms, they constitute a significant step in that direction.
How to validate an AI-generated image’s credibility on Bing
Once an image has been generated by Bing, whether through Bing Chat, Bing Image Creator, Paint, or Designer, its Content Credentials will be displayed on the image’s preview or information page.
Content Credentials will include the image’s origin and history.
We discourage using chatbots to verify Content Credentials; they are unreliable and may not be able to access the credentials from the image alone.
Other, more sophisticated tools that support Content Credentials (such as Adobe Photoshop) will do a better job of reading the credentials and identifying the image’s source.
How to confirm Bard’s responses
Likewise, Bard’s responses are simple to verify: just click the ‘G’ button at the bottom of a response.
Some sentences will be highlighted in green or brown based on whether similar content exists elsewhere on the Internet.
To learn more, please refer to our guide on Bard’s Double-check with Google It.
Here are some frequently asked questions about Microsoft’s Content Credentials and Bard’s Double Check features.
What other companies and applications use Content Credentials?
In addition to Microsoft, companies such as Adobe, Sony, and Intel use Content Credentials to watermark and detect AI-generated images.
Does Bing have a feature for double-checking?
At present, no. Bing lacks a feature comparable to Double Check, so you must independently verify its answers against reputable sources and use your best judgment when accepting results.
The incorporation of verification features into Bard and Bing AI represents a significant step toward greater user confidence and transparency. Given the rapid growth of AI, such safeguards are necessary to ensure that we are not swept up in the AI phenomenon and can place our trust where it is warranted. We can’t put the genie back in the bottle, but we can learn to contain it.
We hope this guide helped you understand the similarities and differences between Microsoft’s Content Credentials and Bard’s Double Check feature. Until we meet again!