Google's AI chatbot Bard makes a factual error in its very first demo.
Google's AI chatbot Bard, a competitor to OpenAI's ChatGPT, was unveiled on Monday and is slated to become "more freely available to the public in the coming weeks." However, experts have noted that Bard made a factual error in its very first demo, so the bot isn't off to a fantastic start.
In a GIF posted by Google, Bard answers the prompt "What new discoveries from the James Webb Space Telescope can I share with my 9-year-old?" One of Bard's three bullet points claims the telescope "took the very first photos of a planet outside of our own solar system."
On Twitter, astronomers pointed out that this is false and that, as stated on NASA's website, the first photograph of an exoplanet was actually captured in 2004.
Astrophysicist Grant Tremblay tweeted, "For the record: JWST did not snap 'the very first image of a planet outside our solar system,'" adding that he was sure Bard would nonetheless be stunning.
The error was also noted by Bruce Macintosh, director of the UC Santa Cruz Observatories, who tweeted that, having imaged an exoplanet 14 years before JWST went into operation, he felt Google should have found a better example.
"I do enjoy and appreciate that one of the most powerful corporations on the globe is utilizing a JWST search to advertise their LLM," Tremblay continued in a subsequent tweet. Awesome! But despite seeming eerily impressive, ChatGPT, etc., are frequently *very confidently* incorrect. It will be interesting to watch if LLMs eventually self-correct.
As Tremblay points out, one of the main problems with AI chatbots like ChatGPT and Bard is their propensity to confidently assert false information as fact. Because they are essentially autocomplete engines, they frequently "hallucinate," or invent information.
Rather than querying a database of verified facts, they are trained on enormous corpora of text and learn statistical patterns that predict which word is likely to come next in a given sentence. They are probabilistic rather than deterministic, which has led one well-known AI professor to call them "bullshit generators."
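The "autocomplete" idea can be made concrete with a deliberately tiny sketch. This is purely illustrative (real LLMs use large neural networks, not bigram counts, and the toy corpus below is invented), but it shows the key point: the model picks a continuation based on how often words co-occurred in its training text, with no notion of whether the result is factually true.

```python
from collections import Counter, defaultdict
import random

# Invented toy "training text" for illustration only.
corpus = (
    "the telescope took the first image of an exoplanet . "
    "the telescope took the first spectrum of an exoplanet atmosphere ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# "first" was followed by both "image" and "spectrum" in the corpus,
# so the model chooses between them probabilistically -- it never
# consults any database of facts.
print(next_word("first"))
```

The output varies run to run, which is exactly the probabilistic (rather than deterministic) behavior described above.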
The internet is already full of inaccurate and misleading material, but Microsoft's and Google's ambition to use these models as search engines threatens to make the problem worse: there, the chatbots' answers take on the authority of a would-be all-knowing machine.
Microsoft, which demonstrated its new AI-powered Bing search engine yesterday, has tried to preempt these concerns by placing the onus on the user. The company's disclaimer reads, "Bing is powered by AI, so surprises and blunders are conceivable. Verify the information, and provide feedback so we can grow and learn."
"This underlines the significance of a thorough testing process, something we're kicking off this week with our Trusted Tester program," a Google representative told The Verge. To ensure that Bard's responses uphold a high standard for quality, safety, and information founded in real-world data, we'll combine external feedback with our own internal testing. https://ejtandemonium.com/