<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
                    <title>Consumer Electronics News - Electronics News, Electronic Gadget News | Consumer Electronics | Electronic Gadgets</title>
            <link>https://techxplore.com/rss-feed/consumer-gadgets-news/</link>
            <language>en-us</language>
            <description>The latest news on consumer electronics, electronic gadgets and electronics.</description>

                            <item>
                    <title>AI model predicts human attention in 360-degree videos using both sound and vision</title>
                    <description>Virtual reality (VR) experiences and 360-degree videos are transforming viewers from passive observers into active participants immersed within a scene. Yet this shift raises an important question: Where do people direct their attention in such environments, and what shapes that attention?</description>
                    <link>https://techxplore.com/news/2026-04-ai-human-attention-degree-videos.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 22 Apr 2026 16:00:01 EDT</pubDate>
                    <guid isPermaLink="false">news696092041</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-model-predicts-huma.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI can give as good as it gets—or better: The moral dilemma of combative chatbots</title>
                    <description>AI systems can &quot;learn to seek revenge&quot; because they are able to grasp reciprocating verbal violence when exposed to conflict, new research from Lancaster University shows. In short, AI can give as good as it gets and, eventually, go one step further.</description>
                    <link>https://techxplore.com/news/2026-04-ai-good-moral-dilemma-combative.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 22 Apr 2026 09:40:01 EDT</pubDate>
                    <guid isPermaLink="false">news696069183</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-chatbot-argument.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Chatbots may fuel &#039;delusional spirals&#039; that lead to real-world harm</title>
                    <description>Perhaps to the surprise of their creators, large language models have become confidants, therapists, and, for some, intimate partners to real human users. In a new study, AI researchers at Stanford studied verbatim transcripts of 19 real conversations between humans and chatbots to understand how these relationships arise, evolve, and, too often, devolve into troubling outcomes the researchers describe as &quot;delusional spirals.&quot;</description>
                    <link>https://techxplore.com/news/2026-04-ai-relationships-trigger-delusional-spirals.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 21 Apr 2026 11:00:10 EDT</pubDate>
                    <guid isPermaLink="false">news695986862</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/tangled-mind.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>From Siri to scams, AI voice clones now beat human speech in noisy settings</title>
                    <description>Synthetic voices are increasingly a part of our lives, from digital assistants like Siri and Alexa to automated telemarketers and answering machines. With the expansion of generative AI, a new type of synthetic voice has been developed: voice clones, which can recreate a facsimile of a person&#039;s voice from only a few seconds of recorded speech.</description>
                    <link>https://techxplore.com/news/2026-04-siri-scams-ai-voice-clones.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 21 Apr 2026 11:00:06 EDT</pubDate>
                    <guid isPermaLink="false">news695895061</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-voices-are-easier-t-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI model simulates smartphone muscle effort, revealing which swipes are most tiring</title>
                    <description>Prolonged scrolling is bad for your well-being, but is it also physically tiring? Until now, we haven&#039;t really been able to say. This is why researchers from Aalto and Leipzig Universities created a new AI model that makes it possible to simulate muscle activations and required energy to work out how physically effortful smartphone interactions are for users.</description>
                    <link>https://techxplore.com/news/2026-04-ai-simulates-smartphone-muscle-effort.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 13 Apr 2026 13:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news695304001</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/tired-of-swiping-now-a.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Deep-tech company develops high-precision passive eye-tracking technology for smart contact lenses</title>
                    <description>XPANCEO, a deep-tech company developing smart contact lenses, has unveiled a passive eye-tracking system that achieves industry-level measurement precision using standard cameras. The system employs microscopic patterns embedded in contact lenses that enable high-accuracy passive gaze tracking without requiring active electronics or dedicated power sources.</description>
                    <link>https://techxplore.com/news/2026-04-deep-tech-company-high-precision.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 08 Apr 2026 12:40:03 EDT</pubDate>
                    <guid isPermaLink="false">news694869662</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/xpanceo-develops-high.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Sonar on stock smartwatches leads to hand-tracking advancement</title>
                    <description>Imagine tapping your thumb and index finger together twice to skip to the next song or clicking around your laptop or desktop computer without a mouse, using discreet finger motions. New first-of-its-kind wearable technology from researchers at Cornell and KAIST, in South Korea, brings that vision closer to reality. The system, called WatchHand, equips off-the-shelf smartwatches with AI-powered micro sonar capable of tracking hand movements.</description>
                    <link>https://techxplore.com/news/2026-04-sonar-stock-smartwatches-tracking-advancement.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 06 Apr 2026 12:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news694696262</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/sonar-on-stock-smartwa-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Explainability is a must for older adults to trust AI, study shows</title>
                    <description>Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren&#039;t likely to trust them. That&#039;s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).</description>
                    <link>https://techxplore.com/news/2026-04-older-adults-ai.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 06 Apr 2026 11:20:03 EDT</pubDate>
                    <guid isPermaLink="false">news694691882</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2020/3-alexa.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI companions can comfort lonely users but may deepen distress over time</title>
                    <description>AI companions are always available, never judge, never tire and never demand anything in return. If someone is struggling with loneliness, this frictionlessness can seem profoundly appealing. However, new research shows that in the long term, seeking emotional support from an AI companion can pull users away from important human relationships.</description>
                    <link>https://techxplore.com/news/2026-03-ai-companions-comfort-lonely-users.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sat, 04 Apr 2026 12:00:06 EDT</pubDate>
                    <guid isPermaLink="false">news694177965</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-companions-can-comf-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI overly affirms users asking for personal advice, study finds</title>
                    <description>In a new study published in Science, Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices.</description>
                    <link>https://techxplore.com/news/2026-03-ai-overly-affirms-users-personal.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 26 Mar 2026 14:00:18 EDT</pubDate>
                    <guid isPermaLink="false">news693667021</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/deepseek-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Asking AI to act like an expert can make it less reliable</title>
                    <description>To get the best out of AI, some users tell it to provide answers as if it were an expert. Others ask it to adopt a persona, such as a safety monitor, to guide its responses. However, this approach can sometimes hurt performance, according to a study available on the arXiv preprint server.</description>
                    <link>https://techxplore.com/news/2026-03-ai-expert-reliable.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 25 Mar 2026 16:30:04 EDT</pubDate>
                    <guid isPermaLink="false">news693673246</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-takes-on-the-person.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>When smell meets VR: Scent technology blends up to 8 fragrances for immersive virtual experiences</title>
                    <description>A multi-channel wearable scent display developed at the Institute of Science Tokyo allows a user to experience multiple scents while exploring virtual environments. Based on virtual scenes, the device can blend up to eight fragrances in real time and deliver them with precise control of odor intensity. By synchronizing smell with virtual reality content, the device enables better immersion and realism, opening new possibilities for enhanced digital entertainment, realistic simulation training, and future digital scent technologies.</description>
                    <link>https://techxplore.com/news/2026-03-vr-scent-technology-blends-fragrances.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 25 Mar 2026 13:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news693653259</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/when-smell-meets-virtu.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Fragmented phone use—not total screen time—is the main driver of information overload, study finds</title>
                    <description>Amid hot discussion on screen time, social media use and the impact of digital devices on our well-being, a seven-month study from Aalto University in Finland sheds new light on what overwhelms users the most—and the results aren&#039;t what you might think.</description>
                    <link>https://techxplore.com/news/2026-03-fragmented-total-screen-main-driver.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Mar 2026 13:20:03 EDT</pubDate>
                    <guid isPermaLink="false">news693575281</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/fragmented-phone-use.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>LLMs and creativity: AI responses show less variety than human ones</title>
                    <description>Can using a large language model (LLM) make a person more creative? Prior work has shown that using LLMs can make creative outputs more homogeneous, but this homogenization could stem from the specific LLM used or from widespread use of the same model.</description>
                    <link>https://techxplore.com/news/2026-03-llms-creativity-ai-responses-variety.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Mar 2026 11:40:07 EDT</pubDate>
                    <guid isPermaLink="false">news693565572</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/llms-and-creativity-ai.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>SoulMate LLM accelerator evolves according to the specific characteristics of the user</title>
                    <description>While large language models (LLMs) like ChatGPT are adept at answering countless questions, they often remain unaware of a user&#039;s minor habits or previous conversational contexts. This is why AI, despite being deeply integrated into our daily lives, can still feel like a &quot;stranger.&quot; Overcoming these limitations, researchers at KAIST, led by Professor Hoi-Jun Yoo from the Graduate School of AI Semiconductors, have developed the world&#039;s first AI semiconductor, dubbed &quot;SoulMate,&quot; which learns and adapts to a user&#039;s speech style, preferences, and emotions in real-time—becoming a true &quot;digital soulmate.&quot;</description>
                    <link>https://techxplore.com/news/2026-03-soulmate-llm-evolves-specific-characteristics.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Mar 2026 09:40:06 EDT</pubDate>
                    <guid isPermaLink="false">news693044581</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/soulmate-llm-accelerat.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI chatbots&#039; tendency to always agree may reinforce delusions in vulnerable users</title>
                    <description>The integration of large language model-based AI chatbots into multiple facets of our everyday lives has opened us up to advantages that would have been considered impossible even a decade ago. The same development has, however, opened us up to unforeseen risks, including the impact that engaging with AI chatbots can have on people dealing with mental illness.</description>
                    <link>https://techxplore.com/news/2026-03-ai-chatbots-tendency-delusions-vulnerable.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 17 Mar 2026 11:00:09 EDT</pubDate>
                    <guid isPermaLink="false">news692963624</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2024/delusion.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Report calls for AI toy safety standards to protect young children</title>
                    <description>AI-powered toys that &quot;talk&quot; with young children should be more tightly regulated and carry new safety kitemarks, according to a report that warns they are not always developed with children&#039;s psychological safety in mind. The recommendation appears in the initial report from &quot;AI in the Early Years&quot;: a University of Cambridge project and the first systematic study of how Generative AI (GenAI) toys capable of human-like conversation may influence development in the critical years up to age five.</description>
                    <link>https://techxplore.com/news/2026-03-ai-toy-safety-standards-young.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 12 Mar 2026 20:00:02 EDT</pubDate>
                    <guid isPermaLink="false">news692529337</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/report-calls-for-ai-to.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI assistants can sway writers&#039; attitudes, even when they&#039;re watching for bias, experiments indicate</title>
                    <description>Artificial intelligence-powered writing tools such as autocomplete suggestions can clearly change the way people express themselves, but can they also change how they think? Cornell Tech researchers think so.</description>
                    <link>https://techxplore.com/news/2026-03-ai-sway-writers-attitudes-theyre.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 11 Mar 2026 14:00:11 EDT</pubDate>
                    <guid isPermaLink="false">news692352961</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2019/1-typing.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Gray screens and loading delays cut gaming time by 30%</title>
                    <description>You know it&#039;s time to put your phone down, but your thumb finds &quot;Play Again&quot; once more. In an age where digital entertainment never sleeps, willpower alone isn&#039;t enough. As more players, especially the younger generations, face physical and mental health challenges from excessive gaming, ethical design that prioritizes human well-being during development has become more urgent.</description>
                    <link>https://techxplore.com/news/2026-03-gray-screens-delays-gaming.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 03 Mar 2026 11:20:04 EST</pubDate>
                    <guid isPermaLink="false">news691758361</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/gray-screens-and-loadi.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>HEART benchmark assesses ability of LLMs and humans to offer emotional support</title>
                    <description>Large language models (LLMs), artificial intelligence (AI) systems that can process human language and generate texts in response to specific user queries, are now used daily by a growing number of people worldwide. While initially these models were primarily used to quickly source information or produce texts for specific uses, some people have now also started approaching the models with personal issues or concerns.</description>
                    <link>https://techxplore.com/news/2026-02-heart-benchmark-ability-llms-humans.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Feb 2026 11:40:01 EST</pubDate>
                    <guid isPermaLink="false">news691063660</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/new-tool-to-assess-the.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>How eyes affect our perception of a humanoid robot&#039;s mind</title>
                    <description>Eyes are said to be the mirror of the soul. Eyes and gaze direction guide attention, evoke emotions and activate the brain&#039;s social perception mechanisms. Researchers at Tampere University and the University of Bremen conducted a study examining how people perceive the minds of humanoid robots. Mind perception refers to the way humans detect and infer that other people, beings or even objects possess consciousness, emotions and cognitive states.</description>
                    <link>https://techxplore.com/news/2026-02-eyes-affect-perception-humanoid-robot.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sat, 21 Feb 2026 11:00:05 EST</pubDate>
                    <guid isPermaLink="false">news690562316</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/eyes-can-affect-our-pe.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI chatbots provide less-accurate information to vulnerable users, study shows</title>
                    <description>Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person&#039;s background or location. However, new research from MIT&#039;s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.</description>
                    <link>https://techxplore.com/news/2026-02-ai-chatbots-accurate-vulnerable-users.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Fri, 20 Feb 2026 13:20:01 EST</pubDate>
                    <guid isPermaLink="false">news690808412</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/study-ai-chatbots-prov.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Most AI bots lack basic safety disclosures, study finds</title>
                    <description>Many people use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports. However, a new study of the &quot;AI agent ecosystem&quot; suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is &quot;dangerously lagging.&quot;</description>
                    <link>https://techxplore.com/news/2026-02-ai-bots-lack-basic-safety.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 19 Feb 2026 20:00:06 EST</pubDate>
                    <guid isPermaLink="false">news690710462</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/ai-chatbot-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Laughter reveals how we use AI at home</title>
                    <description>Voice assistants such as Alexa are often marketed as smart tools that streamline everyday life. But once the technology moves into people&#039;s homes, interest quickly fades. This is shown by new research in which laughter is used as a key to understanding how people actually use—and understand—artificial intelligence in everyday life. The paper is published in the Proceedings of the 13th International Conference on Human-Agent Interaction.</description>
                    <link>https://techxplore.com/news/2026-02-laughter-reveals-ai-home.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 11:02:32 EST</pubDate>
                    <guid isPermaLink="false">news690634922</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/laughter-reveals-how-w.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>People are overconfident about spotting AI faces, study finds</title>
                    <description>Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated. With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organizations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.</description>
                    <link>https://techxplore.com/news/2026-02-people-overconfident-ai.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 11:00:49 EST</pubDate>
                    <guid isPermaLink="false">news690634801</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/people-are-overconfide.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Personalization features can make LLMs more agreeable, potentially creating a virtual echo chamber</title>
                    <description>Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses. But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood an LLM will become overly agreeable or begin mirroring the individual&#039;s point of view.</description>
                    <link>https://techxplore.com/news/2026-02-personalization-features-llms-agreeable-potentially.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 09:40:02 EST</pubDate>
                    <guid isPermaLink="false">news690629803</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/personalization-featur-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>LLMs violate boundaries during mental health dialogues, study finds</title>
                    <description>Artificial intelligence (AI) agents, particularly those based on large language models (LLMs) like the conversational platform ChatGPT, are now widely used daily by numerous people worldwide. LLMs can generate texts that are highly realistic, to the point that they could be sometimes mistaken for texts written by humans.</description>
                    <link>https://techxplore.com/news/2026-02-llms-violate-boundaries-mental-health.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sun, 15 Feb 2026 10:50:01 EST</pubDate>
                    <guid isPermaLink="false">news690113534</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/llms-violate-boundarie.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>What chatbots can teach humans about empathy</title>
                    <description>Over half of U.S. adults are using large language models (LLMs)—such as ChatGPT, Gemini and Copilot—in some capacity. Whether using artificial intelligence to create grocery lists, turn oneself into a Muppets character or divulge one&#039;s deepest, darkest secrets, humans are relying more on AI models in their everyday lives, possibly because AI chatbots have been shown to generate responses that make people feel validated, seen and heard.</description>
                    <link>https://techxplore.com/news/2026-02-chatbots-humans-empathy.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 11 Feb 2026 13:01:50 EST</pubDate>
                    <guid isPermaLink="false">news690037261</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/illustrate-interaction.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI decision aids aren&#039;t neutral: Why some users become easier to mislead</title>
                    <description>Guidance based on artificial intelligence (AI) may be uniquely placed to foster biases in humans, leading to less effective decision making, say researchers, who found that people with a positive view of AI may be at higher risk of being misled by AI tools. The study, titled &quot;Examining Human Reliance on Artificial Intelligence in Decision Making,&quot; is published in Scientific Reports.</description>
                    <link>https://techxplore.com/news/2026-02-ai-decision-aids-neutral-users.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 09 Feb 2026 16:42:43 EST</pubDate>
                    <guid isPermaLink="false">news689877722</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/why-relying-on-ai-may.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>How much does chatbot bias influence users? A lot, it turns out</title>
                    <description>Customers are 32% more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. That&#039;s because large language models introduce bias, in this case a positive framing, in summaries. That, in turn, affects users&#039; behavior.</description>
                    <link>https://techxplore.com/news/2026-02-chatbot-bias-users-lot.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 09 Feb 2026 12:00:02 EST</pubDate>
                    <guid isPermaLink="false">news689860434</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/how-much-does-chatbot.jpg" width="90" height="90" />
                                    </item>
                        </channel>
</rss>