<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
                    <title>Consumer Electronics News - Electronics News, Electronic Gadget News | Consumer Electronics | Electronic Gadgets</title>
            <link>https://techxplore.com/rss-feed/consumer-gadgets-news/</link>
            <language>en-us</language>
            <description>The latest news on consumer electronics, electronic gadgets and electronics.</description>

                            <item>
                    <title>AI overly affirms users asking for personal advice, study finds</title>
                    <description>In a new study published in Science, Stanford computer scientists showed that artificial intelligence large language models are overly agreeable, or sycophantic, when users solicit advice on interpersonal dilemmas. Even when users described harmful or illegal behavior, the models often affirmed their choices.</description>
                    <link>https://techxplore.com/news/2026-03-ai-overly-affirms-users-personal.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 26 Mar 2026 14:00:18 EDT</pubDate>
                    <guid isPermaLink="false">news693667021</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/deepseek-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Asking AI to act like an expert can make it less reliable</title>
                    <description>To get the best out of AI, some users tell it to provide answers as if it were an expert. Others ask it to adopt a persona, such as a safety monitor, to guide its responses. However, this approach can sometimes hurt performance, according to a study available on the arXiv preprint server.</description>
                    <link>https://techxplore.com/news/2026-03-ai-expert-reliable.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 25 Mar 2026 16:30:04 EDT</pubDate>
                    <guid isPermaLink="false">news693673246</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-takes-on-the-person.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>When smell meets VR: Scent technology blends up to 8 fragrances for immersive virtual experiences</title>
                    <description>A multi-channel wearable scent display developed at the Institute of Science Tokyo allows a user to experience multiple scents while exploring virtual environments. Based on virtual scenes, the device can blend up to eight fragrances in real time and deliver them with precise control of odor intensity. By synchronizing smell with virtual reality content, the device enables better immersion and realism, opening new possibilities for enhanced digital entertainment, realistic simulation training, and future digital scent technologies.</description>
                    <link>https://techxplore.com/news/2026-03-vr-scent-technology-blends-fragrances.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 25 Mar 2026 13:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news693653259</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/when-smell-meets-virtu.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Fragmented phone use—not total screen time—is the main driver of information overload, study finds</title>
                    <description>Amid hot discussion on screen time, social media use and the impact of digital devices on our well-being, a seven-month study from Aalto University in Finland sheds new light on what overwhelms users the most—and the results aren&#039;t what you might think.</description>
                    <link>https://techxplore.com/news/2026-03-fragmented-total-screen-main-driver.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Mar 2026 13:20:03 EDT</pubDate>
                    <guid isPermaLink="false">news693575281</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/fragmented-phone-use.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>LLMs and creativity: AI responses show less variety than human ones</title>
                    <description>Can using a large language model (LLM) make a person more creative? Prior work has shown that using LLMs can make creative outputs more homogeneous, but this homogenization could stem from the specific LLM used or from widespread use of the same model.</description>
                    <link>https://techxplore.com/news/2026-03-llms-creativity-ai-responses-variety.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Mar 2026 11:40:07 EDT</pubDate>
                    <guid isPermaLink="false">news693565572</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/llms-and-creativity-ai.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>SoulMate LLM accelerator evolves according to the specific characteristics of the user</title>
                    <description>While large language models (LLMs) like ChatGPT are adept at answering countless questions, they often remain unaware of a user&#039;s minor habits or previous conversational contexts. This is why AI, despite being deeply integrated into our daily lives, can still feel like a &quot;stranger.&quot; Overcoming these limitations, researchers at KAIST, led by Professor Hoi-Jun Yoo from the Graduate School of AI Semiconductors, have developed the world&#039;s first AI semiconductor, dubbed &quot;SoulMate,&quot; which learns and adapts to a user&#039;s speech style, preferences, and emotions in real-time—becoming a true &quot;digital soulmate.&quot;</description>
                    <link>https://techxplore.com/news/2026-03-soulmate-llm-evolves-specific-characteristics.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Mar 2026 09:40:06 EDT</pubDate>
                    <guid isPermaLink="false">news693044581</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/soulmate-llm-accelerat.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI chatbots&#039; tendency to always agree may reinforce delusions in vulnerable users</title>
                    <description>The integration of large language model-based AI chatbots into multiple facets of our everyday lives has opened us up to advantages that would have been considered impossible even a decade ago. The same development has, however, opened us up to unforeseen risks, including the impact that engaging with AI chatbots can have on people dealing with mental illness.</description>
                    <link>https://techxplore.com/news/2026-03-ai-chatbots-tendency-delusions-vulnerable.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 17 Mar 2026 11:00:09 EDT</pubDate>
                    <guid isPermaLink="false">news692963624</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2024/delusion.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Report calls for AI toy safety standards to protect young children</title>
                    <description>AI-powered toys that &quot;talk&quot; with young children should be more tightly regulated and carry new safety kitemarks, according to a report that warns they are not always developed with children&#039;s psychological safety in mind. The recommendation appears in the initial report from &quot;AI in the Early Years&quot;: a University of Cambridge project and the first systematic study of how Generative AI (GenAI) toys capable of human-like conversation may influence development in the critical years up to age five.</description>
                    <link>https://techxplore.com/news/2026-03-ai-toy-safety-standards-young.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 12 Mar 2026 20:00:02 EDT</pubDate>
                    <guid isPermaLink="false">news692529337</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/report-calls-for-ai-to.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI assistants can sway writers&#039; attitudes, even when they&#039;re watching for bias, experiments indicate</title>
                    <description>Artificial intelligence-powered writing tools such as autocomplete suggestions can definitely change the way people express themselves, but can they also change how they think? Cornell Tech researchers think so.</description>
                    <link>https://techxplore.com/news/2026-03-ai-sway-writers-attitudes-theyre.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 11 Mar 2026 14:00:11 EDT</pubDate>
                    <guid isPermaLink="false">news692352961</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2019/1-typing.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Gray screens and loading delays cut gaming time by 30%</title>
                    <description>You know it&#039;s time to put your phone down, but your thumb finds &quot;Play Again&quot; once more. In an age where digital entertainment never sleeps, willpower alone isn&#039;t enough. As more players, especially the younger generations, face physical and mental health challenges from excessive gaming, ethical design that prioritizes human well-being during development has become more urgent.</description>
                    <link>https://techxplore.com/news/2026-03-gray-screens-delays-gaming.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 03 Mar 2026 11:20:04 EST</pubDate>
                    <guid isPermaLink="false">news691758361</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/gray-screens-and-loadi.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>HEART benchmark assesses ability of LLMs and humans to offer emotional support</title>
                    <description>Large language models (LLMs), artificial intelligence (AI) systems that can process human language and generate texts in response to specific user queries, are now used daily by a growing number of people worldwide. While initially these models were primarily used to quickly source information or produce texts for specific uses, some people have now also started approaching the models with personal issues or concerns.</description>
                    <link>https://techxplore.com/news/2026-02-heart-benchmark-ability-llms-humans.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 24 Feb 2026 11:40:01 EST</pubDate>
                    <guid isPermaLink="false">news691063660</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/new-tool-to-assess-the.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>How eyes affect our perception of a humanoid robot&#039;s mind</title>
                    <description>Eyes are said to be the mirror of the soul. Eyes and gaze direction guide attention, evoke emotions and activate the brain&#039;s social perception mechanisms. Researchers at Tampere University and the University of Bremen conducted a study examining how people perceive the minds of humanoid robots. Mind perception refers to the way humans detect and infer that other people, beings or even objects possess consciousness, emotions and cognitive states.</description>
                    <link>https://techxplore.com/news/2026-02-eyes-affect-perception-humanoid-robot.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sat, 21 Feb 2026 11:00:05 EST</pubDate>
                    <guid isPermaLink="false">news690562316</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/eyes-can-affect-our-pe.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI chatbots provide less-accurate information to vulnerable users, study shows</title>
                    <description>Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person&#039;s background or location. However, new research from MIT&#039;s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.</description>
                    <link>https://techxplore.com/news/2026-02-ai-chatbots-accurate-vulnerable-users.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Fri, 20 Feb 2026 13:20:01 EST</pubDate>
                    <guid isPermaLink="false">news690808412</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/study-ai-chatbots-prov.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Most AI bots lack basic safety disclosures, study finds</title>
                    <description>Many people use AI chatbots to plan meals and write emails, AI-enhanced web browsers to book travel and buy tickets, and workplace AI to generate invoices and performance reports. However, a new study of the &quot;AI agent ecosystem&quot; suggests that as these AI bots rapidly become part of everyday life, basic safety disclosure is &quot;dangerously lagging.&quot;</description>
                    <link>https://techxplore.com/news/2026-02-ai-bots-lack-basic-safety.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Thu, 19 Feb 2026 20:00:06 EST</pubDate>
                    <guid isPermaLink="false">news690710462</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/ai-chatbot-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Laughter reveals how we use AI at home</title>
                    <description>Voice assistants such as Alexa are often marketed as smart tools that streamline everyday life. But once the technology moves into people&#039;s homes, interest quickly fades. This is shown by new research in which laughter is used as a key to understanding how people actually use—and understand—artificial intelligence in everyday life. The paper is published in the Proceedings of the 13th International Conference on Human-Agent Interaction.</description>
                    <link>https://techxplore.com/news/2026-02-laughter-reveals-ai-home.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 11:02:32 EST</pubDate>
                    <guid isPermaLink="false">news690634922</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/laughter-reveals-how-w.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>People are overconfident about spotting AI faces, study finds</title>
                    <description>Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated. With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organizations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.</description>
                    <link>https://techxplore.com/news/2026-02-people-overconfident-ai.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 11:00:49 EST</pubDate>
                    <guid isPermaLink="false">news690634801</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/people-are-overconfide.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Personalization features can make LLMs more agreeable, potentially creating a virtual echo chamber</title>
                    <description>Many of the latest large language models (LLMs) are designed to remember details from past conversations or store user profiles, enabling these models to personalize responses. But researchers from MIT and Penn State University found that, over long conversations, such personalization features often increase the likelihood an LLM will become overly agreeable or begin mirroring the individual&#039;s point of view.</description>
                    <link>https://techxplore.com/news/2026-02-personalization-features-llms-agreeable-potentially.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 18 Feb 2026 09:40:02 EST</pubDate>
                    <guid isPermaLink="false">news690629803</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/personalization-featur-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>LLMs violate boundaries during mental health dialogues, study finds</title>
                    <description>Artificial intelligence (AI) agents, particularly those based on large language models (LLMs) like the conversational platform ChatGPT, are now widely used daily by numerous people worldwide. LLMs can generate texts that are highly realistic, to the point that they could sometimes be mistaken for texts written by humans.</description>
                    <link>https://techxplore.com/news/2026-02-llms-violate-boundaries-mental-health.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sun, 15 Feb 2026 10:50:01 EST</pubDate>
                    <guid isPermaLink="false">news690113534</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/llms-violate-boundarie.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>What chatbots can teach humans about empathy</title>
                    <description>Over half of U.S. adults are using large language models (LLMs)—such as ChatGPT, Gemini and Copilot—in some capacity. Whether using artificial intelligence to create grocery lists, turn oneself into a Muppets character or divulge one&#039;s deepest, darkest secrets, humans are relying more on AI models in their everyday lives, possibly because AI chatbots have been shown to generate responses that make people feel validated, seen and heard.</description>
                    <link>https://techxplore.com/news/2026-02-chatbots-humans-empathy.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 11 Feb 2026 13:01:50 EST</pubDate>
                    <guid isPermaLink="false">news690037261</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/illustrate-interaction.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI decision aids aren&#039;t neutral: Why some users become easier to mislead</title>
                    <description>Guidance based on artificial intelligence (AI) may be uniquely placed to foster biases in humans, leading to less effective decision making, say researchers, who found that people with a positive view of AI may be at higher risk of being misled by AI tools. The study, titled &quot;Examining Human Reliance on Artificial Intelligence in Decision Making,&quot; is published in Scientific Reports.</description>
                    <link>https://techxplore.com/news/2026-02-ai-decision-aids-neutral-users.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 09 Feb 2026 16:42:43 EST</pubDate>
                    <guid isPermaLink="false">news689877722</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/why-relying-on-ai-may.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>How much does chatbot bias influence users? A lot, it turns out</title>
                    <description>Customers are 32% more likely to buy a product after reading a review summary generated by a chatbot than after reading the original review written by a human. That&#039;s because large language models introduce bias, in this case a positive framing, in summaries. That, in turn, affects users&#039; behavior.</description>
                    <link>https://techxplore.com/news/2026-02-chatbot-bias-users-lot.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 09 Feb 2026 12:00:02 EST</pubDate>
                    <guid isPermaLink="false">news689860434</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/how-much-does-chatbot.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Say what&#039;s on your mind, and AI can tell what kind of person you are</title>
                    <description>If you say a few words, generative AI will understand who you are&#8212;maybe even better than your close family and friends. A new University of Michigan study found that widely available generative AI models (e.g., ChatGPT, Claude, LLaMa) can predict personality, key behaviors and daily emotions as or even more accurately than those closest to you. The findings appear in the journal Nature Human Behaviour.</description>
                    <link>https://techxplore.com/news/2026-01-mind-ai-kind-person.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Fri, 30 Jan 2026 12:32:25 EST</pubDate>
                    <guid isPermaLink="false">news688998721</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/say-whats-on-your-mind.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI can generate a feeling of intimacy that exceeds human connections</title>
                    <description>People can develop emotional closeness to artificial intelligence (AI)—under certain conditions, even more so than to other people. This is shown by a new study conducted by a research team led by Prof. Dr. Markus Heinrichs and Dr. Tobias Kleinert from the Department of Psychology at the University of Freiburg and Prof. Dr. Bastian Schiller from Heidelberg University&#039;s Institute of Psychology. Participants felt a sense of closeness especially when they did not know that they were communicating with AI. The results have been published in Communications Psychology.</description>
                    <link>https://techxplore.com/news/2026-01-ai-generate-intimacy-exceeds-human.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 28 Jan 2026 15:33:46 EST</pubDate>
                    <guid isPermaLink="false">news688836781</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/artificial-intelligenc-9.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Used EVs currently offer car buyers lowest lifetime cost of ownership, study shows</title>
                    <description>Now is a great time for anyone who&#039;s shopping for a used car to consider an electric vehicle, according to new research from the University of Michigan.</description>
                    <link>https://techxplore.com/news/2026-01-evs-car-buyers-lowest-lifetime.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 27 Jan 2026 10:00:06 EST</pubDate>
                    <guid isPermaLink="false">news688725902</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/used-evs-currently-off.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Low-cost system turns smartphones into emergency radiation detectors</title>
                    <description>Prompt, individual-based dose assessment is essential to protect people from the negative consequences of radiation exposure after large-scale nuclear or radiological incidents. However, traditional dosimetry methods often require expensive equipment or complex laboratory analysis.</description>
                    <link>https://techxplore.com/news/2026-01-smartphones-emergency-detectors.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 27 Jan 2026 09:24:26 EST</pubDate>
                    <guid isPermaLink="false">news688728242</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/low-cost-system-turns.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>To explain or not? Online dating experiment shows need for AI transparency depends on user expectation</title>
                    <description>Artificial intelligence (AI) is said to be a &quot;black box,&quot; with its logic obscured from human understanding—but how much does the average user actually care to know how AI works?</description>
                    <link>https://techxplore.com/news/2026-01-online-dating-ai-transparency-user.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Wed, 21 Jan 2026 16:47:47 EST</pubDate>
                    <guid isPermaLink="false">news688236421</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/to-explain-or-not-need.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI &#039;CHEF&#039; could help those with cognitive declines complete home tasks</title>
                    <description>In the United States, 11% of adults over age 45 self-report some cognitive decline, which may impact their ability to care for themselves and perform tasks such as cooking or paying bills. A team of Washington University in St. Louis researchers has integrated two novel vision-language models to create a potential artificial intelligence (AI) assistant that could help people remain independent.</description>
                    <link>https://techxplore.com/news/2026-01-ai-chef-cognitive-declines-home.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Tue, 13 Jan 2026 13:10:03 EST</pubDate>
                    <guid isPermaLink="false">news687531792</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-chef-could-help-tho.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Self-driving cars could prevent over 1 million road injuries across the US by 2035</title>
                    <description>Autonomous vehicles could dramatically reduce traffic accidents and injuries on U.S. roads. Drawing on historical data and current trends, a recent JAMA Surgery study projected that self-driving cars could prevent more than 1 million injuries between 2025 and 2035, resulting in a 3.6% reduction in traffic-related injuries over the next decade.</description>
                    <link>https://techxplore.com/news/2026-01-cars-million-road-injuries.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Sun, 04 Jan 2026 11:30:01 EST</pubDate>
                    <guid isPermaLink="false">news686575327</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/autonomous-vehicles-ma.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Feral AI gossip with the potential to spread damage and shame will become more frequent, researchers warn</title>
                    <description>&quot;Feral&quot; gossip spread via AI bots is likely to become more frequent and pervasive, causing reputational damage and shame, humiliation, anxiety, and distress, researchers have warned.</description>
                    <link>https://techxplore.com/news/2025-12-feral-ai-gossip-potential-shame.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 22 Dec 2025 16:18:37 EST</pubDate>
                    <guid isPermaLink="false">news685642681</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/ai-bot-gossip.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Washing machine filter captures microfibers as small as 20 micrometers in size</title>
                    <description>A single laundry load containing synthetic clothing can release thousands of plastic microfibers from nylon, acrylic and polyester materials. Lab testing of a South Australian-made washing machine filter at Flinders University shows it can be a useful new way to help protect waterways from polyester and other synthetic microparticles.</description>
                    <link>https://techxplore.com/news/2025-12-machine-filter-captures-microfibers-small.html</link>
                    <category>Consumer &amp; Gadgets</category>                    <pubDate>Mon, 22 Dec 2025 12:47:53 EST</pubDate>
                    <guid isPermaLink="false">news685630021</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/device-filters-microfi-1.jpg" width="90" height="90" />
                                    </item>
                        </channel>
</rss>