<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
                    <title>Security News - Software vulnerabilities, data leaks, malware, viruses</title>
            <link>https://techxplore.com/rss-feed/security-news/</link>
            <language>en-us</language>
            <description>The latest news on cyber security, network security, software vulnerabilities, data leaks, malware, and viruses</description>

                            <item>
                    <title>Memristor chip combines security and compute-in-memory for edge devices</title>
                    <description>A cross-institutional research team has developed Co-Located Authentication and Processing (CLAP), a privacy-preserving system that overcomes the trade-off between security and performance in edge computing devices. The study, titled &quot;Privacy-preserving data analysis using a memristor chip with co-located authentication and processing,&quot; is published in Science Advances. The team was led by Professor Ngai Wong and Dr. Zhengwu Liu from the Department of Electrical and Computer Engineering in the Faculty of Engineering at The University of Hong Kong (HKU), in collaboration with Tsinghua University and the Southern University of Science and Technology.</description>
                    <link>https://techxplore.com/news/2026-04-memristor-chip-combines-memory-edge.html</link>
                    <category>Hardware</category>                    <pubDate>Mon, 06 Apr 2026 12:20:04 EDT</pubDate>
                    <guid isPermaLink="false">news694695661</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/privacy-preserving-sys-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI blueprints can be stolen with a single small antenna</title>
                    <description>From smartphone facial recognition to autonomous vehicles, artificial intelligence (AI) has long been treated as a protected black box. However, a joint research team from KAIST and international institutions has uncovered a new security threat capable of peeking at AI blueprints from behind walls. The team also presented corresponding defense technologies. This discovery is expected to be utilized in strengthening AI security across various sectors, including autonomous driving, health care, and finance.</description>
                    <link>https://techxplore.com/news/2026-04-ai-blueprints-stolen-small-antenna.html</link>
                    <category>Security</category>                    <pubDate>Wed, 01 Apr 2026 15:40:02 EDT</pubDate>
                    <guid isPermaLink="false">news694274582</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-blueprints-stolen-w.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI systems lack a fundamental property of human cognition: Understanding this gap may matter for safety</title>
                    <description>When a person reaches across a table to pass the salt, their brain is doing something far more complex than recognizing a request and executing a movement. It is drawing on a lifetime of bodily experience—where their hand is in space, what a saltshaker feels like, the social awareness of who asked and why. In a fraction of a second, their body and brain are working as one.</description>
                    <link>https://techxplore.com/news/2026-04-ai-lack-fundamental-property-human.html</link>
                    <category>Security</category>                    <pubDate>Wed, 01 Apr 2026 11:00:05 EDT</pubDate>
                    <guid isPermaLink="false">news694252981</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2024/large-language-models.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>North Korea hackers suspected of attack on widely used software tool</title>
                    <description>Hackers linked to North Korea are suspected of an ambitious attack on an inconspicuous but widely used software package, Google analysts and other cybersecurity experts said Wednesday.</description>
                    <link>https://techxplore.com/news/2026-04-north-korea-hackers-widely-software.html</link>
                    <category>Security</category>                    <pubDate>Wed, 01 Apr 2026 04:40:02 EDT</pubDate>
                    <guid isPermaLink="false">news694235959</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2021/hacker.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Who is using differential privacy? A new registry aims to make it visible</title>
                    <description>When Apple discovers trending popular emojis, or when Google reports traffic at a busy restaurant, they&#039;re analyzing large datasets made up of individual people. Those people&#039;s personal information is systematically protected thanks in large part to research by Harvard computer scientists. Now, after two decades of work on the cryptography-adjacent mathematical framework known as differential privacy, researchers in the John A. Paulson School of Engineering and Applied Sciences have reached a key milestone in moving privacy best practices from academia into real-world applications.</description>
                    <link>https://techxplore.com/news/2026-03-differential-privacy-registry-aims-visible.html</link>
                    <category>Security</category>                    <pubDate>Tue, 31 Mar 2026 12:30:05 EDT</pubDate>
                    <guid isPermaLink="false">news694175201</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2020/privacy.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Vibrations in your skull may be your next password</title>
                    <description>A team led by Rutgers University researchers has developed a security system that could change how people log in to virtual and augmented reality platforms by eliminating passwords, personal identification numbers and eye scans and replacing them with something far more seamless.</description>
                    <link>https://techxplore.com/news/2026-03-vibrations-skull-password.html</link>
                    <category>Hi Tech &amp; Innovation</category>                    <pubDate>Tue, 31 Mar 2026 06:39:38 EDT</pubDate>
                    <guid isPermaLink="false">news694157757</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/vibrations-in-your-sku.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Researchers find training gaps impacting maritime cybersecurity readiness</title>
                    <description>Whether it&#039;s a fire or a flood, a ship&#039;s crew can only rely on itself and its training in emergencies at sea. The same is true for crews facing digital threats on oil tankers, cargo ships, and other commercial vessels.</description>
                    <link>https://techxplore.com/news/2026-03-gaps-impacting-maritime-cybersecurity-readiness.html</link>
                    <category>Security</category>                    <pubDate>Mon, 30 Mar 2026 07:10:01 EDT</pubDate>
                    <guid isPermaLink="false">news694071925</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2020/oiltanker.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Photon framework scales AI vulnerability discovery</title>
                    <description>Oak Ridge National Laboratory&#039;s Center for Artificial Intelligence Security Research (CAISER) is shining a light on AI vulnerabilities. While AI models offer tremendous economic, humanitarian and national security potential, they are also increasingly susceptible to exploitation. Identifying and characterizing these vulnerabilities has required considerable intellectual effort and specialized expertise.</description>
                    <link>https://techxplore.com/news/2026-03-photon-framework-scales-ai-vulnerability.html</link>
                    <category>Security</category>                    <pubDate>Sat, 28 Mar 2026 12:30:03 EDT</pubDate>
                    <guid isPermaLink="false">news693633970</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/photon-framework-scale.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Will a new border deal with the US open a backdoor into New Zealanders&#039; personal data?</title>
                    <description>Anyone who has recently traveled to the United States will be familiar with biometric checks—facial and fingerprint scans—used at the border. It is the same technology platform that is used in airports elsewhere in the world. New Zealand&#039;s passports, for instance, are among those that now carry encrypted biometric information, matched to a traveler&#039;s face as they pass through border smart gates.</description>
                    <link>https://techxplore.com/news/2026-03-border-backdoor-zealanders-personal.html</link>
                    <category>Security</category>                    <pubDate>Fri, 27 Mar 2026 10:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news693825541</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2024/airport-security.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Hybrid AI tool unmasks hidden digital abuse to transform forensic investigations</title>
                    <description>Researchers from the University of Huddersfield have developed a hybrid AI tool to detect patterns of psychological abuse, including coercive control, in a bid to transform digital forensic investigations and mental health research. This innovation was developed as a primary outcome of researcher Dhruv Patel&#039;s Ph.D. work under the mentorship of senior lecturer Dr. Anju Johnson and is designed to address a bottleneck in modern digital forensic investigations, integrating insights from a broader research team to ensure its real-world application.</description>
                    <link>https://techxplore.com/news/2026-03-hybrid-ai-tool-unmasks-hidden.html</link>
                    <category>Security</category>                    <pubDate>Wed, 25 Mar 2026 14:00:08 EDT</pubDate>
                    <guid isPermaLink="false">news693663961</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/hybrid-ai-tool-unmasks.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Thousands of websites are accidentally broadcasting sensitive data, study finds</title>
                    <description>Researchers have discovered a major security leak hiding in plain sight on the internet that could expose the personal data and financial records of millions of people. In a paper published on the arXiv preprint server, Nurullah Demir of Stanford University and colleagues analyzed 10 million websites to see how often API (application programming interface) credentials are exposed. These are digital keys or tokens that enable different software programs to communicate and are often used to process bank payments and access cloud storage.</description>
                    <link>https://techxplore.com/news/2026-03-thousands-websites-accidentally-sensitive.html</link>
                    <category>Internet</category>                    <pubDate>Wed, 25 Mar 2026 12:40:04 EDT</pubDate>
                    <guid isPermaLink="false">news693658041</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/thousands-of-websites.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI making cyber attacks costlier and more effective: Munich Re</title>
                    <description>Artificial intelligence is making cyberattacks increasingly sophisticated and costlier for businesses, reinsurer Munich Re said Wednesday, warning of methods ranging from highly personalized phishing emails to computer-generated, convincing fake identities.</description>
                    <link>https://techxplore.com/news/2026-03-ai-cyber-costlier-effective-munich.html</link>
                    <category>Security</category>                    <pubDate>Wed, 25 Mar 2026 10:40:01 EDT</pubDate>
                    <guid isPermaLink="false">news693653202</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/demonstrators-took-to.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>New framework addresses privacy, dignity risks posed by modern AI systems</title>
                    <description>In a new article, researchers introduce the capabilities approach–contextual integrity (CA-CI), a framework that addresses privacy and dignity risks posed by modern artificial intelligence (AI) systems, especially foundation models whose capabilities evolve across contexts and purposes. In a case study, they demonstrate how CA-CI can operationalize the European Union (EU)&#039;s AI Act&#039;s fundamental rights impact assessments, harm thresholds, and anticipatory governance. The article, by researchers at Carnegie Mellon University and the University of Michigan, is published in IEEE Security &amp; Privacy.</description>
                    <link>https://techxplore.com/news/2026-03-framework-privacy-dignity-posed-modern.html</link>
                    <category>Security</category>                    <pubDate>Tue, 24 Mar 2026 16:50:03 EDT</pubDate>
                    <guid isPermaLink="false">news693588841</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2022/artificial-intelligenc-51.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Study finds AI privacy leaks hinge on a few high-impact neural network weights</title>
                    <description>Researchers have discovered that some of the elements of AI neural networks that contribute to data-privacy vulnerabilities are also key to the performance of those models. The researchers used this new information to develop a technique that better balances performance and privacy protection in these models.</description>
                    <link>https://techxplore.com/news/2026-03-ai-privacy-leaks-hinge-high.html</link>
                    <category>Security</category>                    <pubDate>Tue, 24 Mar 2026 12:00:01 EDT</pubDate>
                    <guid isPermaLink="false">news693570229</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/researchers-find-priva.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Sensor chips help identify deepfakes by adding cryptographic signatures to camera data</title>
                    <description>AI-generated images and videos pose a threat to democratic processes and undermine trust within society. Researchers at ETH Zurich have now developed chip technology that enables verification of the authenticity of sensor data including images or videos. Their study is published in the journal Nature Electronics.</description>
                    <link>https://techxplore.com/news/2026-03-sensor-chips-deepfakes-adding-cryptographic.html</link>
                    <category>Hardware</category>                    <pubDate>Tue, 24 Mar 2026 10:30:10 EDT</pubDate>
                    <guid isPermaLink="false">news693565096</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/chips-designed-to-help.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Your smart home can be easily hacked. New safety standards will help, but stay vigilant</title>
                    <description>On a quiet suburban street, a modern Australian home wakes before its owners do.</description>
                    <link>https://techxplore.com/news/2026-03-smart-home-easily-hacked-safety.html</link>
                    <category>Security</category>                    <pubDate>Mon, 23 Mar 2026 14:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news693484587</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/your-smart-home-can-be.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>&#039;Neuron-freezing&#039; technique can stop LLMs from giving users unsafe responses</title>
                    <description>Researchers have identified key components in large language models (LLMs) that play a critical role in ensuring these AI systems provide safe responses to user queries. The researchers used these insights to develop and demonstrate AI training techniques that improve LLM safety while minimizing the &quot;alignment tax,&quot; meaning the AI becomes safer without significantly affecting performance.</description>
                    <link>https://techxplore.com/news/2026-03-neuron-technique-llms-users-unsafe.html</link>
                    <category>Security</category>                    <pubDate>Mon, 23 Mar 2026 12:10:01 EDT</pubDate>
                    <guid isPermaLink="false">news693484740</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ask-ai.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Study: &#039;Security fatigue&#039; may weaken digital defenses</title>
                    <description>From password resets and software updates to phishing alerts and cybersecurity trainings, today&#039;s workplace is filled with constant reminders about digital security. But new research led by the University at Albany&#039;s Massry School of Business suggests those well-intentioned safeguards may be having an unintended effect.</description>
                    <link>https://techxplore.com/news/2026-03-fatigue-weaken-digital-defenses.html</link>
                    <category>Security</category>                    <pubDate>Sat, 21 Mar 2026 14:00:01 EDT</pubDate>
                    <guid isPermaLink="false">news693058574</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/study-security-fatigue.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Engineers devise a way to prevent manufacturing shutdowns during cyberattacks</title>
                    <description>A professor of mechanical and aerospace engineering and a team of Rutgers students are proposing a means to defend manufacturers from cyberattacks—and ensure the uninterrupted production of mission-critical national security and infrastructure parts. Rajiv Malhotra, an associate professor in the Rutgers School of Engineering Department of Mechanical and Aerospace Engineering, proposes using a digital twin framework to improve manufacturing resilience to cyberattacks.</description>
                    <link>https://techxplore.com/news/2026-03-shutdowns-cyberattacks.html</link>
                    <category>Engineering</category>                    <pubDate>Fri, 20 Mar 2026 08:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news693212997</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/engineers-devise-a-way.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Study finds no single tool can fully protect online financial data</title>
                    <description>Research in the journal Electronic Government discusses the growing need for protecting one&#039;s personal financial data as the online world faces increasingly sophisticated cyber threats. The researchers argue that no single measure is sufficient to secure the modern financial ecosystem. As such, they set out a framework that combines technological tools, regulatory oversight, and individual responsibility to combat the problem.</description>
                    <link>https://techxplore.com/news/2026-03-tool-fully-online-financial.html</link>
                    <category>Security</category>                    <pubDate>Wed, 18 Mar 2026 13:20:10 EDT</pubDate>
                    <guid isPermaLink="false">news693058558</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2024/phishing.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI model trained on 14,000 Urdu news stories spots misinformation with 96% accuracy</title>
                    <description>A deep learning model trained on more than 14,000 Pakistani news articles can spot misinformation with 96% accuracy, according to a new report in the academic journal Scientific Reports. It&#039;s the most comprehensive artificial intelligence system yet for detecting fake news in Urdu, the world&#039;s 10th most spoken language with more than 170 million speakers worldwide.</description>
                    <link>https://techxplore.com/news/2026-03-ai-urdu-news-stories-misinformation.html</link>
                    <category>Security</category>                    <pubDate>Wed, 18 Mar 2026 12:40:01 EDT</pubDate>
                    <guid isPermaLink="false">news693055441</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/new-ai-could-stop-fake.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>From demons to mega behemoths: How &#039;monstrous&#039; scam networks are growing</title>
                    <description>New research led by the University of Portsmouth uncovers how scammers operate worldwide, dividing them into five &quot;monstrous&quot; categories. Published in the International Journal of Law, Crime and Justice, the study explores how the size of scam groups, specialized roles, and involvement of corrupt actors help scams work more effectively.</description>
                    <link>https://techxplore.com/news/2026-03-demons-mega-behemoths-monstrous-scam.html</link>
                    <category>Business</category>                    <pubDate>Wed, 18 Mar 2026 10:20:03 EDT</pubDate>
                    <guid isPermaLink="false">news693047462</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/hoodie-computer.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Grid vibrations: AI detects power supply cyberattacks in less than two seconds</title>
                    <description>Modern energy infrastructure is increasingly built as cyber-physical systems, in which physical power distribution and digital communication are closely tied together. While this digitalization boosts efficiency, it exposes electricity grids to sophisticated cybersecurity risks. To combat such threats, researchers have developed an artificial intelligence (AI) method that integrates network structure analysis with data tracking to identify complex attacks that conventional security systems might miss. Details are reported in the International Journal of Global Energy Issues.</description>
                    <link>https://techxplore.com/news/2026-03-grid-vibrations-ai-power-cyberattacks.html</link>
                    <category>Security</category>                    <pubDate>Mon, 16 Mar 2026 16:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news692890708</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/power-grid.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Why harmful content keeps reaching children online, and what advertising has to do with it</title>
                    <description>Children today can encounter harmful material online with alarming ease, including violent, sexual and self-harm content. While this is often treated as a moderation failure, the deeper cause is economic.</description>
                    <link>https://techxplore.com/news/2026-03-content-children-online-advertising.html</link>
                    <category>Business</category>                    <pubDate>Mon, 16 Mar 2026 13:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news692885581</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2023/kids-online-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI agents can autonomously coordinate propaganda campaigns without human direction</title>
                    <description>Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real.</description>
                    <link>https://techxplore.com/news/2026-03-ai-agents-autonomously-propaganda-campaigns.html</link>
                    <category>Security</category>                    <pubDate>Thu, 12 Mar 2026 14:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news692544181</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-agents-can-autonomo-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>&#039;Privacy by design&#039;: Tech protects against identity leaking during AI photo editing</title>
                    <description>Consumers, businesses, and institutions may soon have private, secure, and trustworthy generative AI tools for editing and sharing profile photos, ID images, and personal pictures without exposing their private identities to external platforms. Purdue University researchers Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty have developed the patent-pending system, which is utilized before and after photos are uploaded to an AI editing platform.</description>
                    <link>https://techxplore.com/news/2026-03-privacy-tech-identity-leaking-ai.html</link>
                    <category>Security</category>                    <pubDate>Thu, 12 Mar 2026 14:20:05 EDT</pubDate>
                    <guid isPermaLink="false">news692543641</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/privacy-by-design-tech.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI-powered defense system stops 5G cyber-attacks in a fraction of a second</title>
                    <description>An AI defense system has successfully detected and neutralized sophisticated 5G cyber-attacks in less than a tenth of a second, paving the way for more secure 5G and future 6G mobile networks, say researchers at the University of Surrey.</description>
                    <link>https://techxplore.com/news/2026-03-ai-powered-defense-5g-cyber.html</link>
                    <category>Security</category>                    <pubDate>Tue, 10 Mar 2026 11:20:07 EDT</pubDate>
                    <guid isPermaLink="false">news692359742</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/hacker.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Can people distinguish between AI-generated and human speech?</title>
                    <description>In a collaboration between Tianjin University and the Chinese University of Hong Kong, researchers led by Xiangbin Teng used behavioral and brain activity measures to explore whether people can distinguish between AI-generated and human speech. The researchers also assessed whether brief training improves this ability. This work is published in eNeuro.</description>
                    <link>https://techxplore.com/news/2026-03-people-distinguish-ai-generated-human.html</link>
                    <category>Security</category>                    <pubDate>Mon, 09 Mar 2026 13:00:02 EDT</pubDate>
                    <guid isPermaLink="false">news692264641</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2025/-listen-to-music.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>New &#039;negative light&#039; technology hides data transfers in plain sight</title>
                    <description>Engineers at UNSW Sydney and Monash have developed an innovative way of sending hidden information that&#039;s hard to intercept. Using a phenomenon known as &quot;negative luminescence,&quot; the system works by making signals blend perfectly into the background of natural heat radiation, such as that seen with a thermal camera.</description>
                    <link>https://techxplore.com/news/2026-03-negative-technology-plain-sight.html</link>
                    <category>Telecom</category>                    <pubDate>Mon, 09 Mar 2026 11:00:03 EDT</pubDate>
                    <guid isPermaLink="false">news692270762</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/new-negative-light-tec.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI fake-news detectors may look accurate but fail in real use, study finds</title>
                    <description>A dubious link from a friend. A headline too sensational to be true. A video that seems fake but you can&#039;t be sure. As online misinformation grows harder to detect, new artificial-intelligence tools promise to help us separate fact from fiction. But do they actually work?</description>
                    <link>https://techxplore.com/news/2026-03-ai-fake-news-detectors-accurate.html</link>
                    <category>Security</category>                    <pubDate>Mon, 09 Mar 2026 08:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news692262086</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/people-smartphones.jpg" width="90" height="90" />
                                    </item>
                        </channel>
</rss>