<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
            <title>Security News - Software vulnerabilities, data leaks, malware, viruses</title>
            <link>https://techxplore.com/rss-feed/security-news/</link>
            <language>en-us</language>
            <description>The latest news on cyber security, network security, software vulnerabilities, data leaks, malware, and viruses</description>

                            <item>
                    <title>Vibrations in your skull may be your next password</title>
                    <description>A team led by Rutgers University researchers has developed a security system that could change how people log in to virtual and augmented reality platforms by eliminating passwords, personal identification numbers and eye scans and replacing them with something far more seamless.</description>
                    <link>https://techxplore.com/news/2026-03-vibrations-skull-password.html</link>
                    <category>Hi Tech &amp; Innovation</category>
                    <pubDate>Tue, 31 Mar 2026 06:39:38 EDT</pubDate>
                    <guid isPermaLink="false">news694157757</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/vibrations-in-your-sku.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Hybrid AI tool unmasks hidden digital abuse to transform forensic investigations</title>
                    <description>Researchers from the University of Huddersfield have developed a hybrid AI tool to detect patterns of psychological abuse, including coercive control, in a bid to transform digital forensic investigations and mental health research. This innovation was developed as a primary outcome of researcher Dhruv Patel&#039;s Ph.D. work under the mentorship of senior lecturer Dr. Anju Johnson and is designed to address a bottleneck in modern digital forensic investigations, integrating insights from a broader research team to ensure its real-world application.</description>
                    <link>https://techxplore.com/news/2026-03-hybrid-ai-tool-unmasks-hidden.html</link>
                    <category>Security</category>
                    <pubDate>Wed, 25 Mar 2026 14:00:08 EDT</pubDate>
                    <guid isPermaLink="false">news693663961</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/hybrid-ai-tool-unmasks.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Thousands of websites are accidentally broadcasting sensitive data, study finds</title>
                    <description>Researchers have discovered a major security leak hiding in plain sight on the internet that could expose the personal data and financial records of millions of people. In a paper published on the arXiv preprint server, Nurullah Demir of Stanford University and colleagues analyzed 10 million websites to see how often API (application programming interface) credentials are exposed. These are digital keys or tokens that enable different software programs to communicate and are often used to process bank payments and access cloud storage.</description>
                    <link>https://techxplore.com/news/2026-03-thousands-websites-accidentally-sensitive.html</link>
                    <category>Internet</category>
                    <pubDate>Wed, 25 Mar 2026 12:40:04 EDT</pubDate>
                    <guid isPermaLink="false">news693658041</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/thousands-of-websites.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Study finds AI privacy leaks hinge on a few high-impact neural network weights</title>
                    <description>Researchers have discovered that some of the elements of AI neural networks that contribute to data-privacy vulnerabilities are also key to the performance of those models. The researchers used this new information to develop a technique that better balances performance and privacy protection in these models.</description>
                    <link>https://techxplore.com/news/2026-03-ai-privacy-leaks-hinge-high.html</link>
                    <category>Security</category>
                    <pubDate>Tue, 24 Mar 2026 12:00:01 EDT</pubDate>
                    <guid isPermaLink="false">news693570229</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/researchers-find-priva.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Sensor chips help identify deepfakes by adding cryptographic signatures to camera data</title>
                    <description>AI-generated images and videos pose a threat to democratic processes and undermine trust within society. Researchers at ETH Zurich have now developed chip technology that enables verification of the authenticity of sensor data including images or videos. Their study is published in the journal Nature Electronics.</description>
                    <link>https://techxplore.com/news/2026-03-sensor-chips-deepfakes-adding-cryptographic.html</link>
                    <category>Hardware</category>
                    <pubDate>Tue, 24 Mar 2026 10:30:10 EDT</pubDate>
                    <guid isPermaLink="false">news693565096</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/chips-designed-to-help.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>&#039;Neuron-freezing&#039; technique can stop LLMs from giving users unsafe responses</title>
                    <description>Researchers have identified key components in large language models (LLMs) that play a critical role in ensuring these AI systems provide safe responses to user queries. The researchers used these insights to develop and demonstrate AI training techniques that improve LLM safety while minimizing the &quot;alignment tax,&quot; meaning the AI becomes safer without significantly affecting performance.</description>
                    <link>https://techxplore.com/news/2026-03-neuron-technique-llms-users-unsafe.html</link>
                    <category>Security</category>
                    <pubDate>Mon, 23 Mar 2026 12:10:01 EDT</pubDate>
                    <guid isPermaLink="false">news693484740</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ask-ai.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI agents can autonomously coordinate propaganda campaigns without human direction</title>
                    <description>Imagine it is two weeks before a major election in a closely contested state. A controversial ballot measure is on the line. Suddenly, a wave of posts floods X, Reddit, and Facebook, all pushing the same narrative, all amplifying each other, all generating the appearance of a massive grassroots movement. Except none of it is real.</description>
                    <link>https://techxplore.com/news/2026-03-ai-agents-autonomously-propaganda-campaigns.html</link>
                    <category>Security</category>
                    <pubDate>Thu, 12 Mar 2026 14:40:05 EDT</pubDate>
                    <guid isPermaLink="false">news692544181</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-agents-can-autonomo-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>&#039;Privacy by design&#039;: Tech protects against identity leaking during AI photo editing</title>
                    <description>Consumers, businesses, and institutions may soon have private, secure, and trustworthy generative AI tools for editing and sharing profile photos, ID images, and personal pictures without exposing their private identities to external platforms. Purdue University researchers Vaneet Aggarwal, Dipesh Tamboli, and Vineet Punyamoorty have developed the patent-pending system, which is utilized before and after photos are uploaded to an AI editing platform.</description>
                    <link>https://techxplore.com/news/2026-03-privacy-tech-identity-leaking-ai.html</link>
                    <category>Security</category>
                    <pubDate>Thu, 12 Mar 2026 14:20:05 EDT</pubDate>
                    <guid isPermaLink="false">news692543641</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/privacy-by-design-tech.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI-powered defense system stops 5G cyber-attacks in a fraction of a second</title>
                    <description>An AI defense system has successfully detected and neutralized sophisticated 5G cyber-attacks in less than a tenth of a second, paving the way for more secure 5G and future 6G mobile networks, say researchers at the University of Surrey.</description>
                    <link>https://techxplore.com/news/2026-03-ai-powered-defense-5g-cyber.html</link>
                    <category>Security</category>
                    <pubDate>Tue, 10 Mar 2026 11:20:07 EDT</pubDate>
                    <guid isPermaLink="false">news692359742</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/hacker.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>New &#039;negative light&#039; technology hides data transfers in plain sight</title>
                    <description>Engineers at UNSW Sydney and Monash have developed an innovative way of sending hidden information that&#039;s hard to intercept. Using a phenomenon known as &quot;negative luminescence,&quot; the system makes signals blend perfectly into the background of natural heat radiation, such as that seen with a thermal camera.</description>
                    <link>https://techxplore.com/news/2026-03-negative-technology-plain-sight.html</link>
                    <category>Telecom</category>
                    <pubDate>Mon, 09 Mar 2026 11:00:03 EDT</pubDate>
                    <guid isPermaLink="false">news692270762</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/new-negative-light-tec.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI fake-news detectors may look accurate but fail in real use, study finds</title>
                    <description>A dubious link from a friend. A headline too sensational to be true. A video that seems fake but you can&#039;t be sure. As online misinformation grows harder to detect, new artificial-intelligence tools promise to help us separate fact from fiction. But do they actually work?</description>
                    <link>https://techxplore.com/news/2026-03-ai-fake-news-detectors-accurate.html</link>
                    <category>Security</category>
                    <pubDate>Mon, 09 Mar 2026 08:20:01 EDT</pubDate>
                    <guid isPermaLink="false">news692262086</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/people-smartphones.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>How AI could end online anonymity</title>
                    <description>The internet is rife with anonymous accounts as users adopt pseudonyms, sometimes for genuine reasons like speaking freely, and other times for nefarious ones. But this era of online privacy could be coming to a close. In a study available on the arXiv preprint server, researchers demonstrate that large language models (LLMs) can identify the people behind these accounts at scale.</description>
                    <link>https://techxplore.com/news/2026-03-ai-online-anonymity.html</link>
                    <category>Security</category>
                    <pubDate>Wed, 04 Mar 2026 12:40:04 EST</pubDate>
                    <guid isPermaLink="false">news691850205</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-could-end-online-an.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Deepfake songs are exploding, but a new tool shuts them down</title>
                    <description>Artificial intelligence models can now clone a voice with just a few seconds of audio, fueling a surge of deepfake songs online and creating a growing crisis for musicians who don&#039;t want their voices hijacked. Beyond the obvious intellectual property rights issue, this can lead to lost revenue and take an emotional toll on artists who put their heart and soul into their songs. But researchers now have a solution.</description>
                    <link>https://techxplore.com/news/2026-03-deepfake-songs-tool.html</link>
                    <category>Security</category>
                    <pubDate>Tue, 03 Mar 2026 17:20:10 EST</pubDate>
                    <guid isPermaLink="false">news691780081</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/bad-bunny.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI often escalates to nuclear action in war games</title>
                    <description>There are some things we might not want artificial intelligence to handle, at least for the time being. When leading chatbots were put through war-game simulations, they opted for nuclear signaling or escalation in 95% of cases.</description>
                    <link>https://techxplore.com/news/2026-03-ai-escalates-nuclear-action-war.html</link>
                    <category>Security</category>
                    <pubDate>Mon, 02 Mar 2026 13:20:03 EST</pubDate>
                    <guid isPermaLink="false">news691678634</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ai-chat-2.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Your car&#039;s tire sensors could be used to track you</title>
                    <description>Researchers at IMDEA Networks Institute, together with European partners, have found that tire pressure sensors in modern cars can unintentionally expose drivers to tracking. Over a ten-week study, they collected signals from more than 20,000 vehicles, revealing a hidden privacy risk and highlighting the need for stronger security measures in future vehicle sensor systems.</description>
                    <link>https://techxplore.com/news/2026-02-car-sensors-track.html</link>
                    <category>Security</category>
                    <pubDate>Wed, 25 Feb 2026 17:00:05 EST</pubDate>
                    <guid isPermaLink="false">news691252054</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/your-cars-tire-sensors-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Jailbreaking the matrix: How researchers are bypassing AI guardrails to make them safer</title>
                    <description>A paper written by University of Florida Computer &amp; Information Science &amp; Engineering, or CISE, Professor Sumit Kumar Jha, Ph.D., contains so many science fiction terms, you&#039;d be forgiven for thinking it&#039;s a Hollywood script: Nullspace steering. Red teaming. Jailbreaking the matrix. But Jha&#039;s work is decidedly focused on real life, most notably strengthening the security measures built into AI tools to ensure they are safe for all to use.</description>
                    <link>https://techxplore.com/news/2026-02-jailbreaking-matrix-bypassing-ai-guardrails.html</link>
                    <category>Security</category>
                    <pubDate>Sun, 22 Feb 2026 12:00:01 EST</pubDate>
                    <guid isPermaLink="false">news690803473</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/security-ai.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI model edits can leak sensitive data via update &#039;fingerprints&#039;</title>
                    <description>Artificial intelligence (AI) systems are now widely used by millions of people worldwide, as tools to source information or tackle specific tasks more rapidly and efficiently. Today, some of the most widely used are large language models (LLMs), computational models trained on large collections of texts that can process and generate written content in various languages.</description>
                    <link>https://techxplore.com/news/2026-02-ai-leak-sensitive-fingerprints.html</link>
                    <category>Security</category>
                    <pubDate>Sat, 21 Feb 2026 12:00:02 EST</pubDate>
                    <guid isPermaLink="false">news690725126</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/editing-ai-models-coul.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AI &#039;blind spot&#039; could allow attackers to hijack self-driving vehicles</title>
                    <description>A newly discovered vulnerability could allow cybercriminals to silently hijack the artificial intelligence (AI) systems in self-driving cars, raising concerns about the security of autonomous systems increasingly used on public roads. Georgia Tech cybersecurity researchers discovered the vulnerability, dubbed VillainNet, and found it can remain dormant in a self-driving vehicle&#039;s AI system until triggered by specific conditions. Once triggered, VillainNet is almost certain to succeed, giving attackers control of the targeted vehicle.</description>
                    <link>https://techxplore.com/news/2026-02-ai-hijack-vehicles.html</link>
                    <category>Security</category>
                    <pubDate>Fri, 20 Feb 2026 14:40:05 EST</pubDate>
                    <guid isPermaLink="false">news690809490</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/researchers-warn-ai-bl.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Cyber-attacks could disrupt smart factories by targeting time itself</title>
                    <description>A cyber-attack does not always need to steal data or shut systems down to cause damage. Sometimes it only needs to shift the clock. Researchers at the University of East London (UEL), in collaboration with industry, have identified a critical weakness in the timing systems that keep modern automated industries running—and warn attackers could exploit it to quietly destabilize factories, robotics and other safety‑critical infrastructure. The work is published as a comprehensive analysis of threats to Time‑Triggered Ethernet (TTEthernet) clock synchronization in the Industrial Internet of Things (IIoT).</description>
                    <link>https://techxplore.com/news/2026-02-cyber-disrupt-smart-factories.html</link>
                    <category>Business</category>
                    <pubDate>Thu, 19 Feb 2026 13:25:22 EST</pubDate>
                    <guid isPermaLink="false">news690729901</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/cyber-attacks-could-di.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>People are overconfident about spotting AI faces, study finds</title>
                    <description>Most people believe they can spot AI-generated faces, but that confidence is out of date, research from UNSW Sydney and the Australian National University (ANU) has demonstrated. With AI-generated faces now almost impossible to distinguish from real ones, this misplaced confidence could make individuals and organizations more vulnerable to scammers, fraudsters and bad actors, the researchers warn.</description>
                    <link>https://techxplore.com/news/2026-02-people-overconfident-ai.html</link>
                    <category>Consumer &amp; Gadgets</category>
                    <pubDate>Wed, 18 Feb 2026 11:00:49 EST</pubDate>
                    <guid isPermaLink="false">news690634801</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/people-are-overconfide.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Why &#039;zero-knowledge encryption&#039; may not stop password theft if servers are hacked</title>
                    <description>People who regularly use online services have between 100 and 200 passwords. Very few can remember every single one. Password managers are therefore extremely helpful, allowing users to access all their passwords with just a single master password.</description>
                    <link>https://techxplore.com/news/2026-02-knowledge-encryption-password-theft-servers.html</link>
                    <category>Computer Sciences</category>
                    <pubDate>Mon, 16 Feb 2026 11:10:02 EST</pubDate>
                    <guid isPermaLink="false">news690461993</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2020/password.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Cybersecurity spending may pay off: Study links readiness to stronger returns</title>
                    <description>The infamous Target data breach during the 2013 holiday shopping season, which cost the company more than $200 million in damages, has since been hailed as a landmark case in cybersecurity. Exposure to these threats has only increased as businesses continue to expand their digital footprints. That&#039;s why, as a new study involving Binghamton University&#039;s School of Management found, businesses that sufficiently prepare to defend against cyberattacks are also more likely to perform better financially.</description>
                    <link>https://techxplore.com/news/2026-02-cybersecurity-pay-links-readiness-stronger.html</link>
                    <category>Security</category>
                    <pubDate>Sat, 14 Feb 2026 11:00:05 EST</pubDate>
                    <guid isPermaLink="false">news690114174</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/cybersecurity.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Anthropic&#039;s &#039;anonymous&#039; interviews cracked with an LLM</title>
                    <description>In December, the artificial intelligence company Anthropic unveiled its newest tool, Interviewer, used in its initial implementation &quot;to help understand people&#039;s perspectives on AI,&quot; according to a press release. As part of Interviewer&#039;s launch, Anthropic publicly released 1,250 anonymized interviews conducted on the platform.</description>
                    <link>https://techxplore.com/news/2026-02-anthropic-anonymous-llm.html</link>
                    <category>Security</category>
                    <pubDate>Tue, 10 Feb 2026 14:25:32 EST</pubDate>
                    <guid isPermaLink="false">news689955902</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/anthropics-anonymous-i.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Scientists camouflage heart rate from invasive radar-based surveillance</title>
                    <description>It&#039;s a typical workday and you sign onto your computer. Unbeknownst to you, a high-frequency sensing system embedded in your work device is now tracking your heart rate, allowing your employer to monitor your breaks, engagement, and stress levels and infer alertness. It sounds like a dystopian scenario, but some believe it&#039;s not so far from current reality.</description>
                    <link>https://techxplore.com/news/2026-02-scientists-camouflage-heart-invasive-radar.html</link>
                    <category>Hi Tech &amp; Innovation</category>
                    <pubDate>Mon, 09 Feb 2026 16:36:23 EST</pubDate>
                    <guid isPermaLink="false">news689877361</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/scientists-camouflage.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Unhackable metasurface holograms: Security technology can lock information with light color and distance</title>
                    <description>A research team led by Professor Junsuk Rho at POSTECH (Pohang University of Science and Technology) has developed a secure hologram platform that operates solely based on the wavelength of light and the spacing between metasurface layers. The technology makes hacking and counterfeiting virtually impossible, and is expected to be widely adopted for security cards, anti-counterfeiting, and military communications. The paper is published in the journal Advanced Functional Materials.</description>
                    <link>https://techxplore.com/news/2026-02-unhackable-metasurface-holograms-technology-distance.html</link>
                    <category>Hi Tech &amp; Innovation</category>
                    <pubDate>Tue, 03 Feb 2026 13:23:59 EST</pubDate>
                    <guid isPermaLink="false">news689347382</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/a-new-security-technol.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>The next generation of disinformation: AI swarms can threaten democracy by manufacturing fake public consensus</title>
                    <description>An international research team involving Konstanz scientist David Garcia warns that the next generation of influence operations may not look like obvious &quot;copy-paste bots,&quot; but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale.</description>
                    <link>https://techxplore.com/news/2026-01-generation-disinformation-ai-swarms-threaten.html</link>
                    <category>Security</category>
                    <pubDate>Fri, 23 Jan 2026 14:10:01 EST</pubDate>
                    <guid isPermaLink="false">news688399110</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/manufacturing-fake-pub.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Stress-testing AI vision systems: Rethinking how adversarial images are generated</title>
                    <description>Deep neural networks (DNNs) have become a cornerstone of modern AI technology, driving a thriving field of research in image-related tasks. These systems have found applications in medical diagnosis, automated data processing, computer vision, and various forms of industrial automation, to name a few.</description>
                    <link>https://techxplore.com/news/2026-01-stress-ai-vision-rethinking-adversarial.html</link>
                    <category>Security</category>
                    <pubDate>Fri, 23 Jan 2026 12:06:58 EST</pubDate>
                    <guid isPermaLink="false">news688392384</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/stress-testing-ai-visi.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows</title>
                    <description>As a self-driving car cruises down a street, it uses cameras and sensors to perceive its environment, taking in information on pedestrians, traffic lights, and street signs. Artificial intelligence (AI) then processes that visual information so the car can navigate safely.</description>
                    <link>https://techxplore.com/news/2026-01-text-physical-world-hijack-ai.html</link>
                    <category>Security</category>
                    <pubDate>Wed, 21 Jan 2026 15:00:01 EST</pubDate>
                    <guid isPermaLink="false">news688229852</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/misleading-text-in-the.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>The sky is full of secrets: Glaring vulnerabilities discovered in satellite communications</title>
                    <description>With $800 of off‐the‐shelf equipment and months&#039; worth of patience, a team of U.S. computer scientists set out to find out how well geostationary satellite communications are encrypted. And what they found was shocking.</description>
                    <link>https://techxplore.com/news/2026-01-sky-full-secrets-glaring-vulnerabilities.html</link>
                    <category>Telecom</category>
                    <pubDate>Tue, 20 Jan 2026 10:23:20 EST</pubDate>
                    <guid isPermaLink="false">news688126982</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/the-sky-is-full-of-sec-1.jpg" width="90" height="90" />
                                    </item>
                            <item>
                    <title>AIs behaving badly: An AI trained to deliberately make bad code will become bad at unrelated tasks, too</title>
                    <description>Artificial intelligence models that are trained to behave badly on a narrow task may generalize this behavior across unrelated tasks, such as offering malicious advice, suggests a new study. The research probes the mechanisms that cause this misaligned behavior, but further work must be done to find out why it happens and how to prevent it.</description>
                    <link>https://techxplore.com/news/2026-01-ais-badly-ai-deliberately-bad.html</link>
                    <category>Security</category>
                    <pubDate>Thu, 15 Jan 2026 09:34:00 EST</pubDate>
                    <guid isPermaLink="false">news687691921</guid>
                                            <media:thumbnail url="https://scx1.b-cdn.net/csz/news/tmb/2026/ais-behaving-badly.jpg" width="90" height="90" />
                                    </item>
                        </channel>
</rss>