In the News

Is AI going to destroy everything? The DOOM issue

New reports show that all three major AI labs shipped their latest models with heightened safeguards after pre-deployment testing couldn't rule out that the systems could meaningfully help novices develop biological weapons. If that doesn't send a chill down your spine, I'm not sure what will, and that's before we even get to autonomous weapons being hacked, medications being hallucinated, or AI being used to steal elections...

Also, in the last 24 hours, Anthropic released a major report documenting "sneaky sabotage" and chemical-weapon assistance in its latest models, while the Allianz Risk Barometer confirms that AI has leapt to the #2 global business risk. The primary threat vector has shifted from simple data leaks to autonomous system failure, where AI agents executing tasks without human oversight create cascading operational, legal, and even physical liabilities.

Let's dive in.


1. Robotics & Autonomous Weapons

The Risks of Artificial Intelligence in Weapons Design – A three-part analysis: first, how these weapons may make it easier for countries to get involved in conflicts; second, how nonmilitary scientific AI research may be censored or co-opted to support the development of these weapons; and third, how militaries may use AI-powered autonomous technology to reduce or deflect human responsibility in decision-making.

Human Rights Watch: UN Urged to Explicitly Ban Autonomous Weapons – Formal international demands were filed on Jan 28 for a legally binding treaty to prohibit lethal systems that function without meaningful human control.

2. Biological & Chemical Weaponization

TechUK: OpenAI o3 Model Surpasses 94% of Biology Experts – The Feb 3 International AI Safety Report reveals frontier models now match PhD-level performance in troubleshooting complex virology lab protocols.

Inside Global Tech: 2026 Safety Report Documents Escalating Misuse Potential – All three major AI companies released models with heightened safeguards after pre-deployment testing couldn't rule out that systems could meaningfully help novices develop biological weapons.

3. Autonomous Agent Displacement & Malfunction

Allianz: AI Surges to #2 Global Business Risk in 2026 – The 2026 Risk Barometer shows AI jumping 8 spots in a single year, driven by catastrophic concerns over system reliability and autonomous liability.

SOC Prime: MCP Standard Risks Exploitation in Critical Systems – A Feb 11 analysis of the Model Context Protocol (MCP) highlights that standardized connectors between agents and servers increase the risk of lateral movement if hijacked.
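
To make the lateral-movement concern concrete, here is a minimal Python sketch of a deny-by-default guard a connector could run before forwarding an agent's tool call. This is illustrative only, not part of the MCP specification: the agent IDs, tool names, hosts, and the authorize_call helper are all hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical policy tables: which tools each agent may call, and which
# hosts each tool may reach. Everything here is invented for illustration.
AGENT_TOOL_ALLOWLIST = {
    "billing-agent": {"query_invoices", "send_receipt"},
}
TOOL_HOST_ALLOWLIST = {
    "query_invoices": {"invoices.internal.example.com"},
    "send_receipt": {"mail.internal.example.com"},
}

def authorize_call(agent_id: str, tool: str, target_url: str) -> bool:
    """Deny-by-default check run before a connector forwards a tool call."""
    if tool not in AGENT_TOOL_ALLOWLIST.get(agent_id, set()):
        return False  # this agent is not entitled to the tool at all
    host = urlparse(target_url).hostname
    return host in TOOL_HOST_ALLOWLIST.get(tool, set())

# A hijacked agent trying to pivot to an unrelated internal system is refused:
assert authorize_call("billing-agent", "query_invoices",
                      "https://invoices.internal.example.com/api")
assert not authorize_call("billing-agent", "query_invoices",
                          "https://hr.internal.example.com/records")
```

The point of the deny-by-default shape is that a compromised agent gains nothing from the connector being standardized: every hop it attempts still has to clear an explicit, narrowly scoped policy.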

4. Information Disorder & Democratic Integrity

MACo Conduit Street: Maryland Moves to Criminalize Election Deepfakes – State-level testimony on Feb 4 supported new legislation prohibiting synthetic media intended to interfere with voting and public trust.

World Economic Forum: Geoeconomic Confrontation and Information Disorder – The WEF 2026 Global Risks Report identifies AI-driven "information disorder" as a primary driver of social polarization and a threat to institutional legitimacy.

CADE: 2026 Safety Report Cites Surge in AI Scams and "Nudify" Apps – The second International AI Safety Report warns of a rising trend in nonconsensual AI-generated intimate imagery, disproportionately used to target and extort women.

5. Healthcare & Drug Development Risks

The #1 Hazard: Misuse of Non-Regulated Chatbots – Patient safety experts ranked the unauthorized use of general-purpose AI chatbots as the leading threat to healthcare this year. These models frequently "hallucinate" incorrect medical advice or fail to challenge dangerous user assumptions, leading to high-risk diagnostic errors.

University of Basel: AI Models for Drug Design Fail in Physics – Recent testing of drug-discovery AI found that models often predict high binding success for molecules that are physically impossible to build. This lack of "physical intuition" leads to massive resource waste in labs attempting to synthesize chemically invalid structures.
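
One cheap defense against this failure mode is to screen generated structures with a cheminformatics toolkit before any scoring or synthesis. The sketch below, assuming candidates arrive as SMILES strings, uses RDKit's standard parsing and valence checks; the candidate list is made up, and this is a baseline filter, not the Basel team's method.

```python
from rdkit import Chem  # pip install rdkit

# Hypothetical model outputs: one real drug, one impossible structure.
candidates = [
    "CC(=O)Oc1ccccc1C(=O)O",  # aspirin: parses and sanitizes fine
    "C(C)(C)(C)(C)C",         # pentavalent carbon: chemically invalid
]

for smiles in candidates:
    # MolFromSmiles returns None when parsing or valence checks fail,
    # so invalid structures can be dropped before wasting lab resources.
    mol = Chem.MolFromSmiles(smiles)
    status = "ok" if mol is not None else "rejected (invalid chemistry)"
    print(f"{smiles:26s} -> {status}")
```

A valence check is a floor, not a ceiling: a molecule can pass it and still be unsynthesizable in practice, which is exactly the gap the Basel results point at.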

Duke University: Hidden Risks of AI Health Advice – Research published this month highlights that AI often provides "technically correct" medical facts while hallucinating the specific patient context in clinical summaries. This creates a risk where an AI's confident-sounding output can override a clinician's judgment, masking critical nuances in a patient's history.

Hope this was a useful issue! Please get in touch with feedback and ideas for new deep dives.

Artificial Intelligence Weekly