
Intelphreak - February 11, 2025

Trump Disbands DHS Advisory Boards, LLM Jailbreaks Are Fueling Threats, DeepSeek Has Global Impact, Vulnerabilities in Contec Patient Monitors Allow PII Leakage & RCE, Subaru STARLINK Vulnerability Allowed Unauthorized Remote Control of Vehicles, 2 Vulnerabilities in Apple Hardware

ResidentGood, SirPicklJohn
· 18 min read
Presented By ResidentGood & SirPicklJohn of Infophreak

Precedence: Routine

BLUF: Trump Disbands All U.S. Department of Homeland Security Advisory Boards, AI/LLM Jailbreaks and Malicious Usage Are Fueling Threats, New AI Platform DeepSeek Sends Waves Around the Globe With Its Security Implications & Affordability, Vulnerabilities in 2 Models of Contec Patient Monitors Allow PII Leakage & Remote Code Execution, Researchers Discovered Subaru STARLINK Vulnerability That Allowed Unauthorized Remote Access & Control of Vehicles, Researchers Find 2 Vulnerabilities in Apple CPU Hardware Across All Devices


BEGIN TEARLINE

[National Security] Trump Axes All Department of Homeland Security Advisory Committees - What This Could Mean for U.S. Security

For the cited reasons of cutting costs, stopping resource misuse, and ensuring the Department of Homeland Security (DHS) stays aligned with its goals of national security, the following DHS advisory boards/committees have been disbanded. While their members have been invited to reapply, it is likely that some of these boards will not be coming back:

  • Academic Engagement Committee - Engaged support from academia in homeland security efforts.
  • Artificial Intelligence Safety and Security Board - Advocated for safe, secure, and trustworthy development and use of AI.
  • Cyber Safety Review Board (CSRB) - Public-private initiative that assessed significant cybersecurity events and provided recommendations on improving cybersecurity and incident response practices. This board was investigating the recent Salt Typhoon cyberattacks on U.S. telecommunication providers (see more on these attacks here).
  • Critical Infrastructure Partnership Advisory Council - Provided advice to government officials on critical infrastructure issues.
  • Data Privacy and Integrity Advisory Committee - An academia, private, and public initiative to raise all concerns with PII, data integrity, and privacy to the DHS Chief Privacy Officer.
  • Faith-Based Security Advisory Council - Provided advice and recommendations on matters related to protecting houses of worship and religious communities/organizations from targeted violence and terrorism, and fostered security preparedness and coordination within the faith community. Created in 2022 amidst many shootings and mass killings at places of worship.
  • Homeland Security Advisory Council - Advised the Secretary on matters related to homeland security and anti-terrorism efforts. Included CSEA (Combatting Online Child Sexual Exploitation and Abuse) subcommittee. Membership included DEA admins, Homeland security secretaries, police union presidents, and Fortune 500 CEOs.
  • National Infrastructure Advisory Council - Provided advice to government officials regarding national security (telecommunications and critical infrastructure).
  • Privacy and Civil Liberties Oversight Board (PCLOB) - Intelligence program watchdog agency (i.e., protected Americans and exposed surveillance abuse). Established after September 11th terrorist attacks to investigate any national security work that could infringe on individual civil rights.
  • USSS Cyber Investigations Advisory Board - Provided advice to the Secret Service in regard to cyber investigations.
  • Federal School Safety Clearinghouse External Advisory Board - Shared resources and best practices related to school safety (not cyber-related, but shows the impact this sweeping executive order had).
  • “…bodies that provide advice to government officials on issues including emergency preparedness, telecommunications, science and technology, as well as artificial intelligence and cybersecurity.“
  • …and more!

The disbanding of these boards has potentially profound national security implications, and expert opinions conflict:

  • Some think this will improve DHS efficiency, funds-allocation efficacy, and intolerance of advisory committees that push agendas that "undermine its national security mission, the President's agenda or Constitutional rights of Americans" (Arstechnica);
  • Some shrug off the decision as a normal part of Presidential administration changes; and
  • Some offer scathing criticism of the decision, saying it undermines national security initiatives.

What brought this issue to light was the dismantling of the CSRB (Cyber Safety Review Board), which received significant attention from the cybersecurity community. The board provided industry-appreciated insights into Log4Shell, the Microsoft Storm-0558 breach, and the LAPSUS$ cybercrime group, but it has also been criticized for offering few insights beyond what private threat intelligence organizations had already uncovered in their own investigations.

Analyst Comments: (SirPicklJohn) What do you think of Trump's recent executive actions? Do you think they will prove to be effective at reducing inefficiencies and inefficacies, or will they cause major national security problems? It's hard to say for certain at this point, but it is quite concerning to see such damage dealt to the collaboration between the DHS, academic institutions, and private sector leaders. It is also concerning to see the boards that champion safe and ethical AI development, as well as American data privacy, get dismantled as well. Keep yourself calm and informed, discuss your ideas with others, and as always, stay safe!

[Adversary TTPs] Exploiting AI in 2025: How Modern Threats Are Taking Advantage of LLMs

As we step into 2025, the increased accessibility and rapid evolution of large language models (LLMs) have caught the eye of cyber defenders and threat actors alike. Over 57 distinct threat actor groups have been observed abusing Google's Gemini AI alone in an effort to enhance their operational capabilities. Meanwhile, fully uncensored AI chatbots (a.k.a. "underground LLMs," such as WormGPT, WolfGPT, EscapeGPT, FraudGPT, and GhostGPT) that disregard ethical and legal guidelines have been available for rent to cybercriminals on hacking forums and Telegram. The accessibility and power of mainstream AI models has also prompted threat actors to jailbreak or otherwise abuse them, the most striking recent example being DeepSeek's AI model. While DeepSeek's release made waves with its relatively cheap production costs and high performance (as compared to mainstream Western LLMs), its weak security measures have made it the subject of relentless security testing, exploitation, and abuse. See ResidentGood's story in this Intelphreak report for more about DeepSeek and its security posture, and see below for examples of its models' susceptibility to jailbreaking that allows malicious actors to use them for nefarious ends (also see this linked blog for security tests run by KELA AiFort and SirPicklJohn that demonstrate the potential consequences of this susceptibility).

While no novel AI-powered attacks have been observed so far, threat actors have been caught using LLMs for the following purposes:

  • Malware development, including writing malicious base code for further development, writing polymorphic (detection evasive) malware, rewriting malware in different programming languages, and adding more functionality to existing malware
  • Identifying and exploiting vulnerabilities in software
  • Strategic guidance in launching attacks
  • Forming persuasive, highly-personalized, and localized phishing templates
  • Enabling influence operations (i.e., designing fraudulent websites, social media accounts, and social media posts for misinformation and propaganda)
  • Writing highly-qualified but fake resumes and cover letters to support schemes to infiltrate companies' IT workforces
  • Illicit content generation
  • Research on organizations, vulnerabilities, and other "malicious knowledge" ranging anywhere from building a bomb to launching a ransomware attack

To further highlight how modern LLMs can dramatically aid criminals (both cyber and non-cyber) in their illicit activities, see this collection of tests and screenshots that showcase DeepSeek providing detailed guidance on committing illicit activities.

Analyst Comments: (SirPicklJohn) I hope this deep-dive comprehensively covered the public security implications of weak or malicious LLMs!

[New Tech] DeepSeek & The AI Chessboard: Where the Pieces Fall

What is DeepSeek?

DeepSeek is an Artificial Intelligence (AI) startup company headquartered in Hangzhou, China. It was the focus of media attention at the end of January & the beginning of February this year, although it was first released in November of 2024. DeepSeek topped the charts on the Apple App Store at the end of January, ranking higher than OpenAI’s ChatGPT application. The main appeal of DeepSeek is that it offers roughly the same complex functionality as ChatGPT at a much lower compute cost (although the equivalency of the two programs has been debated).

Security Concerns & Vulnerabilities Surrounding DeepSeek

  • DeepSeek faced a large-scale cyber attack that caused them to limit registration on the website, locking it down so only existing users could use the models. The method of attack has remained undisclosed but theories & research point towards DDoS attacks on DeepSeek’s API & Web Chat.
  • The threat intelligence company Kela released an article detailing how DeepSeek’s models could be jailbroken to get responses to unethical or illegal prompts. SirPicklJohn has released a detailed blog about jailbreaking DeepSeek. Other AI platforms such as ChatGPT & Alibaba Qwen are open to similar threat actor manipulation.
  • In 2024, security researcher Johann Rehberger wrote about a flaw in DeepSeek that allowed a threat actor to take over a user’s account by injecting a Cross-Site Scripting payload into the prompt.
  • DeepSeek poses some privacy concerns, as user data is stored on servers located in China. DeepSeek sends user login information to China Mobile, a state-owned telecommunications organization. User data is also sent to Volcano Engine cloud servers owned by ByteDance (the same ByteDance that owns TikTok). Many point to these concerns as justification that DeepSeek is a risk to US citizens & national security in the same vein as TikTok. Others say this is fearmongering over geopolitics, and that DeepSeek poses no greater privacy concerns than companies such as Meta or OpenAI, the only difference being the country collecting your data.
  • The iOS DeepSeek application was found to be transmitting user data over the internet without any form of encryption. Not using encryption poses risks to confidentiality & data integrity. DeepSeek’s Android version (1.0.8) was found to have glaring security weaknesses including hardcoded encryption keys and susceptibility to SQL injection. DeepSeek’s apps also employ anti-debugging techniques that hinder security analysis of the application.
  • Malicious Python packages that looked related to DeepSeek were released on the Python Package Index. The 2 packages, deepseeek & deepseekai, turned out to be infostealer malware that harvested API keys, credentials, and system data from developers. This data was sent back to a C2 server via the Pipedream automation platform. By the time the packages were reported and quarantined, over 200 developers had downloaded them.
  • Researchers discovered 2 publicly exposed DeepSeek databases containing over 1 million plaintext logs of user prompt inputs. The databases allowed unauthenticated SQL queries and also exposed backend details, API keys, and operational metadata.
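The deepseeek/deepseekai incident above is a classic typosquatting play. As a purely illustrative sketch (not a production tool; the function names are our own), here is how a build pipeline might flag a requested package name that sits suspiciously close to a trusted one, using plain Levenshtein distance:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def flag_typosquats(requested: str, trusted: list[str], max_distance: int = 2) -> list[str]:
    """Return trusted names the requested package is suspiciously close to
    (close, but not an exact match)."""
    return [t for t in trusted
            if 0 < edit_distance(requested.lower(), t.lower()) <= max_distance]

# The two malicious PyPI names reported in the DeepSeek incident:
print(flag_typosquats("deepseeek", ["deepseek", "requests", "numpy"]))   # ['deepseek']
print(flag_typosquats("deepseekai", ["deepseek", "requests", "numpy"]))  # ['deepseek']
```

Real registries and scanners use far richer signals (download age, maintainer history, homoglyphs), but even this simple check catches both package names from this incident.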

US Plans for American AI

Sam Altman, the CEO of OpenAI, said in a statement on his X account that DeepSeek R1 is an impressive model and that OpenAI welcomes some competition; however, OpenAI will continue to deliver better models and requires more compute than ever to advance AI research. OpenAI is scouting properties around the US to build data centers that can support the compute needs of its AI research. OpenAI is part of a joint venture with Oracle and SoftBank called Stargate, announced by the Trump administration with an initial $100 billion investment. The US president touted this project, saying the investment in Stargate will eventually grow to as much as $500 billion in order to help the US stay on top of the AI arms race. The first site resulting from this project is located in Abilene, Texas and has been under construction for months.

Trump’s administration published a request via the Office of Science & Technology Policy for ideas on future plans for how to keep America a leader in the AI industry. The National Artificial Intelligence Advisory Committee voted on a 10-point list of actionable priorities for AI. Focus areas of the plan include the labor market, education, science, health, public policy, and law enforcement. Trump signed an executive order on January 23rd regarding developing AI systems free from ideological bias or engineered social agendas. The order revokes existing AI policies and directives from the Biden administration such as placing greater regulations on AI companies, investing in educating the future AI workforce, creating a public-private sector partnership with the US government & leading AI companies, and preventing AI technology from furthering existing inequalities.

Global Response to DeepSeek's Security Concerns

Quite a few US Government entities have banned the use of DeepSeek by their workers & service members, including the Navy, NASA, Congress, the Pentagon, and the state of Texas. The US is not the only country hawkish on DeepSeek: Australia, Italy, the Netherlands, Taiwan, South Korea, and India have all taken steps to mitigate the security risks posed by the AI platform. Some countries, such as Taiwan, are banning DeepSeek only from government agencies & critical infrastructure organizations; others, like Italy, have banned DeepSeek country-wide. Italy’s privacy watchdog, Garante, is questioning how Italian user data is stored and utilized by the DeepSeek platform.

Economic Impact of DeepSeek

As DeepSeek dominated the news, the US stock market saw bearish (expecting prices to fall) sentiment that brought the market into the red. This was mostly driven by negative sentiment on large blue-chip AI-related stocks such as NVIDIA and Broadcom. It should be noted that sentiment on these stocks has remained mainly bullish (expecting prices to rise) for quite a while. Barring any major events, buying pressure is likely to return, bringing back steady stock price increases; for example, at the time of writing this report, NVIDIA stock has almost returned to its price before the media frenzy. Much of the negative sentiment on AI stocks may be due to excessive valuations of large technology stocks that are not backed up by proportional earnings.

Analyst Comments: (ResidentGood) A lot of the hype around DeepSeek may be overblown, but the security implications are relevant & important. Deploying this AI in Western military/government organizations could gravely harm national security. For the everyday user, DeepSeek is just another AI platform (with more vulnerabilities than usual) that has concerning implications for data privacy & overall cybersecurity. No matter which country a technology comes from, users should be equally cautious about protecting their privacy & mitigating vulnerabilities. As AI technology advances, it is important to develop ethical guardrails that prevent malicious actors from leveraging AI for criminal activity. This event also shows the massive impact AI has on industries beyond technology, especially finance and the public sector. There will surely be more news to come in the AI arms race.

[Major Vulnerability] Cybersecurity Risks Found in 2 Models of Contec Patient Monitors

The US Food and Drug Administration (FDA) issued a notice on January 30th about cybersecurity risks in 2 patient monitors manufactured by Contec. The vulnerability stems from the unencrypted transmission of patient data such as body temperature, blood oxygen saturation, and blood pressure. The FDA warned that unauthorized parties could access this information and take remote control of the monitor devices, and that a backdoor could allow the device or its network to be compromised. It also warned of potential data exfiltration if these devices are connected to the open internet, and noted the devices could be made to malfunction, causing issues in a medical environment. Finally, the FDA said there were no known incidents or injuries related to these vulnerabilities, and advised healthcare IT workers to use only the local monitoring feature for these devices and to monitor each device for unusual or unauthorized activity.

CISA assigned 3 CVEs to the patient monitors: CVE-2024-12248, CVE-2025-0626, and CVE-2025-0683. Respectively they describe an out-of-bounds write vulnerability, a hidden backdoor, and a privacy leakage vulnerability.

Claroty published an article about the vulnerabilities stating that the “hidden backdoor” was more of an insecure design issue. They said this is not necessarily a malicious campaign to exfiltrate patient data, and more an accidental design side effect that allows data exfiltration and insecure updates to firmware. Team82 at Claroty also stated that to achieve Remote Code Execution, the devices would need to initiate their system upgrade routine which requires physical access to the devices themselves.

Analyst Comments: (ResidentGood) While these vulnerabilities do present some interesting opportunities for attackers, there seems to be some miscommunication surrounding them. While some of the originally reported risks are not real, there is still good reason to mitigate the vulnerabilities the CMS8000 models present. Recommended mitigations for organizations using these patient monitors include blocking all access from the internal network to the 202.114.4.0/24 subnet (a publicly routable range, not the safer local range one would expect to be hard-coded into firmware). If it is possible to modify the default CMS network configuration, organizations should change the default IP for the CMS. If that is not possible and the hardcoded IP address must be used, network segmentation & static routing can ensure traffic only reaches your CMS instead of going externally. If you do not need the monitors' HL7 functionality, block all outbound traffic to 202.114.4.120, preventing the leak of personal health & ID information. The ultimate remediation is to replace the patient monitors with more securely designed devices until the vendor fixes these issues.
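The subnet-blocking mitigations above can be sketched as a simple egress-audit check. This is an illustrative example built on Python's standard ipaddress module, not an official tool; the function name and verdict strings are hypothetical:

```python
import ipaddress

# The publicly routable subnet hard-coded into the CMS8000 firmware,
# and the specific HL7 destination, per the CISA/Claroty advisories.
RISKY_SUBNET = ipaddress.ip_network("202.114.4.0/24")
HL7_EXFIL_HOST = ipaddress.ip_address("202.114.4.120")

def audit_destination(dst: str) -> str:
    """Classify an outbound destination IP seen in flow logs."""
    ip = ipaddress.ip_address(dst)
    if ip == HL7_EXFIL_HOST:
        return "BLOCK: hard-coded HL7 destination (potential PHI leak)"
    if ip in RISKY_SUBNET:
        return "BLOCK: hard-coded firmware subnet"
    return "allow"

print(audit_destination("202.114.4.120"))  # flagged: HL7 exfil host
print(audit_destination("10.0.0.5"))       # ordinary internal traffic
```

In practice these blocks belong in the perimeter firewall's egress rules; a script like this is only useful for auditing existing flow logs for monitor traffic that already slipped out.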

[Major Vulnerability] Subaru STARLINK Vulnerability Allowed Unauthorized Remote Control of Vehicles

In November of 2024, ethical hackers found a vulnerability in Subaru’s STARLINK multimedia vehicle software; information about the flaw was only recently published on the researchers' blog. The vulnerabilities stem from a STARLINK administration panel whose resetPassword.json API allowed Subaru employee account passwords to be reset without a confirmation token. This let an attacker take over an employee account, remove the client-side overlay to bypass 2FA, and gain access to the STARLINK employee portal. With information as simple as a last name and license plate, the researchers were able to access all vehicles & accounts of customers in the US, Canada, and Japan. With this access they could remotely start/stop the engine, toggle the locks, retrieve other PII such as physical addresses & emergency contacts, and access the past year of a vehicle’s stored location history. An attacker could even add themselves as an authorized vehicle user without the owner being notified.

The researchers said it was patched within 24 hours of disclosure and there are no known exploitations in the wild. In the blog they state the difficulty of securing vehicular systems because information is shared and used by so many different parties across various locations. View the full report for proof of concept.
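The root cause was a password-reset endpoint that skipped server-side token validation. As a hedged illustration (the function names and in-memory storage here are hypothetical, not Subaru's actual code), a minimal sketch of a correct single-use, expiring reset-token flow looks like this:

```python
import secrets
import hmac
import time

# In-memory token store: token -> (account, expiry). A real service would
# persist these server-side; all names here are illustrative only.
_pending_resets: dict[str, tuple[str, float]] = {}
TOKEN_TTL_SECONDS = 15 * 60

def issue_reset_token(account: str) -> str:
    """Generate an unguessable, short-lived token (to be emailed to the user)."""
    token = secrets.token_urlsafe(32)
    _pending_resets[token] = (account, time.time() + TOKEN_TTL_SECONDS)
    return token

def reset_password(account: str, token: str, new_password: str) -> bool:
    """Only honor the reset if a live, unexpired token matches this exact account."""
    entry = _pending_resets.get(token)
    if entry is None:
        return False
    stored_account, expiry = entry
    if time.time() > expiry or not hmac.compare_digest(stored_account, account):
        return False
    del _pending_resets[token]  # tokens are single-use
    # ... hash and store new_password here ...
    return True
```

The vulnerable endpoint effectively skipped this server-side check, and the 2FA prompt the researchers bypassed was only a client-side overlay; no amount of client-side UI substitutes for validation on the server.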

Analyst Comments: (ResidentGood) This is an example of success stemming from a business reacting positively to responsible vulnerability disclosure. The speed at which this was patched is impressive and very necessary; the amount of remote control this vulnerability gave attackers is huge and put vehicle owners in danger. Sam Curry worked with a team of other researchers to discover a similar vulnerability in Kia systems. It is important to embrace responsible disclosure from security researchers in order to stay ahead of threat actors, and for researchers to disclose new vulnerabilities to companies first, allowing them time to fix issues before publishing detailed writeups of the exploits. Note that Subaru’s STARLINK system is not related in any way to SpaceX’s Starlink service.

[Research] Vulnerabilities Found in Apple Chips Allow Unauthorized Access of Browser Data

Researchers from the Georgia Institute of Technology & Ruhr University Bochum have demonstrated 2 vulnerabilities in Apple hardware that could potentially expose sensitive application data to remote attackers:

1. Speculation via Load Address Prediction (SLAP):
Apple CPUs have a “Load Address Predictor” that improves processing speed by using previous memory access patterns to guess the next memory address the CPU will need data from. If it guesses wrong, it can speculatively read data stored in nearby memory addresses belonging to other applications that are not meant to be accessed, allowing an attacker’s code to potentially gain remote access to the data stored in those out-of-bounds memory addresses. Read the full research paper on SLAP here.

2. False Load Output Predictions (FLOP):
This vulnerability leverages the “Load Value Predictor” in Apple M3/A17 CPUs and newer, which guesses the value a load will return from the memory subsystem before it is actually returned. An incorrect guess can briefly let computation proceed on wrong data, which an attacker can exploit to leak memory contents. Read the full research paper on FLOP here.

The researchers said they did not find evidence of attackers exploiting this in the wild. Apple has publicly stated appreciation for the research and said they have been working on patching these vulnerabilities since they were notified in May & September of 2024. In a comment to BleepingComputer Apple said: “Based on our analysis, we do not believe this issue poses an immediate risk to our users.“

Analyst Comments: (ResidentGood) Thank you to SH3LL for sending this story to our CTI team. While these vulnerabilities occur at the hardware level, mitigations exist at the software level & user-behavior level. Many other browsers implement strong isolation of data between websites and processes, notably Firefox. The main issue is that Apple’s Safari browser does not have this isolation. Being the default Apple browser, it is widely used by many. If you click on a link in an email on your iPhone, chances are it will open in Safari.

Apple users can protect themselves from SLAP & FLOP by changing their devices' default browser away from Safari. Update your devices frequently, and keep an eye out for when Apple releases patches for SLAP & FLOP. Users could also disable JavaScript, though that would break websites that require it. Educating yourself on how to avoid phishing campaigns & social engineering can also help mitigate the risks presented by these vulnerabilities. Stay skeptical! Eventually it would be good to extend this data isolation between processes & threads down to the hardware level to render these attacks obsolete.

END REPORT


If you are interested in anything Cybersecurity, come check out our Discord

Sources:

Dismantling of DHS Advisory Committees

Exploiting AI in 2025: How Modern Threats Are Taking Advantage of LLMs

DeepSeek & The AI Chessboard: Where the Pieces Fall

Vulnerabilities Found in Apple Chips Allow Unauthorized Access of Browser Data