
BREAKING: Alleged Twitter Hacker Arrested

The alleged Twitter hacker has been arrested, The Wall Street Journal reports. Graham Ivan Clark, of Tampa, was arrested and charged as an adult on July 31, 2020. Clark faces 30 felony charges related to the hack, the report says.
The Twitter hack affected multiple celebrity and public official accounts, and tricked users into sending bitcoin to hackers. The Twitter attack raises serious economic, financial, political and national security concerns ahead of the 2020 U.S. Presidential Election.
How was Twitter security breached, who got hacked and what steps will the social media company take to further strengthen its platform? Here’s a regularly updated blog tracking the incident, Twitter’s investigation and corrective measures, and the high-stakes effort to keep social media secure.
Note: Blog originally published on July 16, 2020. Updated regularly thereafter with the latest investigation news.

Twitter Statements About Security Incident

In a July 18, 2020 statement about the security incident, Twitter indicated:
  • Attackers targeted certain Twitter employees through a social engineering scheme.
  • The attackers successfully manipulated a small number of employees and used their credentials to access Twitter’s internal systems, including getting through two-factor protections.
  • 130 Twitter accounts were targeted.
  • For 45 of those accounts, the attackers were able to initiate a password reset, log in to the account, and send Tweets.
  • For up to eight of the Twitter accounts involved, the attackers took the additional step of downloading the account’s information through our “Your Twitter Data” tool.
  • Twitter’s incident response team secured and revoked access to internal systems to prevent the attackers from further accessing the systems and the individual accounts.
  • For the 130 accounts that were targeted, attackers were not able to view previous account passwords, as those are not stored in plain text or available through the tools used in the attack. Attackers were able to view personal information including email addresses and phone numbers, which are displayed to some users of our internal support tools. In cases where an account was taken over by the attacker, they may have been able to view additional information. Our forensic investigation of these activities is still ongoing.

More Twitter Breach Investigation Updates:

  • Twitter said hackers who breached its systems likely read the direct messages of 36 accounts, including one belonging to an elected official in the Netherlands. Source: Reuters, July 22, 2020.
  • More than a thousand Twitter employees and contractors as of early 2020 had access to internal tools that could change user account settings and hand control to others, making it hard to defend against the hacking that occurred in mid-July. Source: Reuters, July 23, 2020.
  • The breach involved hackers using phone-based spear phishing. Essentially, hackers gained entry to Twitter’s network by reaching out to Twitter employees on their phones. Source: Twitter, July 30, 2020.
Twitter emphasized that the investigation is ongoing, and the details above could change.

Twitter Hacked: Information About the Breach

Note: Information below published on MSSP Alert on July 16, 2020 through July 17, 2020.
  • How Did Twitter Get Hacked/Breached?: A hacker allegedly gained access to a Twitter “admin” tool on the social media network that allowed them to hijack high-profile Twitter accounts to spread a cryptocurrency scam. Source: TechCrunch, July 16, 2020.
  • Which Twitter Accounts Got Hijacked?: The accounts of U.S. presidential candidate Joe Biden, reality TV star Kim Kardashian, former U.S. President Barack Obama, Tesla and SpaceX founder Elon Musk, and Microsoft co-founder Bill Gates were allegedly victimized. Source: Reuters, July 15, 2020.
  • How Many Twitter Accounts Were Victimized?: Hackers targeted about 130 accounts during the cyber attack this week. Twitter continues to assess whether the attackers were able to access private data of the targeted accounts. Source: Reuters, July 16, 2020.
  • What Did Twitter Initially Say About the Security Incident?: Twitter’s security account posted around 5:45 p.m. EDT on July 15, 2020 that the company was investigating the incident and taking steps to rectify it. Within roughly a half hour, the company took the extraordinary step of limiting posts from verified accounts with blue check marks, which Twitter generally designates for more prominent users. Late on July 15, Twitter said it believed the hackers perpetrated the attack by targeting employees who had access to the company’s internal systems and tools. The hackers may have accessed information or engaged in other malicious activity, Twitter said, adding it was still investigating the incident. The company didn’t say how long the hackers had been able to access its internal systems. Twitter said it had limited access to internal systems in response to the hack and locked compromised accounts. Source: The Wall Street Journal, July 15, 2020.
  • How Is the U.S. Government Investigating the Twitter Hack?: The FBI’s San Francisco office has launched an investigation into the incident. Source: Reuters, July 16, 2020.

Twitter Hacked: The Bigger Concerns

  • Why Should MSSPs Care?: At first, there was concern that Twitter hackers may have bypassed two-factor authentication (2FA) security settings. But now, the concern has shifted to how hackers allegedly gained control of Twitter’s administration tool(s). Similarly, MSSP administration tools — including remote control and remote access software — have been popular hacker targets for infiltrating end-customer systems.
  • Why Are Regulators Concerned?: The Twitter breach raises serious questions and concerns — especially ahead of the 2020 U.S. Presidential Election. Hackers who gain control of social media administration tools can, in theory, spread misinformation that potentially manipulates financial markets, elections, international relations, protests, and overall confidence in political systems.
This remains a developing story. Check back for ongoing updates about the breach.

Sleeping with the Friend-Enemy: Mobile Apps and their SDKs

Third-party SDKs can undermine the security of your mobile app without your knowledge. Mobile applications, including iOS, Android, and WinMo apps, are built with native code usually written by the developer team; however, a chunk of the code is almost always sourced from 3rd-party SDKs (commercial or open source). Leveraging external components is very normal for mobile apps, as 99% of all apps have some sort of 3rd-party commercial or open source SDK embedded in the binary. So what is the problem here? The big issue is that 3rd-party SDKs have FULL access to the app's private data, permissions, network connections, TLS sessions, etc. There is no separation or sandbox between the app’s internal code and a 3rd-party SDK. Once the SDK is included, the SDK *is* the app too, and that applies to commercial for-profit SDKs as well as two-person developer projects on GitHub.
  • Okay, so yeah, what is the problem here?
The problem is that 3rd-party SDKs, an unvetted, uncontrolled, and unknown source, have:
  1. Access to the entire app *and* all its data
    1. SDKs can read, write, or delete any data located in private storage
  2. Access to the app’s TLS layer
    1. SDKs can disable TLS for the entire app (to all endpoints, not just the SDK’s)
  3. Access to the parent app’s existing permissions
    1. SDKs can pull end-user data from the device, including:
      1. Contact Lists
      2. Geo-Location
      3. SMS Logs
      4. Photos
      5. Camera
      6. Microphone
      7. [any device permission the parent app has been granted]
  4. Basically, the SDK has access to anything the app has access to
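The points above boil down to one fact: the SDK's code runs in the same process, under the same OS identity, as the app's own code. A minimal plain-Java sketch of the private-storage case follows; the class and file names are hypothetical stand-ins for real app and vendor code, and a temp directory stands in for the app's sandboxed data directory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical illustration: "AppCode" is the developer's own code,
// "ThirdPartySdk" is a bundled vendor SDK. Both run in the same process,
// so the SDK can read anything the app writes to private storage.
public class SdkAccessDemo {

    static Path privateStorage;  // stands in for the app's sandboxed data dir

    static class AppCode {
        // The app saves a session token to what it considers private storage.
        static void saveToken(String token) throws IOException {
            Files.writeString(privateStorage.resolve("session.txt"), token);
        }
    }

    static class ThirdPartySdk {
        // The SDK runs in-process with the same privileges, so nothing
        // stops it from reading the app's "private" files.
        static String harvestToken() throws IOException {
            return Files.readString(privateStorage.resolve("session.txt"));
        }
    }

    public static void main(String[] args) throws IOException {
        privateStorage = Files.createTempDirectory("app-private");
        AppCode.saveToken("secret-session-token");
        // The SDK reads the token without any permission prompt or API grant.
        System.out.println("SDK read: " + ThirdPartySdk.harvestToken());
    }
}
```

No permission system sits between the two classes; the only "boundary" is the developers' trust in the vendor.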
An Illustration of the Issue
Let’s take a quick look at MDLive, a popular medical app that connects doctors with end-users in need of medical assistance. MDLive has 10+ third party SDKs in its iOS mobile app, which is very normal. One of its SDKs is called TestFairy, a popular tool to help developers distribute apps to internal teams, collect logs, solicit user feedback, obtain crash reports, and record videos of end-user activity. These features help developers improve their mobile apps from one release to the next.
Well, it turns out the 3rd-party SDK (TestFairy) has an awesome "Video Recording" feature that has significant security and privacy implications. According to federal courts in the state of Florida, the MDLive app was taking screenshots of real end-user activity, including all presented data, and sending them to a 3rd party (TestFairy). What actually happened is that the TestFairy SDK was configured to collect screenshots of live user activity (it just looked like MDLive, since there is no distinction between the app and its SDKs). Since the MDLive app handles medical data, this equates to a 3rd-party SDK receiving ePHI data of several thousand MDLive users (in other apps, the data could very well be credit card numbers, social security numbers, account balances, etc.).
  • So what went wrong here?
MDLive's mobile developers chose an SDK to improve their workflow, which is the right thing to do. Unfortunately, they enabled a feature of the SDK that collects end-user data from live app sessions. Did the security team know that TestFairy was being used? Did the security team know that TestFairy was collecting screenshots with live end-user data and sending them to TestFairy's headquarters, which happens to be in Israel? As a reminder, developers choose a variety of SDKs to enhance their apps on an everyday basis, which is nothing new. The problem is that 1 of the 10+ SDKs had a significant security issue associated with it, which no one knew about until the federal courts got involved.
Okay, how does this issue compare to other attack classes, within mobile app security or on other platforms? In our opinion, this attack surface is considerable, given the amount of data that can be compromised. Let’s compare the major attack classes from client/server apps, web apps, and mobile apps, shown below in Table 1.0. We will compare buffer overflows, SQL Injection, Cross-Site Scripting, and Mobile SDKs.
Table 1.0
As shown above, buffer overflows still rule the Win32 world; however, attacks on Windows native apps are less common (and not as sexy anymore). SQL Injection still reigns strong for web apps, but notice that SDKs actually have a stronger impact on data than Cross-Site Scripting. While XSS and SDKs mirror each other in terms of full control of data, developer-sourced code, and customer data, SDKs can gather large volumes of data with just one attack, where reflected XSS cannot (only persistent XSS could). Furthermore, misbehaving SDKs are a bit harder to detect, as several SDKs are legitimate and not evil at all. An enemy that looks like a friend is much harder to defend against than something known to be evil.
  • Okay, so what have we learned today?
SDKs are a major blind spot for enterprise security teams, as this emerging attack surface can destroy a mobile app’s security model through behavior that developers perform every day. While traditional app security teams focus on security flaws within the app, rightfully so, many of them are not aware of this attack surface at all. App security teams usually 1) have no idea which commercial/open source SDKs are bundled in the app, and 2) do not know which SDKs introduce security issues to the app.
More Real-world Examples
  • Okay, so where else is this happening?
Well, all over the place. To date, Data Theorem, Google, the FTC, and FireEye have published most of the articles on this topic. A few examples are below:
  • Kochava SDK (Android)
    • Issue: Disables TLS Certificate Validation on the entire app and its connections
    • Source: Data Theorem, Inc.
  • Silverpush
    • Issue: Tracks user habits without the user's knowledge or permission
    • Source: Federal Trade Commission
  • VPon
    • Issue: Records audio, captures videos, collects geo-location & contacts
    • Source: Google, Inc.
  • iBackdoor (Not a legitimate SDK, but rather a purposely built library to steal data)
    • Issue: Collects audio, screenshots, geo-location, and accessed data in private storage/iOS keychain
    • Source: FireEye, Inc.
Please note, these four SDKs are not exhaustive, just a small sample.
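The Kochava-style TLS issue is worth making concrete, because it shows how one SDK call can silently affect every connection the app makes. The sketch below is illustrative only (not the vendor's actual code) and uses plain Java's TLS stack; on Android the equivalent would touch `HttpsURLConnection` or OkHttp defaults:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

// Illustrative sketch: an SDK that installs a trust-all TrustManager as
// the process-wide default disables certificate validation for EVERY
// connection the app makes, not just the SDK's own endpoints.
public class TrustAllSketch {

    // Accepts any certificate chain -- the empty bodies ARE the "validation".
    static final X509TrustManager TRUST_ALL = new X509TrustManager() {
        public void checkClientTrusted(X509Certificate[] chain, String authType) { }
        public void checkServerTrusted(X509Certificate[] chain, String authType) { }
        public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
    };

    // What a misbehaving SDK might do once at init time, silently.
    static void disableTlsValidationProcessWide() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { TRUST_ALL }, new SecureRandom());
        SSLContext.setDefault(ctx);  // every future default-SSLContext user is affected
    }
}
```

From this point on, any man-in-the-middle certificate is accepted by the whole app, which is why an SDK-level "convenience" fix becomes an app-wide vulnerability.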
  • "What did the president know, and when did he know it?"
A famous quote from Howard Baker in the middle of the Watergate scandal. Now that you know of this new class of issues, how do your apps rank? Data Theorem scans the App Store/Google Play on a daily basis, so if you have any concerns about your apps or their SDKs, please feel free to contact us and we’ll let you know either way, free of charge. A full list of apps with security or privacy issues sourced from third-party SDKs is listed below:
NO, the sky is not falling. Data Theorem has dynamically scanned over 10,000 SDKs and, like everything else, 80% of them are just fine. The 80/20 rule applies here too: 20% of the SDKs cause 80% of the problems. Many of these SDKs introduce security issues by mistake, while others were purposely built to attack mobile apps and their data.
  • What’s next, what should I be doing?
There are a few next steps here:
  1. Visibility is critical
    1. Enumerate the commercial SDKs and open source libraries in your app. This will remove the blind spot(s):
      1. Developers usually have a list of the SDKs somewhere, but not necessarily in a consolidated format
      2. Data Theorem provides real-time visibility into your mobile apps and their 3rd-party SDKs for free
  2. Continuous testing
    1. Each commercial SDK and/or open source library, and each version of it, needs to be evaluated for the following items:
      1. What data, if any, does the SDK pull from the app?
      2. What security issues, if any, does the SDK introduce to the app?
      3. If the SDK is commercial…
        1. What are the privacy terms of the SDK?
        2. What security audits have been performed?
      4. If the library is open source…
        1. Who has reviewed the code for security flaws?
As you can see from the above, this is no easy task. Item 2 is easy on paper, but hard to implement at scale since it needs to be continuous and completed for every app release. Despite the challenge, it is something that must be done, as an absence of action could very well lead to the compromise of large volumes of data (both consumer and enterprise data).
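A cheap first step toward the visibility item is purely mechanical: group the app's class names by package prefix and treat every prefix that isn't the app's own as a candidate SDK to vet. The class names and vendor packages below are hypothetical; in practice the list would be extracted from the APK/IPA binary (e.g. via dexdump or class-file inspection):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Minimal sketch of SDK inventory: count classes per "vendor prefix"
// (first two package segments) to surface which vendors are bundled.
public class SdkInventory {

    static Map<String, Integer> inventory(List<String> classNames) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String name : classNames) {
            String[] parts = name.split("\\.");
            String prefix = parts.length >= 2 ? parts[0] + "." + parts[1] : name;
            counts.merge(prefix, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> classes = List.of(
                "com.example.app.MainActivity",      // first-party code
                "com.example.app.net.ApiClient",
                "com.somevendor.analytics.Tracker",  // bundled third-party SDK
                "com.somevendor.analytics.Session");
        // Any prefix that is not the app's own package is a candidate SDK to vet.
        inventory(classes).forEach((prefix, n) ->
                System.out.println(prefix + " -> " + n + " classes"));
    }
}
```

This only removes the blind spot; each surfaced prefix still needs the per-version evaluation described in item 2 above.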

By: Data Theorem courtesy of Peerlyst