Privacy | CyberScoop
https://cyberscoop.com/news/privacy/

OpenAI lawsuit reignites privacy debate over data scraping
https://cyberscoop.com/openai-lawsuit-privacy-data-scraping/ | Fri, 30 Jun 2023
The lawsuit against the generative AI company raises questions about the legal grey area of web scraping in the United States.

The lawsuit filed this week in California against OpenAI, the artificial intelligence company behind the wildly popular ChatGPT app, is rekindling a decade-old debate about the legal and ethical concerns about tech companies scraping as much information as possible about everything — and everyone — on the web.

The suit, filed on behalf of 16 clients, alleges an array of harms, from copyright violations to wiretapping, due to OpenAI’s data collection practices, adding to a growing list of legal challenges against companies repurposing or reusing images, personal information, code and other data for their own purposes.

Last November, coders sued GitHub along with its parent company Microsoft and partner OpenAI over a tool known as Copilot that uses AI to generate code. The coders argued the companies violated the licensing agreements for the code. In February, Getty Images sued Stability AI for allegedly infringing the copyright of more than 12 million images.

As the lawsuit notes, AI companies deploy data scraping technology at a massive scale. The race between every major tech company and a growing pack of startups to develop new AI technologies, experts say, has also accelerated not just the scale of web scraping but the potential harms that come with it. Experts note that while web scraping can have benefits to society, such as business transparency and academic research, it can also come with harms, such as cybersecurity risks and scammers harvesting sensitive information for fraud.

“The volume with which they’re going out across the web and scraping code and scraping data and using that to train their algorithms raises an array of legal issues,” said Lee Tiedrich, distinguished faculty fellow in ethical technology at Duke University. “Certainly, to the extent that privacy and other personally identifiable information are involved, it raises a whole host of privacy issues.”

Those privacy concerns are the centerpiece of the recent California lawsuit, which accuses OpenAI of scraping the web to steal “private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”

“They’re taking personal data that has been shared for one purpose and using it for a completely different purpose without the consent of those who shared the data,” said Timothy Edgar, professor of practice of computer science at Brown University. “It is by definition, a privacy violation, or at least an ethical violation, and it might be a legal violation.”

The ways that AI companies may use that data to train their models could lead to unforeseen consequences for those whose privacy has been violated, such as having that information surface in a generated response, said Edgar. And it will be very hard for those whose privacy has been violated to claw back that data.

“It’s going to become a whack-a-mole situation where people are trying to go after each company collecting our information to try to do something about it,” said Megan Iorio, senior counsel at the Electronic Privacy Information Center. “It will be very similar to the situation we have with data brokers, where it’s just impossible to control your information.”

Data scraping cases have a long history in the U.S. and go all the way up to the Supreme Court. In November 2022, the court heard a six-year-long case from LinkedIn accusing data company hiQ Labs of violating the Computer Fraud and Abuse Act by scraping profiles from the networking website to build its product. The high court denied the claim that the scraping amounted to hacking and sent the case back to a lower court, where it was eventually resolved. Clearview AI, a facial recognition company, has been sued for violating privacy laws in Europe and the state of Illinois over its practice of trawling the web to build its database of more than 20 billion images. It settled the ACLU’s lawsuit in Illinois in May 2022 by promising to stop selling the database to private companies.

Now, LinkedIn’s parent company Microsoft is on the other side of the courtroom, named as a defendant in three different related lawsuits against OpenAI. “The whole issue of data scraping and code scraping was like a crescendo that kept getting louder. It kept growing and growing,” said Tiedrich, who called the AI lawsuit “inevitable.”

The California suit against OpenAI combines the arguments of many of these lawsuits in a whopping 157-page document. Tiedrich says that while there have been recent court cases weighing in on fair use of materials, something relevant to the copyright aspects of the OpenAI lawsuit, the legality of data scraping is full of grey areas for courts and lawmakers to resolve.

“A lot of the big AI companies are doing data scraping, but data scraping has been around. There are cases, going back 20 years, on scraping airline information,” said Tiedrich. “So I think it’s fair to say that, if it gets to a judicial decision, the ruling could have broader implications than just AI.”

The OpenAI lawsuit’s privacy arguments might be even more difficult to uphold. Iorio, who with EPIC filed a friend-of-the-court brief in the LinkedIn case, said the plaintiffs suing OpenAI are in a better position to show those harms since they are individuals, not a company. However, the limitations of federal privacy laws make it hard to bring a data scraping case on those grounds, she said. Of the three privacy statutes cited by the lawsuit, only the Illinois privacy law covers publicly available information of all users. (The lawsuit also cites the Children’s Online Privacy Protection Rule, which protects users under 13.)

That leaves scrapers, whether they are tech giants or cyber criminals, with a lot of leeway. “Without a comprehensive privacy law that does not have a blanket exemption for publicly available data, we have the danger here of this country becoming a safe haven for malicious web scrapers,” said Edgar.

A year after Dobbs, federal privacy legislation to protect abortion seekers remains stalled
https://cyberscoop.com/dobbs-privacy-legislation-abortion-congress/ | Thu, 22 Jun 2023
Legislative efforts have suffered due to little Republican interest and a lack of urgency in Congress to address privacy issues.

The Supreme Court’s decision last June to reverse Roe v. Wade sparked new worries that the massive amount of digital health and location data that companies collect could provide a deep well of evidence for states seeking to track and potentially arrest anyone seeking or receiving an abortion.

Some of those fears have been realized. After the Supreme Court’s Dobbs decision overturning a constitutional right to abortion, news surfaced that Nebraska police served Facebook’s parent company Meta a search warrant for messages that turned out to be related to an illegal abortion. In March, a Texas man filed a civil lawsuit against three women he alleges helped his wife obtain an abortion. The lawsuit cited unencrypted text messages.

But even though the ruling ushered in urgent pleas from advocacy groups and many lawmakers for legislation to safeguard reproductive health data that can be easily obtained by data brokers or law enforcement, there remains little movement in Washington to pass legislation to strengthen U.S. privacy protections.

“I do think that there is a greater awareness now,” Rep. Sara Jacobs, D-Calif., told CyberScoop.  “As a young person, I think it’s taken Congress too long to catch up with the American people in understanding these vulnerabilities.”

Jacobs, who introduced the “My Body My Data Act” last year, is one of the lawmakers who have led the conversation about reproductive and sexual health privacy in the wake of Dobbs. The legislation limits the reproductive and sexual health data that entities can collect and protects personal data, such as cellphone data and search engine history, that is not currently covered by the landmark health data protection law, the Health Insurance Portability and Accountability Act of 1996.

Jacobs reintroduced the legislation in May with 91 cosponsors in the House and 13 sponsors in the Senate. She is still seeking privacy-minded Republicans to co-sponsor the legislation but acknowledged the difficulty in getting bipartisan support.

“The fact of the matter is this isn’t just about abortion,” said Jacobs. “It’s all sexual and reproductive health data. So if you’re a 70-year-old Republican man who doesn’t want your wife to know you’re searching about gonorrhea on Google, this does protect you, too.”

Rep. Anna Eshoo, D-Calif., a co-sponsor of the bill, said that while Republicans aren’t going to take up the bill, “it’s important to build support for these policies so that the minute that we take over the majority that we’re ready to go.”

The legislation has gained the support of a number of civil liberties and reproductive rights groups. “With the GOP dead set on criminalizing abortion, it is critical that we do everything we can to protect the data and privacy of those seeking and providing care. We’ve seen our champions at the federal and state levels spring into action to do so, including U.S. Representative Sara Jacobs … or California state Assemblymember Rebecca Bauer-Kahan’s AB 254—which ensures the privacy of individuals when they use apps and websites that provide reproductive health services,” NARAL Pro-Choice America Communications Director Ally Boguhn wrote to CyberScoop in an email.

Where Washington has seen more success in protecting reproductive health data is in regulatory action helmed by the White House. For instance, President Biden last summer signed an executive order tasking the Federal Trade Commission and the Department of Health and Human Services with protecting abortion services. In April, HHS proposed a rule that would strengthen existing privacy protections under the Health Insurance Portability and Accountability Act by prohibiting healthcare providers from disclosing reproductive healthcare data for use in investigating an individual for a legal abortion. Last week, 24 state attorneys general threw their support behind the proposed rule.

States with pro-choice leadership have also pushed through a cohort of laws around reproductive health, such as shield laws in New York, Washington and California barring entities from sharing data about legal abortions with states conducting criminal investigations into the behavior. California lawmakers are also seeking to ban reverse search warrants that could ensnare abortion seekers, as CyberScoop first reported.

“So far, we haven’t seen indicators that these types of shield laws have actually proved necessary… but we expect it is certainly just a matter of time before they do,” said Jake Laperruque, deputy director of the Security and Surveillance Project at the Center for Democracy & Technology.

Experts note that there could be several reasons there haven’t been more high-profile cases of digital evidence showing up in abortion criminal cases. One is that companies that receive law enforcement requests are often subject to an initial gag order. Secondly, abortion-related investigations may not be explicitly labeled and may instead be charged as a crime like murder or child endangerment.

Even with some strides by states and the Biden administration, many state and agency protections fall short without Congress codifying abortion and privacy rights. For instance, the HIPAA rulemaking only applies to states where abortion is legal. Applying the protections to all states would take an act of Congress. Jacobs and Eshoo’s Secure Access for Essential Reproductive (SAFER) Health Act, which served as a model for the HIPAA rule, would apply to all reproductive health information regardless of local laws.

Democrats in both chambers are expected to force a vote on legislation protecting access to abortion nationally ahead of the Dobbs anniversary.

Moreover, legislative solutions tailored to health data ignore a world of data that can also be used to incriminate abortion-seekers, such as private messages and geolocation history. Protecting other forms of metadata requires more comprehensive federal privacy legislation. After Dobbs, many civil society groups redoubled support of the American Data Privacy and Protection Act, comprehensive privacy legislation that passed out of its House committee last summer but has yet to be reintroduced this year. Eshoo and Rep. Zoe Lofgren, D-Calif., introduced their own comprehensive privacy legislation, the Online Privacy Act, in April.

“ADPPA would go a long way to not only raise the bar for protections around sensitive data, including health data,” Andrew Crawford, senior policy counsel on the Privacy and Data Project at CDT, told CyberScoop. “Our approach to data privacy burdens the consumer far too much and does not place much of a requirement on companies to act as responsible stewards.”

Crawford, who recently released with CDT a guide to best practices for companies to protect reproductive health data, says that the private sector also plays an important role in protecting consumers. Concerns about reproductive health privacy have put pressure on companies to take steps to reduce harm. For instance, Google last summer promised to stop collecting users’ location data for visits to reproductive health clinics. Fertility and reproductive tracking apps at the center of new fears also responded with new privacy modes, such as Flo’s anonymous mode, which allows users to track on the app without sharing data like their name or IP address.

Crawford emphasized that any company that collects information like location data could find itself on the receiving end of a law enforcement request and that the kinds of protections that CDT is pushing apply to every company.

“We would like folks to embrace these best practices right now and they don’t necessarily need legislation to do it,” said Crawford.

Lawmakers aren’t giving up, however. “I think together with my colleagues we have built legislative products that are worthy of the support of members but most importantly they would be laws that would fully protect women in our country given the Dobbs decision,” Eshoo told CyberScoop. “It’s a different era. It’s a different time.”

FTC accuses genetic testing company of exposing sensitive health data
https://cyberscoop.com/ftc-1healthio-health-data-privacy/ | Fri, 16 Jun 2023
The case is the latest in a series of FTC enforcement actions focused on health data privacy and the first involving genetic information.

The Federal Trade Commission on Friday accused the genetic health testing firm 1health.io of failing to protect sensitive genetic and health data, the latest in a series of FTC enforcement actions focused on health data privacy and the first involving genetic information.

The FTC alleges that the California-based 1health, previously known as Vitagene, deceived customers about its privacy policy, retroactively changed that policy and misled customers about its process for deleting data. The company will pay $75,000 to the FTC for consumer refunds as part of a settlement with the agency.

“Companies that try to change the rules of the game by re-writing their privacy policy are on notice,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “The FTC Act prohibits companies from unilaterally applying material privacy policy changes to previously collected data.”

Vitagene’s DNA test kits provide reports that include personal information such as ancestry and level of risk for certain health problems, such as high triglycerides and obesity. According to its website, 1health provides testing to corporate and government clients.

According to the complaint, Vitagene stored nearly 2,400 records belonging to at least 227 consumers in publicly accessible data buckets on Amazon Web Services, exposing sensitive consumer data and raw genetic data, some of which was tied to consumers’ names. Vitagene had claimed that it did not store DNA results connected with identifying information.

According to the FTC, Vitagene was warned three times that the unencrypted health and user data was publicly accessible but only fixed the issue and notified customers in 2019 after a security researcher shared their findings with the media.

The FTC accused the company of deceiving customers by failing to follow through with its promises to customers that they could delete their data at any time. The company later began sharing customer information with third parties without notifying customers of the change.

As part of the proposed order, 1health will be prohibited from sharing health data with third parties without obtaining affirmative customer consent. It must also implement a new security program to address the security concerns in the complaint and notify the FTC about any incidents of unauthorized disclosures of consumer health data. 1health will be required to destroy all DNA samples retained for more than 180 days.

The proposed agreement will be made available for public comment for 30 days before the agency reaches a final settlement.

1health CEO Mehdi Maghsoodnia called the FTC investigation a “case of extraordinary government overreach.”

In a statement, Maghsoodnia said the company first learned in July 2019 that a “small number of customer files had been inadvertently stored in a publicly accessible location” but that the company has no evidence they were “improperly accessed.”

“In response, the FTC launched an investigation which has now dragged on for nearly four years,” Maghsoodnia said. “Ultimately, we disagree with many of the FTC’s conclusions. But we look forward to finally putting this matter behind us.”

Updated June 16, 2023: To include comment from 1health.

Congress and intelligence officials spar over surveillance reforms
https://cyberscoop.com/congress-fbi-section-702/ | Tue, 13 Jun 2023
Members of the Senate Judiciary Committee remain unconvinced that existing reforms are sufficient to address abuse of surveillance authorities.

Lawmakers and U.S. intelligence officials clashed at a Senate Judiciary hearing Tuesday over how to reform a controversial surveillance program set to sunset at the end of this year, setting the stage for a difficult legislative battle to renew or potentially reform the law.

Representatives of the Justice Department and FBI made the case that the long history of abuses linked to Section 702 of the Foreign Intelligence Surveillance Act is already being addressed by significant new reforms instituted in the last two years. But several members of the Judiciary Committee questioned whether these reforms go far enough and pressed witnesses about potentially more serious reforms, including a warrant requirement for using the sensitive intelligence data.

“I will only support the reauthorization of Section 702 if there are significant, significant reforms. And that means first and foremost, addressing the warrantless surveillance of Americans in violation of the Fourth Amendment,” Senate Judiciary Chair Dick Durbin, D-Ill., said in his opening statement.

The hearing sets up an uphill battle for the Biden administration to get Congress to renew the authority without changes. The administration and its surrogates insist that failing to renew the law would have grave national security consequences. Ahead of Tuesday’s hearing, Biden administration officials detailed several newly declassified examples of Section 702’s usefulness in combating cyber operations and narcotics trafficking.

But that argument has so far failed to gain traction on the Hill. Lawmakers at Tuesday’s hearing were largely united in opposing a clean reauthorization, arguing that the intelligence community hasn’t shown that it can self-correct a history of serious abuses or that current systems don’t merit greater reforms.

“Why should we ever trust the FBI and the DOJ again to police themselves under FISA, when they’ve shown us repeatedly, for more than a decade, that they cannot be trusted to do so?” asked Sen. Mike Lee, R-Utah.

The Judiciary Committee members aired a variety of reform proposals, including a warrant requirement for Section 702, adding an “adversarial” process to the FISA system and assigning amicus curiae to targeted individuals who can otherwise not challenge their surveillance — all proposals that Tuesday’s witnesses opposed.

At the heart of lawmakers’ concerns is the FBI’s use of Section 702, which is designed to collect data belonging to foreign intelligence targets whose communications transit U.S. communications infrastructure, to query data incidentally collected on Americans. Committee members raised concerns about the FBI’s history of abusing incidental collection, citing a court ruling declassified last month that showed that the FBI misused the powerful surveillance tool more than 278,000 times.

The Justice Department’s Assistant Attorney General for National Security Matt Olsen said that these abuses predate reforms undertaken by the agency in 2021 and 2022 and that the bureau’s policies would prevent them from recurring.

These reforms include requiring agents to opt in to search data, something that was a driving factor in reducing U.S. person queries by more than 90% between 2021 and 2022, Olsen said. The Foreign Intelligence Surveillance Court is currently carrying out a declassification process for a 2023 opinion that identifies “some additional improvements in FBI compliance,” according to Olsen.

On Tuesday, FBI Deputy Director Paul Abbate announced a pair of new compliance measures that the agency is putting forward as it tries to reduce FISA abuse. The first is a three-strike policy for query-related incidents that could lead to an agent’s dismissal. The second involves evaluations that can affect performance ratings and promotions for agency leaders monitoring 702 compliance in their divisions.

Abbate told Sen. John Cornyn, R-Texas, that the bureau would welcome codifying reforms already in place into law.

Civil liberties advocates said these reforms fail to address the surveillance abuses — including the collection of data belonging to racial justice protesters and political donors — committed under Section 702.

“The new items the FBI touted at the hearing are wholly inadequate, and out of touch with how serious these abuses are,” said Jake Laperruque, the director of the Security and Surveillance Project at the Center for Democracy and Technology, a group that is calling for FISA reforms.

Tuesday’s hearing previews what is likely to be a significant clash between the Biden administration and Congress over the possibility of a warrant requirement for U.S. person queries of Section 702 data. The reform is one that lawmakers at the hearing largely supported but the administration has opposed. On Monday, a senior administration official said a warrant requirement would have “very serious national security costs.”

Intelligence agency officials testifying in front of Congress shared those concerns. “The reason for not requiring a warrant is that this is lawfully collected information that is in the FBI holdings,” said the Justice Department’s Olsen.

But lawmakers expressed skepticism about the FBI’s argument.

“The U.S. person query aspect of this is really concerning to the Congress,” said Sen. Jon Ossoff, D-Ga. “I don’t think you’ve effectively made the case that there shouldn’t be a warrant requirement, whether or not it is constitutionally required.”

AI chatbots want your geolocation data. Privacy experts say beware.
https://cyberscoop.com/ai-chatbots-privacy-geolocation-data-google/ | Thu, 08 Jun 2023
Sharing any form of personal data with generative AI models can be risky, experts say.

When Google’s artificial intelligence chatbot Bard recently began asking for precise geolocation information, it may not have seemed unusual, given the tech giant’s propensity to collect as much information as possible about everyone.

Google’s other products, such as Maps and Search, also prompt users to give up this kind of information. But privacy experts caution that the request from its AI chatbot represents a growing creep in data collection by large language models that could lead to an array of potential privacy harms.

“There’s a whole host of reasons to be concerned about the security of location data and its implications for the privacy of users of the system,” said Sarah Myers West, managing director at the AI Now Institute, a research institute that studies the social implications of AI.

That includes the potential subpoenaing of that data by law enforcement, a concern that has become especially pronounced in connection to growing worries about how law enforcement may access geolocation data in cases criminalizing access to abortion. The abuse or breach of geolocation data can also lead to other harms, such as stalking.

Concerns about sharing location data with AI models speak to the “wild west” nature of a rapidly growing industry that is beholden to few regulations and largely opaque to consumers and lawmakers. Consumer technologies such as Bard are less than a year old, making it largely unclear what repercussions they could have for privacy down the line.

The rapid growth of the industry has left regulators in the U.S. and abroad sorting out how the technology fits under current privacy regulations. OpenAI, a leader in the field, landed in hot water in Italy earlier this year after the country’s data protection authority accused it of violating the EU’s data protection rules. More recently, U.S. regulators have warned AI companies “sprinting” to train their models on more and more data that existing consumer protections still apply and that failing to heed them could lead to enforcement.

According to Google’s privacy policy for Bard, Google uses the data it collects — including information about locations — “to provide, improve, and develop Google products and services and machine learning technologies, including Google’s enterprise products such as Google Cloud.”

“Bard activity is stored by default for up to 18 months, which a user can change to 3 or 36 months at any time on their My Activity page,” Google said in a statement. Google’s privacy policy says it may share data with third parties including law enforcement.

OpenAI’s privacy policy says that it may share geolocation data with law enforcement, but it’s not clear from ChatGPT’s user policy whether, or under what circumstances, ChatGPT collects this data. OpenAI did not respond to a request for clarification.

Precise geolocation data isn’t the only form of location data that companies collect. For instance, Bard, as well as its competitor OpenAI’s ChatGPT, also collects IP address data, which reveals geolocation information but not precise physical locations.

However, detailed geolocation data is “substantially more sensitive,” because it can be used to track your exact movements, explains Ben Winters, senior counsel at the Electronic Privacy Information Center, a nonprofit advocacy group.

Location data is considered so sensitive that some members of Congress have sought to ban the sale of location data to data brokers. In the wake of the Dobbs decision that overturned Roe v. Wade, Google itself pledged to wipe location data from Maps users’ visits to reproductive health clinics, though recent reporting shows deletions have been inconsistent.

Sharing any form of personal data with generative AI models can be risky, experts say. In March, OpenAI took ChatGPT offline to fix a bug that allowed users to view prompts from other users’ chats and, in some cases, payment-related information of subscribers. It’s also unclear in many cases where data used for training large language models could end up, or whether it might be regurgitated to other users down the road.

Major companies including Apple and Samsung have restricted their employees’ use of the tools over fear that it could result in the leak of trade secrets. Leading lawmakers have also pressed AI companies on what steps they take to secure sensitive user data from misuse and breaches.

“This technology is fairly nascent,” said Myers West from the AI Now Institute. “And I just don’t think that they themselves have fully dialed in the privacy-preserving capabilities.”

In response to some of these concerns, OpenAI in April introduced data control features for users, including allowing them to turn off chat history. Google Bard also gives users the option to review and delete their chat history.

Experts note that Google’s grab at geolocation data also plays into competition concerns around the AI industry: big tech companies will be able to use the burgeoning generative AI market to further bulk up data collection on consumers and gain a competitive edge.

“I think that there is a trend toward increased density of data collection,” said Winters, noting that Google’s move could give the rest of the industry a reason to ramp up data collection.

Winters said users should be cautious about models that collect anything beyond their input and account information and those that say they use the data for any purposes beyond providing an output or potentially improving the model.

While regulations around large language models are still nascent, some regulators have already issued warning shots to the industry to heed.

The White House says Section 702 is critical for cybersecurity, yet public evidence is sparse
https://cyberscoop.com/white-house-section-702-fisa-surveillance/ | Fri, 02 Jun 2023
An FBI official told CyberScoop that a "plurality" of Section 702 searches pertain to investigations into nation-state cyberattacks.

Since the Biden administration came out in favor of reauthorizing Section 702 of the Foreign Intelligence Surveillance Act in February, the intelligence community has pointed to the growing threat of foreign cyberattacks on the U.S. as a key argument in favor of the controversial surveillance tool.

Officials have made broad and general declarations, pointing to wide-ranging applications that include thwarting multiple ransomware attacks against U.S. critical infrastructure, finding out a foreign adversary had hacked sensitive information related to the American military and uncovering a cyberattack against critical federal systems.

Yet, 15 years into Section 702’s history, declassified examples of thwarted cyberattacks are sparse. In the little more than three months that the Biden administration has been publicly advocating for the renewal of Section 702, it hasn’t cited a single specific public incident in which Section 702 was used, despite a term marked by both ample cyberattacks and well-publicized takedowns of foreign hackers.

That lack of transparency and specificity doesn’t appear to be helping the Biden administration in what will likely be an uphill battle to persuade Congress to reauthorize the authority before it sunsets in December. Even some of the authority’s greatest supporters have expressed frustration.

“Whether it’s helping to identify victims so they can be notified of the attack or helping to identify ransomware actors, 702 has been invaluable over the past several years,” Sen. Mark Warner, D-Va., told CyberScoop in an email. “However, I am frustrated that more of these compelling examples have not yet been made public.”

Warner’s office confirmed that the intelligence community has shared examples of the tool’s cyber significance in classified settings but declined to elaborate.

“While it’s important that we do not risk sources and methods, it is also critical that we explain to the American people what will be lost and how they would be increasingly vulnerable to cybercriminals and foreign governments if this authority were allowed to expire,” the Senate Intelligence chairman wrote.

Adam Hickey, former assistant attorney general of the Justice Department’s national security division, echoed Warner’s concerns. “I think they’re fighting with one hand behind their back,” said Hickey, now a partner at the law firm Mayer Brown. “On the one hand, you don’t want the very people who pose a threat to understand your capabilities, because they will work around them … On the other hand, you don’t want to be so careful to avoid that risk that you lose the very authority itself.”

The reticence isn’t helping the civil liberties community, either, which has challenged the intelligence community’s persistent claims that any reforms to Section 702 that slow down investigators would imperil America’s national security.

“If that’s what the FBI is going to say — not only is it useful for cyber, but it’s useful in this preventive way, this very rapid way — I think this claim needs to be able to be backed up with some examples,” said Jake Laperruque, deputy director at the Security and Surveillance Project for the Center For Democracy & Technology.

Section 702 was first passed in 2008 as an amendment to FISA, pitched initially as a key tool in America’s fight against terrorism. The authority allows the U.S. government to collect the U.S.-based communications of non-Americans outside the country. The collection of U.S. citizens’ data under Section 702 is prohibited, but such data is often swept up in the surveillance through “incidental collection.” This data can be searched by the FBI under certain statutory requirements.

While the number of FBI searches of 702 data has fluctuated over time, the share of those searches related to cybersecurity has steadily increased. In a recent interview with CyberScoop, a senior FBI adviser confirmed that “about half” or a “plurality” of the Section 702 database searches the agency makes today relate to the investigation of malicious, state-sponsored cyberattacks. While the adviser couldn’t say how much of an increase that was from previous years, they said it reflected an overall shift in the agency’s work toward more cyber investigations.

“Our use of the authority in the FBI and across the intelligence community is weighted a lot more heavily towards cyber now than it was five years ago,” the senior FBI adviser said. “Part of that use of this authority is reflective of its value, and the fact that we are just doing more work in this field and we’re seeing cyber threats increase over time.”

While the FBI adviser couldn’t share any specific examples, there is some limited data on how Section 702 data has shown up in cyber investigations. For instance, in its 2022 annual transparency report, the Office of the Director of National Intelligence wrote that of the 3.4 million searches made by the FBI in 2021, nearly two million were related to an investigation into an alleged attempt by Russian hackers to break into critical infrastructure. The searches helped identify potential victims, officials said at the time.

The number of FBI searches declined dramatically in 2022, in part due to a new methodology used by the FBI to count searches.

“Cyberattacks happen at a larger scale. And therefore, the amount of information collected and queried on cyberattacks is just proportionately larger,” said Tom Bossert, the former homeland security adviser in the Trump administration. “You can imagine hundreds of thousands of attempted cyberattacks in any given period of time, and perhaps only five terrorist phone calls in that same period.”

In its early days, Section 702 was branded as a powerful counterterrorism tool, reflecting the intelligence community’s focus at the time. In fact, some of the program’s biggest declassified successes involve foiling terrorist plots and taking down terrorist leaders. Most recently, last summer Section 702 intelligence led to a successful operation against al-Qaeda leader Ayman al-Zawahiri.

It was only in 2017, amid the last renewal debate, that cybersecurity began to take a more prominent role, with examples of thwarted ransomware attempts eclipsing references to ISIS and other terrorist cells. Now, it often takes top billing when officials discuss the threats that nation-states pose to the homeland. In its 2023 annual threat assessment, the Office of the Director of National Intelligence listed the cyber capabilities of China, Russia, North Korea and Iran among the leading threats to the nation.

Bossert, who was in charge of the Trump administration’s efforts to secure a reauthorization in 2017, sees the new strategy in part as reflective of the national security community’s shifting focus. “I think a lot of people will perceive the cyber threat to be real and ever-present. And fewer people find the terrorist threat to be as urgent,” he said. “And I’d like to think that’s because we’ve spent 20 years confronting that problem and putting controls in place.”

Officials say part of the reason Section 702 has become so valuable in thwarting foreign actors is the complicated nature of cyberattacks. In the majority of cases, attackers use U.S. infrastructure as a lily pad into domestic targets. Intelligence officials have often pointed to this as a challenge when trying to follow the activity of foreign actors onto domestic soil, noting it as a “blind spot” that contributed to the failure to detect Russian hackers during the SolarWinds attack.

Section 702, they say, restores that visibility. “It is an authority that lets us do collection against a known foreign entity who chooses to use U.S. infrastructure,” NSA director of cybersecurity Rob Joyce told a crowd at the RSA Conference in April. “And so it makes sure that we don’t afford the same protections to those foreign malicious actors who are on our infrastructure as we do the Americans who live here.”

“I can’t do cybersecurity at the scope and scale we do it today without that authority,” he added.

The FBI and NSA aren’t alone in praising the tool. This week a senior State Department official spoke about how the tool has been instrumental in informing the work of U.S. diplomats, including on cybersecurity issues such as North Korean IT fraud.

One potential stakeholder the Biden administration has yet to seriously court in the fight to renew Section 702 is industry. The senior FBI adviser stressed how failure to renew the authority would hurt its ability to advise chief information security officers, inundated with warnings about vulnerabilities, about which specific threats are most urgent.

“This is one of those things that lets us reach out to specific sectors and even specific companies to say, look, this specific vulnerability is one you want to take care of right now because we’re seeing certain types of actors targeting companies, companies like you, using that,” the senior FBI adviser said. “We’re going to have a severely constricted optic in all those things if we’re forced to rely solely on other tools.”

Former general counsel of the National Security Agency Stewart Baker has made the case that the intelligence community should do more to demonstrate to industry how they can benefit from Section 702. “If I were a CISO, I’d want to weigh in on the kinds of warnings, the kinds of uses of this intelligence in real-time, that would be particularly useful to me.”

Businesses need to understand that if Section 702 goes away, so does that intelligence, says Bossert. “They shouldn’t just think of this as a national security threat. They should think of this as an enterprise threat to their company. And they should view the US government as a potential partner,” he said. “If they expect the US government to continue to be a reliable partner…they have to understand that the underlying information that they have to share is in the government’s holdings because of authorities like 702.”

The senior FBI adviser told CyberScoop that the agency is looking at ways to increase industry engagement on the subject. “There’s a variety of different stakeholders here. And industry, particularly when we’re talking about cyber, is a very important one,” the senior FBI adviser said. “So that is something that we are going to take a look at going forward about how we can start getting them engaged now that this is really starting to bubble up to the top of the public conversation as well as the conversation on Capitol Hill and in other stakeholder constituencies.”

Even if there were more examples, it’s unclear if Section 702’s purported value in preventing these attacks can overcome the program’s many criticisms, both from lawmakers wielding the power to reauthorize it and civil liberties groups seeking to reform the program. Most of the political pushback against the authority centers around concerns about well-documented abuses of America’s civil liberties, public examples of which have nothing to do with ransomware or foreign actors infiltrating critical infrastructure.

For instance, a recently declassified 2022 U.S. court ruling found that the FBI had improperly searched for information on Americans in the FISA database 278,000 times, including to spy on political campaigns and protesters. The report sparked outrage from both leading Democrats and Republicans who insist that the program can’t be reauthorized without reforms.

(The FBI argues that it has implemented new compliance measures since those searches occurred to cut down on misuse.)

Officials advocating for Section 702’s reauthorization have been vague about what reforms they would be willing to discuss, instead emphasizing that changes should not diminish the tool’s effectiveness. The reforms sought by advocates and lawmakers may do just that, at least in the eyes of the intelligence community. For instance, the senior FBI adviser said a warrant requirement, one of the top asks from reformers, would make it difficult for the agency to act swiftly to notify ransomware victims.

CDT’s Laperruque noted that courts have long recognized emergency exceptions to the warrant process. Reforms such as adding a warrant requirement to Section 702, which CDT and other groups are advocating for, wouldn’t change that.

“That’s not going to stop Section 702 from being used for cyber,” said Laperruque. “It’s going to stop 702 from being used on Black Lives Matter and members of Congress, which is what we’ve seen 702 used for in recent years.”

Section 702 data led to State Department warnings about North Korean IT scams, official says https://cyberscoop.com/section-702-fisa-state-north-korea/ Tue, 30 May 2023 19:42:10 +0000 https://cyberscoop.com/?p=74380 State intelligence division joins a chorus of Washington officials pushing to renew a controversial surveillance tool.

The post Section 702 data led to State Department warnings about North Korean IT scams, official says appeared first on CyberScoop.

A controversial surveillance authority played a vital role in the State Department’s ability to learn about North Korea’s efforts to commit digital fraud to fund its nuclear program and to warn international partners and U.S. businesses, a senior State Department official said Tuesday.

The revelation about the 2022 scheme comes as the State Department joins the intelligence community, the Justice Department, and the White House in pushing for Congress to renew Section 702 of the Foreign Intelligence Surveillance Act before its sunset at the end of this year.

While other officials have focused primarily on the surveillance tool’s importance in combatting nation-state threats, Brett Holmgren, the State Department’s assistant secretary for its Bureau of Intelligence and Research, emphasized how crucial the tool is to diplomatic efforts.

“From Russia, China, Iran and North Korea to foreign influence and cyber threats, 702 reporting provides our analysts with unique insights that when combined with other sources of information, make our policymakers better informed about the issues so they can make better decisions,” he said at a Center for Strategic and International Studies event in Washington on Tuesday.

Section 702 of FISA allows intelligence agencies to collect the U.S.-based communications of non-U.S. citizens. While the authority does not allow for the collection of Americans’ data, that information is often swept up in searches and can later be queried under certain conditions.

In addition to North Korean IT fraud, Holmgren pointed to a variety of State Department issues including human rights work. He noted that Section 702 intelligence directly enabled diplomats to act against a Middle Eastern state in 2021 that was surveilling and tracking dissidents abroad. More broadly, the information has enabled the department to monitor “Russian atrocities” in Ukraine and share intelligence with allies supporting Ukraine.

The State Department intelligence and research division’s use of Section 702 data reflects the intelligence community’s shift in recent years from counterterrorism toward a litany of nation-state threats, including cyberattacks. In its 2023 annual threat assessment, the Office of the Director of National Intelligence listed the cyber capabilities of China, Russia, North Korea and Iran among the leading threats to the nation.

Section 702 has over the years also proved a diplomatic challenge for the U.S., including as part of European Union concerns about American surveillance that led to the invalidation of the last transatlantic data-sharing agreement. The U.S. and European Union are expected to reach a new agreement this summer.

Holmgren noted that, like other agencies, the Bureau of Intelligence and Research is in the process of implementing the privacy controls required by the White House’s signals intelligence executive order issued last year as part of the forthcoming EU-U.S. data transfer agreement. The executive order also establishes a not-yet-enacted redress mechanism for EU citizens who believe their data has been collected in a way that violates U.S. law.

“I think it’s a model for the future in the age of ubiquitous communications and in a rapidly evolving communications technology environment for other nations to follow in terms of how do you conduct legitimate intelligence activities, again, consistent with these transparency and privacy imperatives,” said Holmgren.

Holmgren acknowledged concerns from the civil liberties community about the abuse of Section 702 data and noted that State’s intelligence division has a detailed compliance process for requests to unmask the identity of Americans whose information is included in data shared with the agency.

The Biden administration began its push to renew the authority in February but has so far been met with resistance from lawmakers from both parties who have called for serious reforms or in some cases scrapping the program altogether. The FBI’s claims that internal reforms have cut down on inappropriate searches of Section 702 data have been dwarfed by outrage over recently declassified reports of repeated abuse.

Holmgren didn’t directly address reforms but said that anything that diminishes the amount of intelligence received by the agency “would have a negative impact on the work of our analysts to produce their assessments and the support that we provide to our diplomats.”

So far, no legislation to reauthorize or reform the authority has been introduced in Congress.

Broad coalition of advocacy groups urges Slack to protect users’ messages from eavesdropping https://cyberscoop.com/advocacy-groups-slack-encryption/ Wed, 24 May 2023 12:17:26 +0000 https://cyberscoop.com/?p=74257 Tech, civil liberties and reproductive justice groups want the company to offer end-to-end encryption so users' messages remain private.

The post Broad coalition of advocacy groups urges Slack to protect users’ messages from eavesdropping appeared first on CyberScoop.

A broad coalition of technology, civil liberties, reproductive justice and privacy advocacy groups is urging the global workplace collaboration platform Slack to offer end-to-end encryption so that its users’ messages can’t be read by government officials or eavesdropping bosses.

“Right now, Slack is falling short in terms of the most basic guardrails for platform safety and privacy,” a group of 93 organizations wrote in the letter. “At this political moment, this can mean life or death for some people online. We call on Slack to go beyond statements and put into action its commitment to human rights by implementing basic safety and privacy design features immediately.”

Concerns about the security of private messages have come into greater focus in recent years due to a number of factors, including the rise of government use of spyware on activists and dissidents as well as the increased risks posed to reproductive rights after the U.S. Supreme Court overturned the right to abortion last summer. While there are no reported instances of Slack messages being weaponized in these cases, the trove of communications the platform collects from clients ranging from government agencies to activists has made users’ communications a target of both lawsuits and hackers.

The letter from groups such as the Mozilla Foundation and the Tor Project is the latest step in a campaign led by the digital rights advocacy group Fight for the Future that urges messaging companies to adopt encryption. Fight for the Future launched its campaign last year in response to the Supreme Court’s Dobbs decision that ended the constitutional right to abortion, a ruling that led to concerns that abortion seekers’ unsecured communications could be used against them in criminal prosecutions.

In the aftermath of Dobbs, companies such as Meta doubled down on existing encryption efforts. However, Fight for the Future campaign director Caitlin Seeley George said that Slack, which was named alongside companies such as Meta, Twitter and Google in the “Make DMs Safe” campaign, hasn’t been responsive to the group’s requests.

The concerns raised by the Fight for the Future campaign aren’t abstract. In the past year, there have been several high-profile cases in which law enforcement used private messages turned over by tech companies to investigate illegal abortion.

“We’re moving to a point where the expectation that communication platforms have end-to-end encryption is becoming the new norm,” said Seeley George. “I think people broadly are a lot more aware and cautious about how they’re communicating with people in part because, unfortunately, we’ve seen cases pop up already where the consequences of not having secure messaging have become really clear.”

Slack has more than 10 million daily users around the globe and is used by a range of entities including government agencies, political campaigns and Fortune 500 companies. The platform does encrypt data in transit. However, user messages are not protected with end-to-end encryption, meaning that workspace administrators or Slack itself can read conversations. Without end-to-end encryption, that data can also be accessed by law enforcement agencies that request it.

Slack said in a blog post that its policy is to “carefully review all requests for legal sufficiency and with an eye toward user privacy.” According to its most recent transparency report, Slack received 31 law enforcement requests between January 1 and December 31, 2021. Five of those requests involved content data.

Ranking Digital Rights, one of the groups that signed the letter, observed that Slack’s practices put it in the minority among global messaging services and align it more closely with Chinese messaging platforms.

The letter to Slack comes amid growing pressure on encrypted messaging services from lawmakers both in the U.S. and abroad. WIRED reported Monday that a leaked European Council document showed a majority of the EU countries represented supported some form of scanning of encrypted messages, with Spain taking the most extreme position of advocating a full ban on the technology.

In addition to end-to-end encryption, the groups behind the letter are urging Slack to adopt anti-harassment tools such as blocking and reporting features. In the past, the company has said that such a feature doesn’t make sense for a workplace tool. Critics say that the messaging platform is used by a broad array of groups and that workplace harassment on Slack is a well-documented issue that got even worse during the rise of remote work.

Caroline Sinders, a researcher who has been pushing Slack to introduce a block feature since 2019, says that anti-harassment and encryption features are the “seatbelts of online safety.” “We need to shift our thoughts away from thinking of these solely as additional features, but as necessary and required functionality to create and maintain a healthier web,” she said in a statement.

Slack responded to a request for comment from CyberScoop by reiterating its user privacy policies.

“Slack is a workplace communication tool and we take the privacy and confidentiality of our customer’s data very seriously,” a spokesperson wrote in an email. “Our policies, practices, and default settings are aligned with business uses of our product.”

Seeley George said that it’s important to push companies that have come out as pro-choice to follow through with that commitment when it comes to user security. “We can’t and won’t let companies like Slack hide behind good PR moments,” she said. “We really need to push them to go further and really consider safety more holistically.”

Updated May 24, 2023: To include a comment from Slack.

What the record-breaking $1.3 billion Meta fine means for the US-EU clash over spying programs https://cyberscoop.com/eu-fine-meta-privacy-global-internet/ Mon, 22 May 2023 20:28:42 +0000 https://cyberscoop.com/?p=74223 The order speaks to a larger transatlantic rift over an American surveillance program that gathers data on European citizens.

The post What the record-breaking $1.3 billion Meta fine means for the US-EU clash over spying programs appeared first on CyberScoop.

European regulators hit Meta with a $1.3 billion fine on Monday, the largest penalty ever issued under the European Union’s General Data Protection Regulation, and ordered the company to stop future transatlantic transfers of user data to the U.S. within five months.

The order has the potential to radically reshape Facebook’s business in Europe and throws into question the future of billions of data transfers made daily between U.S. companies and European counterparts, potentially leading the global tech industry toward a regime of data localization that closes off global trade and raises new security concerns.

At the center of how the penalties will ripple through the industry in the coming months are pending negotiations between EU and American officials on a new agreement regulating the transatlantic exchange of personal data for commercial purposes. Failure to reach a deal could force Meta and other companies to cease data transfers from the European Union, requiring them to cordon off EU users from the rest of the world or potentially pull services altogether.

“It puts tremendous pressure on the U.S. government and the European Council to move forward as quickly as possible,” said Caitlin Fennessy, vice president at the International Association of Privacy Professionals and former director of the U.S. Privacy Shield, the previous data transfer agreement between the EU and the United States that was invalidated in 2020.

“The impact of stopping transfers hits both EU and US companies and their economies,” she said, noting that 94% of IAPP’s members use the same form of the data transfer agreement as Meta.

The European Union and the U.S. first reached a privacy agreement in 2016 as a means of deeming the protection of the transfer of EU data adequate under GDPR. A 2020 Court of Justice of the European Union decision about a case challenging Facebook’s transfer of EU data invalidated the agreement.

Ireland’s Data Protection Commission issued Monday’s penalties against Meta after finding the company failed to comply with that 2020 ruling.

In March 2022, the U.S. and EU announced they had reached an agreement on a framework for a new deal but both parties are still ironing out details. Officials have said publicly they expect to finalize the agreement this summer, likely before the six-month deadline for Meta to come into compliance with the order.

Meta said in a statement that there would be no immediate disruption to Facebook and that it planned to appeal both the decision and the fine, seeking an immediate stay on deadlines for changes.

“Ultimately, the invalidation of Privacy Shield in 2020 was caused by a fundamental conflict of law between the US government’s rules on access to data and the privacy rights of Europeans,” Nick Clegg, Meta president of global affairs, and Jennifer Newstead, Meta’s chief legal officer wrote in a blog on Monday. “It is a conflict that neither Meta nor any other business could resolve on its own. We are therefore disappointed to have been singled out when using the same legal mechanism as thousands of other companies looking to provide services in Europe.”

The ripple effects of the decision are likely to extend to many other tech companies with users in Europe. “This issue goes far beyond Meta; the time has come for the United States and the European Union to operationalize this agreement quickly, returning certainty to data flows that underpin transatlantic economic ties, society, and our international cooperation,” Sean Heather, senior vice president for International Regulatory Affairs and Antitrust at the U.S. Chamber of Commerce, wrote in a statement.

Another trade group, Computer & Communications Industry Association, warned that the decision “effectively makes the way the internet works illegal, from video conferencing and browsing the internet, to the processing of online payments.”

The order, while adding to Meta’s heavy tab of global fines for violating user privacy, also speaks to a larger conflict between the U.S. and the European Union over American surveillance programs that sweep up European user data. Specifically, in its decision to invalidate the last Privacy Shield, the CJEU expressed concerns about Section 702 of the Foreign Intelligence Surveillance Act, which allows for the warrantless collection of foreigners’ communications, as well as Executive Order 12333, another foreign intelligence gathering authority.

The fine shows that U.S. surveillance programs “have real-world consequences for businesses. Data transfers from the EU are critical to thousands of businesses in the U.S., including small and medium-sized ones,” said Ashley Gorski, a staff attorney at the American Civil Liberties Union. “The DPC’s decision puts the legality of all those transfers in doubt.”

“This decision is about the NSA and U.S. law, not about Facebook’s practices,” Georgetown Law professor Anupam Chander tweeted on Monday.

The order by Ireland’s Data Protection Commission specifically points to concerns about how U.S. surveillance programs ensnare the data of European citizens. Regulators note that those concerns spread beyond Meta.

The regulators concluded that while Monday’s decision only applied to Meta Ireland it “exposes a situation whereby any internet platform falling within the definition of an electronic communications service provider subject to the FISA 702 PRISM programme may equally fall foul of the requirements of Chapter V GDPR and the EU Charter of Fundamental Rights regarding their transfers of personal data to the USA.”

A new privacy framework would only provide a “bandaid” for those risks, said Eduardo Ustaran, global co-head of the Hogan Lovells Privacy and Cybersecurity practice, during an online webinar Monday. He said that so long as companies are subject to U.S. surveillance programs, there’s little they can do to remedy the concerns expressed by regulators.

“This is a test in a way that a company tried to do everything that was possible to do in terms of legal measures, organizational measures, technical measures — and despite all that still didn’t eliminate the risk that led to the enforcement action,” he said.

Last fall, the Biden administration created by executive order a new redress system for EU citizens who believe that U.S. intelligence collected their personal data in ways that violate American laws. However, DPC regulators found that since the review process is not yet in effect it didn’t remedy concerns in the complaint.

Moreover, there are questions about whether the surveillance reforms undertaken by the U.S. will fully satisfy the concerns raised by the EU courts and privacy advocates, who have already expressed doubts about the changes.

One of the programs that the Court of Justice of the European Union has raised significant concerns about, Section 702, is currently up for reauthorization. While reforms to the program are unlikely to occur before the European Commission makes its decision about the adequacy of a new data transfer agreement, the threat to U.S. businesses raised by Monday’s agreement could add to the urgency for reform.

“It was already clear that Congress needs to enact meaningful surveillance reform this year,” said Gorski. “The DPC’s decision makes that even clearer.”

Fennessy from IAPP said that the decision will force companies operating in the European Union to reevaluate their operations. “I think there is a big open question about the near-term effects this decision will have on their risk calculus,” she said. “There is now an immediate $1.2 billion price tag to the data transfers that happened and are happening now.”

The post What the record-breaking $1.3 billion Meta fine means for the US-EU clash over spying programs appeared first on CyberScoop.

FTC says popular fertility app gave advertisers pregnancy data without permission
https://cyberscoop.com/ftc-fertility-app-pregnancy-data/ | Wed, 17 May 2023

An FTC order bars app maker Easy Healthcare from sharing additional personal health data with third parties for advertising.

The post FTC says popular fertility app gave advertisers pregnancy data without permission appeared first on CyberScoop.

The makers of the popular fertility tracking app Premom repeatedly deceived users by sharing sensitive information, including health data, with third parties without users’ permission, a new Federal Trade Commission complaint alleges.

The agency’s investigation found that Easy Healthcare, which developed the app, violated its direct promises to users by improperly disclosing sensitive data indicating sexual and reproductive health information, including pregnancy status, to the marketing firm AppsFlyer and Google. As far back as 2018, the third parties received data on “Custom App Events” with labels that conveyed sensitive health information, according to a Justice Department complaint. For instance, the third party could see the event “Log period-save” when a user logged information about their period. The FTC alleges that the disclosures repeatedly violated the company’s promises to users that it would not share any identifiable or health data.

Between 2018 and 2020, the developers also shared sensitive data, such as precise geolocation tied to a non-resettable mobile device identifier, with two Chinese advertising firms without user permission, according to the complaint. The findings were the focus of a joint investigation by the attorneys general of Washington, D.C., Oregon and Connecticut, which coordinated with the FTC.

The complaint comes as both state attorneys general and the FTC have ramped up warnings against firms sharing sensitive reproductive health information in the wake of the Dobbs decision last spring reversing the constitutional right to abortion. State attorneys general have issued warnings to consumers against sharing sensitive reproductive health information that could be used against them in criminal investigations.

The complaint’s allegations about sharing sexual and reproductive health data go beyond concerns first raised about the company in 2020, when The Washington Post reported that its Android app collected user device data and shared it with three Chinese advertising companies without user permission. Premom said it stopped the data sharing, first detected by researchers at the International Digital Accountability Council, after The Washington Post contacted the company and Google Play, which temporarily removed the app for violating its policies. Members of Congress at the time called on the FTC to investigate the privacy concerns.

Both researchers and the FTC investigation concluded that Premom failed to adequately encrypt data it shared with third parties, including the Chinese advertisers, leaving it susceptible to interception.

According to the Google Play store, the Premom app has been downloaded more than 1 million times. The app’s privacy policy notes that it may disclose personal data “at the request of law enforcement or government agencies, in response to subpoenas, court orders, or other legal processes, or as otherwise required by any law, rule, or regulation to which we are subject.”

Under the proposed settlement filed by the Justice Department on Wednesday, Easy Healthcare agreed to pay a $100,000 civil penalty for violating the FTC’s Health Breach Notification Rule and to refrain from sharing personal health data with third parties for advertising. The company also agreed to implement new security and privacy programs and to provide regular privacy and security audits to the agencies.

The FTC investigation was launched in coordination with the attorneys general of Washington, D.C., Connecticut and Oregon. Easy Healthcare will also pay a total of $100,000 to the states.

“District residents who used the Premom app were entitled to have their locations and devices kept confidential, but Easy Healthcare shared that private information with third parties without notice or consent, putting users at risk,” said D.C. Attorney General Brian Schwalb. “Now more than ever, with reproductive rights under attack across the country, it is essential that the privacy of healthcare decisions is vigorously protected.”

Premom said in a statement that its settlements are “not an admission of any wrongdoing.”

“Protecting users’ data is a high priority, which is why we have always been transparent with and cooperated fully throughout the FTC’s review of our privacy program,” Premom said in a statement provided to CyberScoop.

This is the second time the agency has brought an enforcement action against a company for violating the Health Breach Notification Rule. Earlier this year, it reached a settlement with telehealth and prescription drug discount company GoodRx for failing to disclose to users that it shared personally identifiable health information with Facebook, Google and other third parties. The agency is expected to issue a notice of proposed rulemaking to amend the Health Breach Notification Rule at a meeting Thursday.

Updated May 18, 2023: To include a statement from Premom.
