Commentary Archives | CyberScoop
https://cyberscoop.com/news/commentary/

How to AI-proof the cybersecurity workforce
https://cyberscoop.com/ai-proof-cybersecurity-workforce/ | Fri, 23 Jun 2023
Generative AI can enhance digital security, but it can’t — and shouldn’t — replace the humans who are essential to fighting malicious hackers.

Automation is hardly new. Ever since the Industrial Revolution, jobs have been transformed, created and eliminated because of it. Now, automation in the form of artificial intelligence is coming for the tech sector — and specifically cybersecurity.

The excitement over AI in cybersecurity was on full display at the annual gathering of infosec professionals in San Francisco known as the RSA Conference. At this year’s event, multiple keynotes focused on the potential for AI to efficiently hunt for digital risks and automate threat response protocols. AI also promises to alleviate the stresses associated with many cybersecurity roles, such as that of first responders. But just as there’s potential, there are downsides. As AI tools inevitably begin to scale and tackle more complex cybersecurity problems, the impact on the workforce is troublesome — and dangerous.

We cannot let the potential of AI overshadow the value of human cybersecurity professionals. While AI excels at pattern recognition tasks such as detecting malware attacks, machines cannot take into account the context for why an attack may be happening. AI can be amazing at automating some aspects of reasoning, but algorithms cannot replace people when it comes to the creativity required to find unique solutions. Chatbots can’t replicate all the human competencies that are crucial within cybersecurity. So, without a measured — and cautious — approach to AI, our sector risks moving toward insecurity.

While it’s reassuring to see a growing conversation about the potential dangers of AI and efforts to put in place some common sense guardrails to regulate its deployment, such as President Biden’s meeting this week with Big Tech critics in San Francisco, there’s still not enough focus on the potentially devastating impact that AI tools could have on the American workforce.

Goldman Sachs estimates that in the U.S. and Europe, about one-fourth of current work can be substituted by generative AI. It’s unlikely that entire job functions will be eliminated, but fewer people will be needed to maintain a baseline level of work. Moreover, research posits that high-skilled jobs may be affected more, because AI’s predictive capabilities mimic the analytical and optimization skills core to many of those jobs. Within cybersecurity, that includes individuals across a number of functions, such as SOC analysts who aggregate suspicious-activity data or red teamers who code and test for vulnerabilities.

What needs more attention beyond the job numbers are the economic impacts on the cybersecurity workforce. Empirical evidence examining wage changes and automation between 1980 and 2016 suggests that about 50% of wage changes are due to task displacement, which exacerbates wage inequality. The study is not sector-specific, but if leading cybersecurity firms are touting AI’s potential to efficiently conduct tasks such as automated threat detection, then cybersecurity will not be insulated from these changes.

We also need to consider the impacts on diversity. There have been commendable efforts the past several years to lower barriers to entry into cybersecurity, including scholarship programs that cut the cost of entering the field, and professional associations such as Black Girls Hack or Women in Cybersecurity that help foster belonging and retention in the sector. The National Cybersecurity Strategy further underscores how central workforce diversity is to long-term cybersecurity. But we are at a critical crossroads as layoffs across sectors, especially in tech, are cutting diversity, equity and inclusion efforts. If history suggests that job displacement by automation is on the horizon, AI could further slow our hard-earned progress.

It’s imperative that investors and advocates of the cyber workforce consider the potential ramifications of AI, including on its least represented members. Luckily, the U.S. has a growing ecosystem of cyber workforce development programs designed to usher individuals into the cybersecurity sector, and we can reframe their priorities rather than reinventing the wheel.

But more needs to be done to make cybersecurity workers AI-proof. For starters, many of the new cyber educational efforts can focus on soft skills that cannot be automated. Generative AI can automate many tasks, but skills such as creativity, emotional intelligence and intuition are hard to replace. Whether designing training curricula or hiring practices, emphasize these skills to ensure your cybersecurity staff can not only solve tough problems but also complement both the strengths and the limitations of AI.

Several large tech companies have professional development tracks that upskill their staff, and other associations provide additional training and certifications at a premium, but there are opportunities for nonprofits to expand their programming to include AI. Nonprofit organizations that have a stellar track record for technical training have an opportunity to step in and build equitable pathways for cybersecurity workers to continue their technical careers, and there is space for philanthropies and corporations to invest in developing these programs.

We also need to rethink what it means to have a “cybersecurity career.” Cybersecurity extends beyond patching vulnerabilities and detecting threats. Policy analysts now contextualize strings of cyberattacks within a wider geopolitical conflict. Developers contribute their lived experiences to designing tech solutions to society’s pressing challenges. As we extend our definition of a cybersecurity expert, we need to ensure these professionals are communicating with one another. Programs such as the #ShareTheMicInCyber Fellowship or TechCongress focus on bridging the gap between technical experts in cybersecurity and technology to inform better policymaking.

There is no doubt that generative AI will have a transformative impact. We have the opportunity to prepare the cyber workforce for a future just as promising, and we need to start now.

Bridget Chan is the program manager at New America for the #ShareTheMicInCyber Fellowship, a program advancing DEI in cybersecurity.

How university cybersecurity clinics can help cities fight ransomware
https://cyberscoop.com/cybersecurity-clinics-ransomware/ | Fri, 02 Jun 2023
Cybersecurity faculty and students can be a valuable resource to help local governments and businesses build cyber capacity.

When the Royal ransomware group struck computer systems in Dallas earlier this month, the attack disrupted public safety systems, 311 services, municipal courts, and other city departments and services. The attack forced courts to close. Police struggled to access internal share drives. The city library system’s database and catalogue went down. And city officials estimate it will take months to recover.

Ransomware groups are increasingly targeting U.S. municipalities, and the difficulties Dallas officials face in getting back up and running illustrate just how vulnerable U.S. cities are to ransomware attacks. The fact that a relatively well-resourced city like Dallas is struggling to recover from a ransomware attack hints at the far greater difficulties smaller municipalities face when their IT systems come under attack.

In the aftermath of ransomware attacks, cities frequently turn to the federal government for assistance, but such aid is mostly reactive. It would be better if cities were positioned to prevent these breaches in the first place. But all too often municipalities lack the resources and human capital to defend themselves.

Today, university-led cybersecurity clinic programs are trying to fill this gap by building local cyber capacity. At institutions like the University of Texas at Austin, MIT, the University of Georgia and UC Berkeley, cyber clinics are working to protect local institutions from cyber threats by training and deploying students to government and community groups to provide free cyber risk assessments and give simple, step-by-step recommendations. In some clinics, students are designing and implementing custom cybersecurity solutions to bolster client defenses and guide future incident response.

Clinics such as these are well-positioned to help local institutions better protect themselves online. As Sarah Powazek and Marc Rogers recently wrote for CyberScoop, universities are typically deeply embedded in their local communities and have the trusting relationships required to assist critical city departments with onsite cyber resources. The university clinic model has existed in medical and law schools for decades to train the next generation of leaders in these fields with hands-on, real-world experience. Extending the clinic model to cybersecurity gives students experience while offering municipalities access to valuable expertise. Town-gown clinic partnerships like these advance university goals, provide necessary public services back to their host cities and help to fill a nationwide cybersecurity workforce gap.

The Applied Cybersecurity Community Clinic at The University of Texas at Austin launched this year as one such partnership. The fruit of discussions with the Cybersecurity and Infrastructure Security Agency’s Cybersecurity Advisory Committee, the city of Austin and UT Austin’s Robert Strauss Center for International Security and Law, the clinic provides pro bono cybersecurity services to community organizations and small businesses that cannot afford such services on their own, while giving students hands-on cybersecurity experience.

Given Austin’s burgeoning tech ecosystem and staggering urban growth, the city provides a perfect testbed for delivering cybersecurity services via a university clinic. Austin is home to a large number of disruptive tech start-ups, many of which are more focused on growth than cybersecurity and in need of the clinic’s services. And as the city grows, Austin’s nonprofits and city services are in need of robust digital services as they support underserved Austinites who have been adversely impacted by cost-of-living increases. Between these sectors, the UT Austin cybersecurity clinic’s inaugural student cohort will deploy to serve a mix of small business, nonprofit and public sector clients in the 2023-2024 school year.

Due to the transitory nature of college students and the legal risks involved in incident mitigation, university clinics are not well positioned to provide boots-on-the-ground incident response services. But by serving as force multipliers, university cybersecurity clinics help to accomplish cyber defense goals across local, state and federal governments. Clinics alleviate requests for state and federal resources by emphasizing a hyper-local, preventative approach to cybersecurity. And by channeling students into the cybersecurity workforce, clinics may ease the shortage of cybersecurity expertise, providing a talent pipeline and internship-like experiences that bridge existing gaps.

The cybersecurity clinic network is growing, and clinics represent a sustainable, scalable and long-term presence in the areas they serve. As we seek to grow the cyber workforce, clinics serve as a valuable resource to leverage the expertise of university students and faculty to address the immediate needs of communities with their unique forms of cyber mutual aid. As the workforce catches up and more skilled professionals enter the field, clinics can evolve and adapt their services, offering advanced cybersecurity solutions, specialized consulting expertise and research collaboration. In the future, clinics working together could standardize research and reporting on cyber incidents that affect their clients to better inform the defense of U.S. computer systems. The sustained presence of cyber clinics will be essential in supporting the ever-changing cybersecurity landscape and ensuring small, local organizations have resources to combat emerging threats.

Incorporating university-led cybersecurity clinic programs into local cyber planning and prevention offers a proactive and free third-party solution to ransomware attacks on under-resourced U.S. cities. Municipalities in areas with active clinics should seek clinic assistance to foster local cyber resilience and reduce reliance on reactive state and federal intervention. Municipalities interested in more information about cyber clinics should consult the Consortium of Cybersecurity Clinics for resources and contact information.

Francesca Lockhart leads the Applied Cybersecurity Community Clinic at The University of Texas at Austin.

Rethinking democracy for the age of AI
https://cyberscoop.com/rethinking-democracy-ai/ | Wed, 10 May 2023
We need to recreate our system of governance for an era in which transformative technologies pose catastrophic risks as well as great promise.

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. 

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. And I think by writing, so what you’re going to hear is the current draft of my writing — and my thinking. Everything is subject to change without notice.

OK, so let’s go.

We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology — social media — breaks them both.

So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem-solving and feedback mechanisms.

That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, Senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.

We have accepted these costs of capitalism — and democracy — because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems — complex, wicked, global problems — that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated – they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better – and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver’s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding everything around researching, developing, producing and dispensing. We have all these regulations because this stuff can kill you.

The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide — or the people with the money.

Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only in the margins — and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.
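To make that information loss concrete, here is a minimal sketch in Python. The ballots are invented for illustration (nothing here comes from a real election): plurality keeps only each voter’s first choice and elects A, while the full rankings show that C would beat either rival head-to-head.

```python
# A minimal, hypothetical illustration of how much information a
# winner-take-all ballot discards. Each voter holds a full ranking,
# but plurality counts only the first choice.
from collections import Counter
from itertools import combinations

# 100 voters' full preference rankings over three options (invented data)
ballots = (
    [["A", "C", "B"]] * 40    # A-first voters
    + [["B", "C", "A"]] * 35  # B-first voters
    + [["C", "B", "A"]] * 25  # C-first voters
)

# Plurality: throw away everything but the first choice
plurality = Counter(b[0] for b in ballots)
print("Plurality winner:", plurality.most_common(1)[0][0])  # -> A

def beats(x, y):
    """Count voters who rank x above y (uses the whole ranking)."""
    return sum(b.index(x) < b.index(y) for b in ballots)

for x, y in combinations("ABC", 2):
    print(f"{x} vs {y}: {beats(x, y)} to {beats(y, x)}")
# C beats A 60-40 and beats B 65-35, yet plurality elects A:
# the single-choice ballot discarded exactly the information that
# would have produced a different collective decision.
```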

I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

To be fair, the democratic republic was the best form of government that mid 18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid 18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

Why can’t we have a game where everybody wins?

This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws – and especially security technologies – are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.

In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war” — or climate migration—are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. It’s a metric more suited to the environment we’re in right now.

Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking — and also red teaming. In this context, both journalism and civil society are important checks on government.

I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.
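To make the mechanics of these two algorithms concrete, here is a minimal sketch in Python. The population size, voter names and delegations are all invented for illustration; real systems add much more (secret ballots, rules for revoking proxies, per-issue delegation), so treat this as a toy model, not a design.

```python
# A minimal, hypothetical sketch of sortition and liquid democracy.
import random

# Sortition: a citizen assembly is a jury-style random sample
population = [f"citizen_{i}" for i in range(10_000)]
assembly = random.sample(population, k=25)

# Liquid democracy: each voter names a delegate, or None to vote directly
delegations = {
    "alice": None,      # votes for herself
    "bob": "alice",     # bob trusts alice on this issue
    "carol": "alice",
    "dave": "carol",    # chains resolve: dave -> carol -> alice
    "erin": None,
}

def final_delegate(voter):
    """Follow the delegation chain to whoever actually casts the vote."""
    seen = set()
    while delegations[voter] is not None:
        if voter in seen:  # guard against delegation cycles
            raise ValueError(f"delegation cycle involving {voter}")
        seen.add(voter)
        voter = delegations[voter]
    return voter

# Vote strength: one unit per voter, accumulated at the final delegate
weights = {}
for voter in delegations:
    rep = final_delegate(voter)
    weights[rep] = weights.get(rep, 0) + 1

print(weights)  # {'alice': 4, 'erin': 1}
```

In this toy run, alice casts a vote with the strength of four voters because three proxies flow to her, directly or through a chain, which is exactly the proportional vote strength described above.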

Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation. It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war — right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, and helping people reach consensus. But it is an independent good that we humans remain engaged in — and in charge of — the process of governance.

And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed and scalable.

I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about — incentives, hacking, power, complexity — also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

This feels like a challenge worthy of our security expertise.

Bruce Schneier is the author of “A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend them Back,” a Lecturer in public policy at the Harvard Kennedy School and chief of security architecture at Inrupt, Inc.

The key to making the US cyber strategy work: boots on the ground
https://cyberscoop.com/us-cyber-strategy-local-cybersecurity-volunteers/ | Thu, 04 May 2023
Prioritizing work with academic institutions, localities and skilled volunteers is the best way of advancing America's cybersecurity needs.

We have seen more federal resources, action and coordinated strategies around improving nationwide cybersecurity in the past four years than in the last 40 combined. The FBI and the Department of Justice are prosecuting cybercriminals, disrupting criminal networks and seizing stolen funds. The Cybersecurity and Infrastructure Security Agency, handed the mighty mission to defend and secure cyberspace just four years ago, is set to receive more than $3 billion in funding for 2024. 

Most recently, the Office of the National Cyber Director, a two-year-old office leading the Biden Administration’s cyber agenda, released the much-anticipated National Cybersecurity Strategy, an astonishing document with a vision of securing “the full benefits of a safe and secure digital ecosystem for all Americans.” This strategy, built on the recent cyber executive orders out of the Biden administration, is an ambitious move by the White House to stay ahead of the curve on cyberdefense, seeking to both prevent cybercrime and actively disrupt criminal operations. 

The strategy could not have come at a better time. Despite years of growing political willpower and resources at the federal level, critical local organizations are still regularly getting hit with common cyberattacks such as ransomware. Municipalities, food banks, hospitals, school districts and other local organizations are at risk of becoming incapacitated within minutes of a ransomware attack, affecting the critical services that entire populations rely on. Tribal territories are often included under the umbrella of the “SLTT” (state, local, tribal and territorial) acronym, but they rarely receive resources and attention comparable to their municipal counterparts and are also at risk of debilitating cyberattacks.

At the community level, the impacts of ransomware are often felt immediately and can last from weeks to months after the attack. In one of the worst incidents in recent memory, the University of Vermont Medical Center was forced to delay cancer treatments after losing access to the electronic health records containing patients’ chemotherapy protocols. Attacks such as ransomware are challenging to recover from, and delaying critical services creates long-term consequences for an entire community.

And while U.S. agencies have been allocating more resources toward countering cybercrime in recent years, these funds and advice often do not reach smaller, local organizations, which are still left to their own defenses.

Ultimately, federal intervention will never be enough, on its own, to address all cybercrime in every locality. It will take a whole-of-nation effort to protect local communities from cyberattacks, and local organizations can provide critical “boots on the ground” services and support directly to the organizations at risk of cybercrime. And in the current challenging economic climate, it will take out-of-the-box thinking by nontraditional groups to provide these resources to those in need. 

There are three major groups that we’ve seen move the needle for local cybersecurity efforts in innovative ways: academia, local government, and volunteer cyber experts. Though each program serves a small population with specific resources, together they paint a picture of collaborative cyberdefense. 

Academic institutions

Academics and educational organizations are uniquely positioned to serve as hubs of cyber defense; they train the next generation of cybersecurity professionals and are preoccupied with cybersecurity as frequent targets of cybercrime. Higher-ed institutions have created dual training and service programs called Cybersecurity Clinics, in which students learn core cyber skills and provide free cyber assessments to local organizations in need.

Many higher-ed institutions like the University of Georgia, MIT, The University of Texas at San Antonio, UC Berkeley and the University of Alabama are running Cybersecurity Clinic programs, training students to perform cybersecurity maturity assessments pro bono and give recommendations to local cities and nonprofits, much as law school and medical school clinics have for decades. In just a few years, fewer than a dozen programs have trained more than 730 students and bolstered the defenses of more than 120 organizations.

Other programs at schools like Oregon State and Bridgewater State University are training students to perform security operations center incident detection services for vulnerable organizations. 

Higher-ed institutions have deep community partnerships, commitments to public service, and the talent and energy of hundreds of thousands of young people. Academia is a formidable and growing ally in the fight to protect local organizations from cyberattacks.

Regional governments

State and local governments are another group making sizable contributions to community cyberdefense by piloting innovative ways to provide state cyber aid. For example, the City and County of San Francisco developed the Bay Area UASI (Urban Areas Security Initiative) to promote resource sharing and cyber mutual aid programs across the region. The State of Massachusetts formed the MassCyberCenter, an innovative department that assists municipalities with cybersecurity. States such as Ohio, Wisconsin, Michigan, North Carolina and Wyoming have created state-led cyber response corps to help local organizations with cyber incident response and recovery.

These regional governments are often discussed only in the context of cyber victimhood, as they have suffered some of the most high-profile attacks in recent years. But regional governments are vital stewards for community cyberdefense; cities, counties, and states can provide cyber mutual aid to surrounding areas, promote federal cyber resources, send on-site assistance quickly, and ultimately act as trusted advisors for struggling organizations. 

Cyber experts as volunteers

The last actors stepping up to protect local organizations from cyberattacks are local volunteers. Nonprofit organizations like the CTI League, I am the Cavalry and other formal and informal groups of cyber professionals harness the insight of industry experts to act as a last line of defense. These groups share threat information and even attempt to notify potential ransomware victims before their data becomes encrypted. 

Some individuals also join state-led programs like the Michigan Cyber Corps, mentioned above, which certifies industry volunteers to step in as incident responders when local organizations suffer a cyberattack. Others offer a few hours of their week pro-bono through programs such as the CyberPeace Builders, which matches experts with bite-sized tasks like helping an organization set up its firewall. 

Cyber volunteers are quickly becoming an indispensable backbone of cyberdefense for organizations that cannot afford long-term professional assistance.

Call to action: making local cyber collaboration a priority 

Local cyberdefense programs share a common goal, despite having roots in different sectors and across the continent: when cyberattacks hit, no one gets left behind.

Strengthening local cyber programs will act as a force multiplier for all the progress made at the federal level; local leaders can disseminate advice between trusted local partners, arrive at the scene of an attack faster, and stick around for longer to ensure a ransomware victim has the support they need to recover. Indeed, these programs can be beacons through which resources that often remain concentrated at the federal level can reach those who need them most.

The White House ONCD acknowledges this uneven playing field in the National Cybersecurity Strategy by stating: “Our collective cyber resilience cannot rely on the constant vigilance of our smallest organizations and individual citizens.” The strategy notably requires buy-in and resources from states. For example, its goal of harmonizing regulation would necessitate working with states to align numerous state breach disclosure and other cybersecurity laws.

Other proposed projects in the strategy have local impacts, like increasing the speed of victim notification and intelligence sharing, supporting a digital identity ecosystem, and strengthening the cyber workforce. These projects’ success and longevity will depend on engagement with government, academia, and industry members.  

Local organizations like academic institutions, regional governments and groups of volunteers are among those best-positioned to serve their communities’ cyberdefense needs; they have the trust and drive to alleviate burdens for critical local organizations. Only by prioritizing collaboration with local institutions and harmonizing strategies among government agencies can we move the needle on cyberdefense for all.

Sarah Powazek is the program director of public interest cybersecurity at the UC Berkeley Center for Long-Term Cybersecurity.

Marc Rogers is a longtime cybersecurity professional, senior technical adviser at the Institute for Technology and Security, a member of the Ransomware Task Force and co-founder of the CTI League.

How cyber support to Ukraine can build its democratic future
https://cyberscoop.com/ukraine-cyber-aid-russia-war/ | Tue, 18 Apr 2023
Kyiv sees Ukraine's reconstruction as an opportunity to turn the country into a European tech hub — to do that, it needs help.

In the year since Russia tried and failed to topple Kyiv, Ukraine has repelled the ensuing onslaught of cyber aggression and propaganda more soundly than anyone predicted. The country rapidly migrated many of its critical functions from domestic data centers to the cloud and upgraded hardware and software in key sectors — leading Mykhailo Fedorov, a key adviser to President Zelensky and Ukraine’s minister of digital transformation, to describe the country’s IT companies as “heroes as much as our armed forces.”

Ukrainian officials are now betting that their country’s remarkable digital resilience in wartime — achieved with significant help from international partners — can translate into a cornerstone of its post-reconstruction identity and economy. Prior to the war, Ukraine was growing into a favorite software development destination for multinationals and start-ups alike. Russia’s invasion — and the flood of international tech support to counter it — have only supercharged Kyiv’s aspirations of becoming a major player in Europe’s “Silicon Valley.”

Achieving that agenda will largely depend on Kyiv’s ability to prioritize between its immediate wartime needs and its long-term goals, in addition to continued support from Western capitals and tech companies. The Zelensky administration must speak clearly and with one voice to articulate its needs in the short and long term as a technology partner, customer and supplier, possibly on a region-by-region basis. This will help Western capitals and technology firms to consider more strategic investments and more permanent stakes — not only in the outcome of the war, but in the future of the European tech sector.

We recently travelled to Kyiv to understand the sources of Ukraine’s digital resilience and prospects for reconstruction. We were struck by the social cohesion and sense of shared mission evident from our conversations with Ukrainian contacts across government, industry and civil society. It was a stark contrast to the United States, where cyberattacks and disinformation often prompt more panic than empowerment. In wartime Ukraine, the common priority of beating back Russia has fostered what one senior official called “digital solidarity” — a phenomenal coalition of Ukrainians and international partners to defend Ukraine’s digital infrastructure.

Kyiv began building the foundations of this digital solidarity long before the war intensified. The idea wasn’t just to modernize the economy, but to improve governance. In 2019, the Zelensky administration established a new ministry dedicated to digitizing government operations, minimizing notorious amounts of red tape and making public services accessible to citizens anywhere via mobile and web-based applications. An added bonus is that “computers don’t take bribes,” as a senior Ministry of Digital Transformation official told us. In a country seeking to shed the post-Soviet vestiges of corruption and bureaucratic ineptitude, forward-looking Ukrainian officials see each efficient interaction between the state and its citizens as a small victory. If trust in institutions is a sign of a healthy democracy, digitization presents boundless opportunities to cultivate it.

During the invasion, these e-services became a lifeline. Ukraine’s mobile super-app, “Diia” (an acronym for “the State and me”) allowed millions of displaced people to cross borders with a war-time digital ID, apply for social benefits and receive government updates. Other apps empowered citizens to resist rather than panic. eVorog (“eEnemy”) collected citizen reports of Russian troop movements or drones overhead. Ukraine’s Cyber Police fields reports via web app about possible cyber incidents. We felt the unifying force of these apps every time national air raid warnings blared out from every phone in earshot.

These apps give residents of Ukraine a sense that their concerns are heard, registered and responded to in a timely way, setting both realistic expectations and accountability for establishing and restoring services. Kyiv looks to Estonia — a pioneer in e-governance — as a guide for what “right” looks like: accessible and open digital public services, interactive communication and policy development, easier registration and identification management, simplified voting and more transparent contracting. Digital transformation can also make Ukraine a more attractive place to do business and better align it with the European Union, which it aspires to join. Ukrainian economic analysts assess that every 1% increase in the country’s digitization could add 0.42% to its GDP; against a prewar GDP of roughly $200 billion, that works out to roughly $840 million a year. Major digital upgrades to trade, energy and transportation are envisioned to boost economic vitality and alignment with the bloc, and to strengthen domestic cybersecurity.

However, realizing the promise of digital reconstruction in Ukraine is not inevitable. The coalition behind Ukraine’s resilience is showing signs of burnout and fracture, and the longer-term politics of Ukraine’s digital reconstruction will prove more complex than its digital defense.

During the war, support from Western tech companies provided crucial help for Ukraine’s government to protect its networks and continue providing services. But it’s unclear whether this support can be sustained through the murky timeline of reconstruction, as the generosity of Western tech companies is not unlimited. Publicly traded tech firms are harder pressed to justify their pro bono support to Ukraine, one U.S. executive explained to us, especially in the face of mass layoffs at home. Displaced Ukrainian tech professionals are starting to put down roots in host countries abroad, and as many as 40 percent of their counterparts at home are considering emigrating, according to the IT Ukraine Association. As Kyiv’s focus turns — perhaps overly optimistically — from crisis to reconstruction, no one should assume that Ukraine’s digital solidarity can survive solely on the momentum of 2022.

Meanwhile, Kyiv still needs the training, software licenses, service subscriptions and network defenses that Western firms continue to donate. But the Ukrainian officials we spoke with chafed at the idea of long-term dependency on foreign firms and were far more eager to talk in terms of partnerships and investment than aid. Western tech firms are likely to look to policymakers for clues and cues on those prospects — including how investments might be both facilitated and insured.

The State Department’s recent efforts to build a faster, more flexible model to get funding and know-how to places in crisis are a good first step. Western capitals and willing private sector partners should begin thinking about how to solidify and build upon the resilience that the past year of crisis response has yielded. Kyiv and Western partners will need to be disciplined in how and where they allocate aid and investment. The success of early ventures in Ukraine’s digital future will set the tone for what follows: Well-managed and transparent projects can sustain momentum and attract more interest, whereas large, lax investments that don’t pay off could discourage future investors or, worse, enable old patterns of corruption.

The good news is that a public-private “digital Marshall Plan” for Ukraine — a country that European Commission President Ursula von der Leyen has said is “already a rising tech hub” — won’t start from zero. “What we need now,” one senior Ukrainian official told us, “is sustainable solidarity.” But achieving and sustaining this future will require additional funding and focus. It will need continued resourcing from commercial interests. It will need U.S. and European governments to backstop and incentivize those commitments, to play a coordinating role and to remove bureaucratic hurdles. It will also take patience, prioritization and likely some painful tradeoffs from Kyiv.

In short, the staggering scale of cybersecurity and tech support that allies and partners have lent to Ukraine must now be framed less in terms of the immediate threats they’re helping defend against and more in terms of the democratic future they’re helping to build.

Arthur Nelson is the deputy director of the Technology and International Affairs Program at the Carnegie Endowment for International Peace, where Gavin Wilde is a senior fellow.

What we know about Russian hackers — and how to stop them — after a year of cyberwar in Ukraine https://cyberscoop.com/russian-hackers-cyberwar-ukraine/ Fri, 07 Apr 2023 14:03:13 +0000 https://cyberscoop.com/?p=72921 Moscow's cyber operatives will target any nation supporting Ukraine, but a global coalition can win on the digital battlefield.

Since the beginning of 2022, when Russian hackers began to wage an intense cyberwar against our country, we have seen dozens of forecasts about how the events would unfold on the digital frontlines. Many predicted dire consequences for Ukraine as Russian hackers are considered among the most skilled in the world.

But most of those predictions vastly underestimated the resilience of Ukraine, as well as the hackers, technologists and cyber strategists working together to counter Russian cyber operatives and their ongoing attacks. More than a year has passed, and Ukraine has withstood Moscow’s cyber aggression. And we’ve managed to study our enemy’s techniques and tactics in cyberspace. This knowledge has become the foundation for an analytical report, Russia’s Cyber Tactics: Lessons Learned 2022, which draws on the past year of warfare and outlines what we expect to see from Moscow in cyberspace in the near future.

Russian hackers’ tactics are changing. For instance, prior to and at the beginning of Russia’s full-scale invasion of Ukraine, many cyberattacks were aimed at disrupting critical services. Those attacks were also an instrument of informational and psychological warfare, designed to demoralize Ukrainian society.

Following the retreat of Russian troops from Kyiv, we started detecting an increasing number of attacks aimed at gathering information for espionage purposes. Russian military hackers are interested in any information they think can help Russia win this war. They prioritize quiet, long-term campaigns that allow them to stay inside systems and maintain access to data for as long as possible. This distinguishes them from so-called Russian “hacktivists,” whose primary goal is informational impact, so they promptly disclose details of their attacks.

The countries that support Ukraine have also become targets for Russian military hackers and “hacktivists.” Their primary goal is to have a psychological impact on the democratic countries backing Ukraine. Given that, we expect, with a great deal of probability, to see an increasing number of cyberespionage attacks, system infiltrations and data thefts in those countries.

Earlier this year, hackers attempted to spread spyware through phishing websites imitating official Ukrainian and Polish websites. That spyware was designed to take screenshots and enable data exfiltration, and it featured a task scheduler to ensure persistence. Attacks by experienced intelligence-linked groups such as InvisiMole (associated by some with Russia’s foreign intelligence service) can go unnoticed for an extended period of time. Those are potentially the most dangerous types of operations. This is why government officials globally are at risk of being targeted by Russian cybercriminals. Any diplomat with access to sensitive data should be aware that they are in the crosshairs of Russian hackers.

No one can be sure that Russian hackers aren’t targeting them, as Moscow is increasing its attacks on all sectors. We saw an increase in supply chain attacks through 2022, reaching a peak in the fourth quarter. This trend continued into 2023 as well. Companies that service the public sector and critical infrastructure operators, such as software developers and internet service providers, often fall victim to Russian hackers. Hence, protecting critical IT infrastructure is paramount.

Russian hackers are increasingly infiltrating systems by exploiting existing software vulnerabilities rather than through phishing attacks. By doing so, hackers are attempting to infiltrate as many systems as possible with plans to execute more invasive attacks in the future. These kinds of risks require all of us to remain vigilant, employ cyber hygiene practices and develop capabilities for patching vulnerable systems as soon as possible.
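One practical way to build that patching capability is to check a software inventory against a public vulnerability database automatically. Below is a minimal sketch in Python, assuming the requests library and the public OSV.dev query API; the package and version are illustrative only, and a real deployment would loop over a complete asset inventory.

```python
# Minimal sketch: ask the public OSV.dev database whether a specific
# package version has known published vulnerabilities.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    # OSV.dev returns any advisories matching this exact package version.
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# e.g. an old Jinja2 release with published advisories (illustrative)
print(known_vulns("jinja2", "2.11.2"))
```

The same query pattern extends to other ecosystems, such as npm, Maven or Go modules, by changing the ecosystem field.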

But we need to do more than that if we are going to truly counter the growing cybersecurity threat coming out of Russia. We need to form a robust coalition of like-minded nations that stand up to Russia’s digital aggression, deploy the most aggressive sanctions against Russian President Vladimir Putin’s regime and continue denying Russia access to the latest technology, software and services that enable it to carry out its global campaign of cyberattacks and cyberwarfare on Ukraine and the rest of the world.

Victor Zhora is deputy head of Ukraine’s State Service of Special Communication and Information Protection.

How a computer scientist talks to her daughter about TikTok  https://cyberscoop.com/tiktok-conversation-parents-national-security/ Fri, 31 Mar 2023 14:58:19 +0000 https://cyberscoop.com/?p=72743 The debate over TikTok's national security risk is lost on many young users, except if your mom is a technologist focused on global threats.

Nadya Bliss and her 12-year-old daughter Coco have been talking about technology for as long as the two can remember. Nadya is a computer scientist who is also the executive director of the Global Security Initiative at Arizona State University. Technology and national security issues take up much of her time. While she loves tech and embraces many of its benefits, she is acutely aware of its darker sides, too. As a parent of a tween, the topic of social media — and especially TikTok — is commonplace in their household and among their friends. While many lawmakers and national security experts in Washington and elsewhere around the country are calling for an outright TikTok ban, those concerns are lost on the many millions of tweens and teens who spend hours on the app every day. Nadya and Coco, who is a sixth grader and among the minority of her peers without TikTok, recently talked about how the app — and the omnipresence of technology in just about every kid’s life today — is changing parenting and childhood. The following conversation between Nadya and Coco has been edited for clarity and length. 

Nadya: So, do most of the kids in your class have phones?

Coco: Yes, definitely. A lot of them since last year, some since fourth grade or even longer.

Nadya: Why do you think so many people have phones?

Coco: Well, eventually, kind of everybody has one, so everyone else wants one. It’s kind of like a peer pressure thing. And then there could be this situation where the parent just wants to connect with their kid.

Nadya: Like in my case, where the kid doesn’t really want to talk to me, but I missed you when you were at camp? What do you think about social media? And what do you know about social media from the conversations we’ve been having for apparently 12 million years?

Coco: Well, I know that it’s like a place to share things and post things. But then there are the likes and dislikes and comments, which can be really bad because sometimes you can share inappropriate things and just like bad things.

Nadya: Would you say everyone in your class has social media? Or what percentage of kids in your class?

Coco: So, there’s about 54 kids in my class. Most of them have social media. People who have social media kind of have a lot of social media, but the three main ones are TikTok, Instagram and BeReal.

Nadya: Do you have any of those apps? 

Coco: Nope. 

Nadya: So, why don’t you have them? 

Coco: Personally, I am not a fan. That’s my opinion. It’s just like we’re too young to be having this. I understand texting and stuff, but that’s different. Sometimes you don’t really know what you’re posting and then a weird old dude can find it and then you accidentally share your location.

Nadya: That is definitely a very creepy scenario, but that could possibly happen. You could by accident share your location and that’s dangerous. Are a lot of your friends on TikTok?

Coco: Yeah.

Nadya: You just made a really negative face. Is TikTok your least favorite?

Coco: Probably.

Nadya: Why?

Coco: Well, obviously, you’re the expert and you don’t like it. There’s the whole national security thing. 

Nadya: So, what do you think it means to be a national security threat? 

Coco: It’s probably just because it’s kind of collecting your data and the data isn’t protected. But there are other things. 

Nadya: Like what? Like how it affects people?

Coco: It affects the people who are on it. They start to change the way they talk. The slang they use. The way they move in general. If you’re on TikTok, you do TikTok dances. It’s just what you do.

Nadya: So, it just sort of sucks you in. 

Coco: I think it’s the endless scrolling ability and the short videos. The reason I like YouTube more is for the longer videos. It gives you like 5 seconds after one before recommending another one. The endless scrolling makes TikTok addicting. They have the algorithm. 

Nadya: You know what an algorithm is?

Coco: Yes because you told me. Basically, when you look at something, the app processes that data and then recommends things that are similar. So, that keeps you wanting to stay longer. 

Nadya: Do your friends know what an algorithm is? Do you think there’s enough education on how social media works? 

Coco: I don’t believe so. I think in tech class we covered it and then people freaked out that their iPads were listening. I was unfazed since I already knew about the algorithm. People may understand the extreme cases — like the creepy old man — but they may not understand the day-to-day aspects.

Nadya: It’s a bit creepy because it can push you into a particular direction. Like when you’re searching for something and then it nudges you in the direction of maybe something that could cause an eating disorder or something less sinister like encouraging you to buy Gucci. A whole thing about these platforms is that they essentially try to figure out a way to nudge you in certain directions and to keep you on the app. Potentially, it could encourage you to think a certain way or do something that maybe is not good or potentially adopt different viewpoints and ideologies that aren’t good. That’s a little scary. 

Coco: I think in China kids on TikTok see people getting good grades and things like that. We see people failing and hurting themselves and just doing stupid things. And there are the weird conspiracy theories. 

Nadya: Do kids talk about conspiracy theories?

Coco: A little bit. Most of it’s a joke. There was the whole Birds Aren’t Real thing.

Nadya: And you talked about that in school?

Coco: We talked about it in social studies. Basically, a guy said that pigeons aren’t real and then people started taking it seriously. And sometimes people like to post about how they found this bug and then found a microchip inside. Don’t worry, I know that’s not real. 

Nadya: Do you feel like you’re missing out? 

Coco: I don’t really feel left out because this is really my choice. And my closest friends aren’t really on social media a lot, and I just text them when I want to talk. 

Nadya: It’s true, we’ve never really put hard limits around technology, though you know I do have access to your phone. It’s always been kind of a constant conversation about how it works, some of the dangerous aspects, and why it’s important not to go overboard. And it seems like you’ve really taken those conversations to heart. Is there a social media thing that you want? You have YouTube, but you don’t really post. Is there something else you want?

Coco: One or two of my close friends were on TikTok but quit because I think the content was getting boring. I’ve thought about Instagram. The only reason I’d want it is because my friends are on it. But I can just text my friends if I want to connect. My friends do like Instagram chat. But that’s less secure than texting. 

Nadya: That’s true! Messages on Apple are encrypted end-to-end so nobody can read your messages if they are blue on an Apple phone to an Apple phone chat. Instagram has some protections but I’m not sure what they are. And then a third party owns that data. So, even if it is encrypted, like who knows what happens with it. When do you think you’ll want social media?
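The end-to-end property Nadya describes can be shown in a few lines. The sketch below is a toy example using the open-source PyNaCl library, not how iMessage is actually built; the point is that whoever relays the message only ever sees ciphertext, and only the recipient’s private key can open it.

```python
# Toy end-to-end encryption with PyNaCl (pip install pynacl).
from nacl.public import Box, PrivateKey

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you after school")

# Any server in the middle only ever stores and forwards ciphertext.
# Bob decrypts on his own device with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'see you after school'
```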

Coco: Maybe when I’m a little older. I might like to see Meowed on Twitter to look at cats, or Evan and Katelyn updates or Taylor Swift. Maybe sooner for Taylor Swift — I heard she just launched an Instagram broadcast channel for the Eras Tour. I still probably wouldn’t post a lot.

The pressing threat of Chinese-made drones flying above U.S. critical infrastructure  https://cyberscoop.com/chinese-drone-threat-dji-regulation-critical-infrastructure/ Thu, 23 Mar 2023 13:00:00 +0000 https://cyberscoop.com/?p=72461 Drones from China's DJI contain high-res cameras, advanced sensors and wireless access, opening the door for espionage and sabotage.

Just look up on a clear day and there’s a good chance you’ll see one. Small and inexpensive drones have become omnipresent. While hobbyists use them for photography or racing, professionals now fly them to capture TV and film footage, conduct surveys or perform building inspections. While drones aren’t inherently dangerous or undesirable, the global market and supply chains that enable their production and sale are another story.

Drones, or unmanned aerial systems, have been employed for decades in both military and civilian applications. Early UAS were large, expensive and custom-designed for specific purposes, and they required trained operators, limiting the market mostly to nation-states. As the price point dropped, users found innovative ways to leverage drone capabilities for improved operational efficiency, security and public safety.

Cost-effective quadcopters — or multirotor drones — have become popular with infrastructure and public safety organizations, especially as developments in miniaturization have enabled smaller, less expensive fleets of drones without sacrificing capability. China has moved to capitalize on the miniaturization movement and the demand for compact, economical, high-performance drones. China dominates the global market for commercial drones, with a majority produced by DJI, or Da Jiang Innovations, a technology company started in 2006 in Shenzhen, China, by young technology entrepreneur Frank Wang.

In less than a decade, DJI eclipsed its global competitors in market share for low-cost, reliable drones. As late as 2021, DJI controlled more than 70% of the U.S. hobby drone market, and up to 90% of the U.S. commercial drone market, which includes many public safety agencies and infrastructure operators. The company’s rapid growth and market dominance led observers to speculate that the Chinese government had been investing in the company, helping it both to scale production and to build an impressive technological advantage.

DJI maintains that it is privately held and that it has not received “direct” investments from the Chinese government. However, research by IPVM, an independent physical security technology research organization, found that DJI has received infusions from at least four investment firms “owned or administered by Beijing” since 2018, including one from a Chinese state asset manager that plays a key role in Chinese military-civil partnerships.

DJI’s massive U.S. market share, and in particular its popularity among public safety organizations and infrastructure owners and operators, combined with compelling evidence of strong ties to the Chinese government, has raised concern among national security experts and government officials alike. DJI also has access to cheap microprocessors and raw materials, as well as cheap labor. This access, arising from the company’s proximity to China’s “Silicon Valley” and alleged subsidies from the Chinese government, provides DJI a significant advantage over most companies and entrepreneurs.

Because of how they are deployed operationally, drones have uniquely privileged access to sensitive system and enterprise information. Organizations use this information to improve operations, secure U.S. critical infrastructure, identify and mitigate vulnerabilities, conduct search and rescue operations, and respond to natural disasters. Drones provide the data and imagery used for vital decision-making and planning. However, in the hands of an adversary, that same data offers the potential for data exfiltration, espionage and exploitation. And this is exactly the concern with Chinese-manufactured drone technology, specifically DJI’s.

High-resolution optical and thermal cameras, advanced sensor packages, access to enterprise wireless networks, small size and high maneuverability make DJI drones sophisticated spying systems as well. Simply put, DJI drones provide potential platforms for Chinese espionage and sabotage.

Citing “increased awareness of cyber vulnerabilities associated with DJI products,” the U.S. Army instituted a partial grounding of the platforms in August 2017. In May 2018, again citing cybersecurity concerns, the Department of Defense suspended all procurement of commercial off-the-shelf drones. Referencing the 2018 action, the DOD in 2021 reaffirmed its position that “systems produced by Da Jiang Innovations pose potential threats to national security.” Further, the Fiscal Year 2020 National Defense Authorization Act codified into law a provision banning the purchase and use of drones and components manufactured in countries known to be national security threats. The 2023 NDAA prohibits the DOD and its contractors from using Chinese-manufactured surveillance drones.

The security concerns and guidance extend to industry as well. In May 2019, the Cybersecurity and Infrastructure Security Agency issued an alert to industry highlighting the cybersecurity concerns of Chinese-manufactured drones. Later that year, the Department of the Interior grounded its entire fleet of Chinese-manufactured drones, citing national security concerns. In 2020, the Department of Commerce added DJI to its Entity List, which bans U.S.-based companies from exporting technology to companies designated as a national security concern. DJI was subsequently prohibited from exhibiting at CES, one of the biggest annual tech events, because of its designation on the Commerce Entity List.

But that’s not enough. In addition to continuing to press for implementation of the mitigation measures outlined in CISA’s 2019 alert to industry, the U.S. government and regulatory agencies should prohibit new procurement of Chinese-manufactured drone technologies and urge all regulated critical infrastructure owners and operators to immediately remove such technologies from their fleets. Furthermore, the U.S. government should clearly and emphatically advise all public safety agencies of the risks of Chinese-manufactured drone systems and encourage them, in coordination with state and local officials, to discontinue use in a manner that does not further endanger public safety. Agencies and legislatures with jurisdiction should also move expeditiously to prohibit new acquisition of Chinese-manufactured drone technologies and identify alternative capabilities to support public safety operations.

The urgency around this threat could not be greater given the mission-critical roles of infrastructure owners and operators and public safety organizations. We therefore ask lawmakers and policymakers not only to revisit the issue, as Sens. Mark Warner, D-Va., Marsha Blackburn, R-Tenn., and a bipartisan group of their colleagues urged in a letter to CISA last week, but also to work with industry, as well as state, local, tribal and territorial governments, to outline and implement a comprehensive approach to eliminating all drones manufactured by companies with ties to the Chinese Communist Party from critical infrastructure and public safety inventories and supply chains.

Brian Harrell is the former assistant secretary for infrastructure protection at the U.S. Department of Homeland Security.

Travis Moran is a retired federal law enforcement officer and security adviser for the electricity subsector.

The US cybersecurity strategy won’t address today’s threats with regulation alone https://cyberscoop.com/national-cybersecurity-strategy-regulation/ Thu, 16 Mar 2023 15:34:39 +0000 https://cyberscoop.com/?p=72314 The Biden administration needs to foster greater public-private collaboration, involve global partners and help build the cyber workforce to fight growing digital threats. 

Plenty of “unidentified flying objects” have appeared in the news over the past several weeks, yet cybersecurity professionals will tell you that we don’t need to look up to find a more daunting and real threat to national security.

Fortunately, President Biden just released the administration’s national cyber strategy. Coupled with industry collaboration, it’s an effective approach that represents a new hope for a safer and more economically prosperous future. Furthermore, the strategy is a much-needed step toward a clear roadmap for collaboration between agencies and industry partners, particularly in the technology sector. Prior federal cybersecurity strategic documents have lacked specificity, materially undermining their successful implementation and inhibiting stakeholder engagement. 

But it is discouraging to those of us on the frontlines of cybersecurity to see that the strategy places so many of its “eggs” in the “basket” of regulation. As we hear more from the administration on its strategy, it is critical that the federal government articulate a vision of what specific gaps can be filled by new regulation. In addition, I urge the administration to follow through on its stated intention to harmonize, streamline and deconflict any new or existing regulations. We need clear and effective rules of the road. And, if much of the responsibility for defending cyberspace is to lie with the “most capable and best-positioned actors” in the public and private sectors, it is important that the administration follows through on its stated intention to involve industry in this vital conversation.

A successful strategy must also take into account the U.S. government’s responsibility to get its cyber house in order, too. The strategy notes that this will require real investment on the part of key government agencies. Congress and the administration must rise to this shared challenge and offer long-term sustainable investments. 

On a more basic level, I applaud the administration for developing clear and measurable goals and hope that the promised implementation plan will deliver ambitious yet realistic timelines. As the old saying goes, “If you can’t measure it, you can’t improve it.” Without tracking progress in cyber risk reduction, the strategy will be nothing more than a thought experiment. 

Speaking of risk, the cyber sector is increasingly global — and cybersecurity requires the engagement of countries around the world. It is critical that the new strategy delivers on the promised future vision for international, public and private collaboration — including the role of international standards bodies. 

Lastly, and perhaps most importantly, this strategy is an opportunity to demonstrate that the U.S. cybersecurity workforce is a top priority for the administration. Our economy depends on innovative companies in the cybersecurity sector for unprecedented opportunities and prosperity. In real terms, this means good-paying jobs for Americans from all walks of life.

With this new strategy, industry will continue to step up to the plate to equip a new U.S. cybersecurity workforce to maintain our nation’s security, defend the digital and traditional economy, address software vulnerabilities and protect infrastructure. We don’t need a patchwork of disparate regulations; rather, we need a consistent set of standards that allows industry to drive security and resilience.

It is my hope and expectation that with this new approach, industry and government can come together to deliver comprehensive cybersecurity that is consistent, reflects a constantly evolving threat landscape and incorporates the interconnected, global nature of today’s digital environment. 

With the hard work of cybersecurity professionals, the leadership of the technology industry and the strategic support of the federal government, we can work toward a U.S. cybersecurity posture that is fit for purpose and reflects the constantly evolving global threat landscape. Our economy and national security depend on it. Let’s get to work. 

Jason Oxman is the president and CEO of the Information Technology Industry Council (ITI).

New cyber reality: With great interdependence comes great liability https://cyberscoop.com/national-cybersecurity-strategy-liability-opinion/ Thu, 02 Mar 2023 20:24:37 +0000 https://cyberscoop.com/?p=71960 Biden's cybersecurity strategy rightly advocates for more regulation. For companies doing security right, there’s no need to panic.

For more than a decade, government leaders have grappled with an ever-deepening reliance on digital technologies and communications without an aggressive approach to securing them. Technology vendors have pushed their products to market under the pretense that liability shifts once products are delivered, bolstering their position in the marketplace with security-by-design claims or after-market protections. Security products and partnerships offer a complex tapestry of add-ons to backstop the black hole that is identifying and mitigating every potential threat or exploit.

In security consulting, there’s an adage suggesting a 60/40 rule when analyzing sectors’ willingness to sink costs into impending regulation without a forcing mechanism. Sixty percent of companies will likely wait and see how 40% of leading companies respond. For cybersecurity regulations, it’s more likely 80/20. The national cybersecurity strategy released Thursday decidedly states that’s not good enough. While there’s clearly room for improvement at every level, companies already taking cybersecurity seriously should not be panic-stricken by the new strategy document.  

Mobilizing concurrent themes  

The national cybersecurity strategy was not released in a vacuum. U.S. agencies such as the Cybersecurity and Infrastructure Security Agency and the National Institute of Standards and Technology have updated various strategies, standards, recommendations and best practices in the past year. The new Network and Information Security Directive, or NIS2, in Europe tasks member states with a strategic cyber reassessment. It suggests entities assess the proportionality of their risk management activities, consider their individual degree of exposure to risk, and weigh the societal and economic impacts stemming from potential incidents. Meanwhile, the United Nations is attempting to improve international law enforcement capacity in cybersecurity, most notably by centering on legal specificity around “intent” and “intentionality” when actors or groups carry out potentially criminal activities.

Owners and operators of critical infrastructure — oil and gas, power and utilities, water treatment and purification facilities, manufacturing, transportation, hospitals, connected buildings and more — are responsible for securing their operations and processes from the inside out, with assorted regulatory and compliance requirements within and across each sector. Critical infrastructure cybersecurity presents a massive needle-in-a-haystack problem. Where information technology sees many vulnerabilities likely to be exploited in similar ways across mainstream, ubiquitous systems, operational technology is largely proprietary and must be assessed case by case. The oversimplification of their differences leads to a contextual gap when translating roles and responsibilities into tasks and capabilities for government, and into business continuity and disaster recovery for industry.

Visibility gaps across critical and interdependent sectors allow for the threat landscape to continue to grow. The prevailing argument that market forces are not enough is married to the fact that some data does suggest that regulation can work. As cybersecurity experts regularly point out, it has always been the how that matters most. If we’re talking about available data, we only have regulatory data from regulated industries. In the same vein, we have more attack data in sectors where technologies and communications provide data to analyze, i.e., have security logs, tools and monitoring. 

Sector-specific security mandates were on the table well before the release of the national strategy. The strategy overwhelmingly welcomes private sector input, workforce enhancement, vendor cooperation and the adoption of security by design and cyber-informed engineering. How success is measured will determine the real impact, or failure, of the strategy in the years to come. That said, the agencies and authorities outlined above have always struggled to get cyber metrics right, even with audacious goal setting.

Stones thrown at glass houses  

The new strategy demands a holistic and prudent reckoning with institutional baggage and longstanding security neglect. Unfortunately, there is no single source of truth to turn to for advice across the myriad agencies and authorities interoperating at the federal level. Each company therefore must identify internal teams or champions who act as its own independent advisers, conducting literature reviews and consensus mapping that cross-reference relevant standards, regulations, suggestions and best practices. Security leaders and teams then must map: 

  • The status of their security program, risk ownership, and visibility gaps 
  • Existing management and mitigation tools, resources, and capacity
  • The development environment of third-party products and security management of suppliers  
  • Enterprise content management, data security and PII
  • Operational products and services, hardware, software, IoT, cloud offerings, etc.  
  • Upstream and downstream supply chain 
  • Operational technology and cyber-physical security   
  • The sea of available add-on security products 

This status quo continues to confuse stakeholders — by accident or design — as to which party is in the best position to avoid losses. It comes as no surprise that this model has not served any industry well. Risk management has countless starting points and no finish line. Risk tolerance therefore becomes a cycle: entities map the necessary security components of their organizations, attempt to understand how those components fulfill various portions of existing standards, regulations, suggestions and best practices, and hope that compliance regimes measure the right things. Those “right things” are ultimately industry-specific, and so the cycle begins again.

The forest or the trees?  

The government has an undeniable duty to direct efforts and regulations to avoid worst-case scenarios in the physical world, in cyberspace and at their convergence. To date, roles and responsibilities from government efforts have not translated into appropriate tasks and capabilities for implementation. Government entities want software and hardware inventories, mapped CVEs, tracked threat intelligence and bottom-up situational awareness, but they lack the capacity to collect this security census data on a national level. Industry has the individual capacity to collect this information. The government’s latest strategy implores it to do so.

There is an indisputable disconnect in functional context for overarching federal cyber governance. Debating the hot-button issues of vendors hacking back and making companies liable for software insecurity doesn’t help us help more asset owners across critical infrastructure get security right. As previously pointed out, “there is a thriving global private sector for cybersecurity products and solutions which is increasingly lucrative and largely unregulated.” If iron sharpens iron, software vendors and technology manufacturers should absolutely hold themselves to a standard of care. If mistakes are made along the way, heads do not need to roll for better lessons to be learned and for those lessons to lead to actionable outcomes. Regardless of the national strategy, the “basics” of which components a security program needs remain relatively unchanged.

Danielle Jablanski is an OT cybersecurity strategist at Nozomi Networks and a nonresident fellow at the Cyber Statecraft Initiative of the Atlantic Council’s Scowcroft Center for Strategy and Security.
