Hackers could shut down satellites – or turn them into weapons

Two CubeSats, part of a constellation built and operated by Planet Labs Inc. to take images of Earth, were launched from the International Space Station on May 17, 2016. NASA

William Akoto, University of Denver

Last month, SpaceX became the operator of the world’s largest active satellite constellation. As of the end of January, the company had 242 satellites orbiting the planet with plans to launch 42,000 over the next decade. This is part of its ambitious project to provide internet access across the globe. The race to put satellites in space is on, with Amazon, U.K.-based OneWeb and other companies chomping at the bit to place thousands of satellites in orbit in the coming months.

These new satellites have the potential to revolutionize many aspects of everyday life – from bringing internet access to remote corners of the globe to monitoring the environment and improving global navigation systems. Amid all the fanfare, a critical danger has flown under the radar: the lack of cybersecurity standards and regulations for commercial satellites, in the U.S. and internationally. As a scholar who studies cyber conflict, I’m keenly aware that this, coupled with satellites’ complex supply chains and layers of stakeholders, leaves them highly vulnerable to cyberattacks.

If hackers were to take control of these satellites, the consequences could be dire. On the mundane end of the scale, hackers could simply shut satellites down, denying access to their services. Hackers could also jam or spoof the signals from satellites, creating havoc for critical infrastructure. This includes electric grids, water networks and transportation systems.

Some of these new satellites have thrusters that allow them to speed up, slow down and change direction in space. If hackers took control of these steerable satellites, the consequences could be catastrophic. Hackers could alter the satellites’ orbits and crash them into other satellites or even the International Space Station.

Commodity parts open a door

Makers of these satellites, particularly small CubeSats, use off-the-shelf technology to keep costs low. The wide availability of these components means hackers can analyze them for vulnerabilities. In addition, many of the components draw on open-source technology. The danger here is that hackers could insert back doors and other vulnerabilities into satellites’ software.

The highly technical nature of these satellites also means multiple manufacturers are involved in building the various components. The process of getting these satellites into space is also complicated, involving multiple companies. Even once they are in space, the organizations that own the satellites often outsource their day-to-day management to other companies. With each additional vendor, the vulnerabilities increase as hackers have multiple opportunities to infiltrate the system.

CubeSats are small, inexpensive satellites. Svobodat/Wikimedia Commons, CC BY

Hacking some of these CubeSats may be as simple as waiting for one of them to pass overhead and then sending malicious commands using specialized ground antennas. Hacking more sophisticated satellites might not be that hard either.

Satellites are typically controlled from ground stations. These stations run computers with software vulnerabilities that can be exploited by hackers. If hackers were to infiltrate these computers, they could send malicious commands to the satellites.

A history of hacks

This scenario played out in 1998 when hackers took control of the U.S.-German ROSAT X-ray satellite. They did it by hacking into computers at the Goddard Space Flight Center in Maryland. The hackers then instructed the satellite to aim its solar panels directly at the sun. This effectively fried its batteries and rendered the satellite useless. The defunct satellite eventually crashed back to Earth in 2011. Hackers could also hold satellites for ransom, as happened in 1999 when hackers took control of the U.K.’s Skynet satellites.

Over the years, the threat of cyberattacks on satellites has gotten more dire. In 2008, hackers, possibly from China, reportedly took full control of two NASA satellites, one for about two minutes and the other for about nine minutes. In 2018, another group of Chinese state-backed hackers reportedly launched a sophisticated hacking campaign aimed at satellite operators and defense contractors. Iranian hacking groups have also attempted similar attacks.

Although the U.S. Department of Defense and National Security Agency have made some efforts to address space cybersecurity, the pace has been slow. There are currently no cybersecurity standards for satellites and no governing body to regulate and ensure their cybersecurity. Even if common standards could be developed, there are no mechanisms in place to enforce them. This means responsibility for satellite cybersecurity falls to the individual companies that build and operate them.

Market forces work against space cybersecurity

SpaceX, headquartered in Hawthorne, Calif., plans to launch 42,000 satellites over the next decade. Bruno Sanchez-Andrade Nuño/Wikimedia Commons, CC BY

As they compete to be the dominant satellite operator, SpaceX and rival companies are under increasing pressure to cut costs. There is also pressure to speed up development and production. This makes it tempting for the companies to cut corners in areas like cybersecurity that are secondary to actually getting these satellites in space.

Even for companies that make cybersecurity a high priority, the costs associated with guaranteeing the security of each component could be prohibitive. This problem is even more acute for low-cost space missions, where the cost of ensuring cybersecurity could exceed the cost of the satellite itself.

To compound matters, the complex supply chain of these satellites and the multiple parties involved in their management means it’s often not clear who bears responsibility and liability for cyber breaches. This lack of clarity has bred complacency and hindered efforts to secure these important systems.

Regulation is required

Some analysts have begun to advocate for strong government involvement in the development and regulation of cybersecurity standards for satellites and other space assets. Congress could work to adopt a comprehensive regulatory framework for the commercial space sector. For instance, it could pass legislation that requires satellite manufacturers to develop a common cybersecurity architecture.

They could also mandate the reporting of all cyber breaches involving satellites. There also needs to be clarity on which space-based assets are deemed critical in order to prioritize cybersecurity efforts. Clear legal guidance on who bears responsibility for cyberattacks on satellites will also go a long way to ensuring that the responsible parties take the necessary measures to secure these systems.

Given the traditionally slow pace of congressional action, a multi-stakeholder approach involving public-private cooperation may be warranted to ensure cybersecurity standards. Whatever steps government and industry take, it is imperative to act now. It would be a profound mistake to wait for hackers to gain control of a commercial satellite and use it to threaten life, limb and property – here on Earth or in space – before addressing this issue.


William Akoto, Postdoctoral Research Fellow, University of Denver

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI could constantly scan the internet for data privacy violations, a quicker, easier way to enforce compliance

You leave bits of your personal data behind online, and companies are happy to trade in them. metamorworks/ iStock/Getty Images Plus

Karuna Pande Joshi, University of Maryland, Baltimore County

You’re trailing bits of personal data – such as credit card numbers, shopping preferences and which news articles you read – as you travel around the internet. Large internet companies make money off this kind of personal information by sharing it with their subsidiaries and third parties. Public concern over online privacy has led to laws designed to control who gets that data and how they can use it.

The battle is ongoing. Democrats in the U.S. Senate recently introduced a bill that includes penalties for tech companies that mishandle users’ personal data. That law would join a long list of rules and regulations worldwide, including the Payment Card Industry Data Security Standard that regulates online credit card transactions, the European Union’s General Data Protection Regulation, the California Consumer Privacy Act that went into effect in January, and the U.S. Children’s Online Privacy Protection Act.

Internet companies must adhere to these regulations or risk expensive lawsuits or government sanctions, such as the Federal Trade Commission’s recent US$5 billion fine imposed on Facebook.

But it is technically challenging to determine in real time whether a privacy violation has occurred, an issue that is becoming even more problematic as internet data moves to extreme scale. To make sure their systems comply, companies rely on human experts to interpret the laws – a complex and time-consuming task for organizations that constantly launch and update services.

My research group at the University of Maryland, Baltimore County, has developed novel technologies for machines to understand data privacy laws and enforce compliance with them using artificial intelligence. These technologies will enable companies to make sure their services comply with privacy laws and also help governments identify in real time those companies that violate consumers’ privacy rights.

Before machines can search for privacy violations, they need to understand the rules. Imilian/iStock/Getty Images Plus

Helping machines understand regulations

Governments generate online privacy regulations as plain text documents that are easy for humans to read but difficult for machines to interpret. As a result, the regulations need to be manually examined to ensure that no rules are being broken when a citizen’s private data is analyzed or shared. This affects companies that now have to comply with a forest of regulations.

Rules and regulations often are ambiguous by design because societies want flexibility in implementing them. Subjective concepts such as good and bad vary among cultures and over time, so laws are drafted in general or vague terms to allow scope for future modifications. Machines can’t process this vagueness – they operate in 1s and 0s – so they cannot “understand” privacy the way humans do. Machines need specific instructions to understand the knowledge on which a regulation is based.

One way to help machines understand an abstract concept is to build an ontology, or a graph representing the knowledge of that concept. Borrowing the concept of an ontology from philosophy, AI researchers have developed computer languages, such as OWL, that can define the concepts and categories in a subject area or domain, show their properties and show the relations among them. Ontologies are sometimes called “knowledge graphs,” because they are stored in graphlike structures.

An example of a simple knowledge graph. Karuna Pande Joshi, CC BY-ND
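To give a rough sense of the idea, a knowledge graph can be modeled as a set of subject-predicate-object triples. The sketch below uses plain Python with invented entity names; production systems would use RDF/OWL tooling such as rdflib rather than raw tuples.

```python
# A minimal knowledge-graph sketch: facts stored as
# (subject, predicate, object) triples. Entity names are illustrative.
triples = {
    ("DataController", "is_a", "Entity"),
    ("BreachNotification", "is_a", "Obligation"),
    ("DataController", "has_obligation", "BreachNotification"),
    ("BreachNotification", "deadline", "72 hours"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# What obligations does a data controller have?
print(query("DataController", "has_obligation", None))
# [('DataController', 'has_obligation', 'BreachNotification')]
```

The wildcard query is what makes the graph useful: the same store answers "what are this entity's obligations?" and "which rules have a deadline?" without any schema changes.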

When my colleagues and I began looking at the challenge of making privacy regulations understandable by machines, we determined that the first step would be to capture all the key knowledge in these laws and create knowledge graphs to store it.

Extracting the terms and rules

The key knowledge in the regulations consists of three parts.

First, there are “terms of art”: words or phrases that have precise definitions within a law. They help to identify the entity that the regulation describes and allow us to describe its roles and responsibilities in a language that computers can understand. For example, from the EU’s General Data Protection Regulation, we extracted terms of art such as “Consumers and Providers” and “Fines and Enforcement.”

Next, we identified deontic rules: sentences or phrases that express duties and obligations, drawing on the deontic branch of philosophical modal logic. These rules mainly fall into four categories. “Permissions” define the rights of an entity or actor. “Obligations” define the responsibilities of an entity or actor. “Prohibitions” are conditions or actions that are not allowed. “Dispensations” are optional or nonmandatory statements.

The researchers’ application automatically extracted Deontic rules, such as permissions and obligations, from two privacy regulations. Entities involved in the rules are highlighted in yellow. Modal words that help identify whether a rule is a permission, prohibition or obligation are highlighted in blue. Gray indicates the temporal or time-based aspect of the rule. Karuna Pande Joshi, CC BY-ND

To explain this with a simple example, consider the following:

  • You have permission to drive.
  • But to drive, you are obligated to get a driver’s license.
  • You are prohibited from speeding (and will be punished if you do so).
  • You can park in areas where you have the dispensation to do so (such as paid parking, metered parking or open areas not near a fire hydrant).

Some of these rules apply to everyone uniformly in all conditions, while others may apply only partially, to a single entity, or based on conditions agreed to by everyone.

Similar rules that describe do’s and don’ts apply to online personal data. There are permissions and prohibitions to prevent data breaches. There are obligations on the companies storing the data to ensure its safety. And there are dispensations made for vulnerable demographics such as minors.

A knowledge graph for GDPR regulations. Karuna Pande Joshi, CC BY-ND

My group developed techniques to automatically extract these rules from the regulations and save them in a knowledge graph.

Third, we had to figure out how to include the cross references that legal regulations often use to point to text in another section of the regulation or in a separate document. These are important knowledge elements that should also be stored in the knowledge graph.

Rules in place, scanning for compliance

After defining all the key entities, properties, relations, rules and policies of a data privacy law in a knowledge graph, my colleagues and I can create applications that can reason about the data privacy rules using these knowledge graphs.

These applications can significantly reduce the time it will take companies to determine whether they are complying with the data protection regulations. They can also help regulators monitor data audit trails to determine whether companies they oversee are complying with the rules.
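As a sketch of what such a compliance check might look like, suppose obligations extracted from a regulation carry deadlines, and a company's audit trail records how long each required action actually took. The action names and deadlines below are illustrative assumptions, not drawn from the researchers' system (though the 72-hour breach-notification window matches GDPR Article 33).

```python
# A compliance-check sketch: compare obligations from a knowledge graph
# against a company's audit trail. Names and deadlines are illustrative.
from datetime import timedelta

# Obligations as they might come out of a GDPR knowledge graph.
obligations = {
    "notify_breach": timedelta(hours=72),        # GDPR Art. 33 deadline
    "honor_erasure_request": timedelta(days=30),
}

# Audit trail: action -> elapsed time between trigger and completion.
audit_trail = {
    "notify_breach": timedelta(hours=96),         # completed, but late
    "honor_erasure_request": timedelta(days=12),  # on time
}

def check_compliance(obligations, audit_trail):
    """Return (action, status) for each obligation in the rule set."""
    report = []
    for action, deadline in obligations.items():
        elapsed = audit_trail.get(action)
        if elapsed is None:
            report.append((action, "missing"))
        elif elapsed <= deadline:
            report.append((action, "compliant"))
        else:
            report.append((action, "violation"))
    return report

print(check_compliance(obligations, audit_trail))
# [('notify_breach', 'violation'), ('honor_erasure_request', 'compliant')]
```

Because the rules live in data rather than code, updating to a new regulation means regenerating the obligation set from its knowledge graph, not rewriting the checker.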

This technology can also help individuals get a quick snapshot of their rights and responsibilities with respect to the private data they share with companies. Once machines can quickly interpret long, complex privacy policies, people will be able to automate many mundane compliance activities that are done manually today. They may also be able to make those policies more understandable to consumers.

Karuna Pande Joshi, Assistant Professor of Information Systems, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Billions are pouring into mobility technology – will the transport revolution live up to the hype?

Toshifumi Hotchi/Shutterstock

Neil G Sipe, The University of Queensland

Over the past decade almost US$200 billion has been invested globally in mobility technology that promises to improve our ability to get around. More than US$33 billion was invested last year alone. Another measure of interest in this area is the number of unicorns, which has doubled in the past two years.

A unicorn is a privately held startup company valued at US$1 billion or more. In early 2018 there were 22 travel and mobility unicorns. By last month the number had grown to 44.


The top categories in the mobility area are: ride hailing, with 11 unicorns (25.0%); autonomous vehicles, with 10 (22.7%); and micromobility, with 3 (6.8%). The remaining 20 unicorns are in the travel category (hotels, bookings and so on).

Mobility technology is more than just autonomous vehicles, ride hailing and e-scooters and e-bikes. It also includes: electrification (electric vehicles, charging/batteries); fleet management and connectivity (connectivity, data management, cybersecurity, parking, fleet management); auto commerce (car sharing); transportation logistics (freight, last-mile delivery); and urban air mobility.

Promised solutions, emerging problems

Much of the interest in mobility technology is coming from individuals outside the transport arena. Startups are attracting investors by claiming their technology will solve many of our transport problems.

Micromobility companies believe their e-scooters and e-bikes will solve the “first-mile last-mile” problem by enabling people to move quickly and easily between their homes or workplaces and a bus or rail station. While this might work in theory, it depends on having safe and segregated bicycle networks and frequent and widely accessible public transport services.

Ride-hailing services might relieve people of the need to own a car. But there is evidence to suggest these services are adding to traffic congestion. That’s because, unlike taxis, more of their time on the road involves travelling without any passengers.

Navigation tools (Google Maps, Apple Maps, Waze) have been around longer than most other mobility technologies and are meant to make it easier to find the least-congested route for any given trip. However, research suggests these tools might not be working as intended. The backlash against them is growing in some cities because traffic is being directed onto neighbourhood streets rather than arterial roads.

Autonomous vehicles have the goal of reducing injuries and deaths from car crashes. Only a few years ago many bold predictions were being made that these self-driving vehicles would be having positive impacts by now, but this hasn’t happened. The enthusiasm for autonomous vehicles has cooled. Some now believe we won’t see many of the social benefits for decades.


The final mobility tech area is known as mobility as a service (MAAS). It’s basically a platform designed to make better use of existing infrastructure and transport modes. MAAS begins with a journey planner that is linked to one-stop payment for a range of mobility services – ride-hailing, e-scooters, e-bikes, taxis, public transport, and so on.

MAAS is the newest entrant in the mobility tech field. It has attracted US$6.8 billion to date, but is expected to grow to over US$100 billion by 2030. This idea is creating great enthusiasm, not only among private entrepreneurs, but also in the public sector. It’s too early to know whether it will improve transportation.


3 trends are driving investment

So, why do venture capitalists continue to show so much interest in mobility technology startups despite poor company performance to date? It appears they believe personal mobility will become increasingly important. Three trends support this belief.

First, urban dwellers increasingly value the ability to move around easily. It’s thought to be a key ingredient for a liveable city. The problem is public transport is often not very good, particularly in the US and in outer suburbs in Australia.

This is due to historically low funding relative to roads. The prospect of more funding and better public transport services in the future is not good. In part that’s because many view public transport as welfare and not an essential public service. Thus, if cities want to become more liveable and competitive, they must look beyond government-funded public transport for other mobility alternatives.


The second trend is declining vehicle ownership. Since 1986 US sales of car and light trucks per capita have dropped by almost 30%. In Australia, new car sales remained relatively constant over the past decade, but a decline since 2017 is expected to continue. These trends are due in part to the cost of owning a vehicle, but also because of a growing view that owning a car may not be necessary.

This brings us to the third trend, which involves demographics and the post-millennial desire for access to mobility services rather than vehicle ownership.


These trends, combined with expectations of rising prices for these services, suggest there may be good times ahead for ride-hailing and micromobility companies. They also suggest venture capital funding for these startups will not diminish in the near future.

The future of transport isn’t simple

Transport systems are multifaceted. No one single app or technology will solve the challenges. And, as we are discovering, some of the purported solutions to problems might actually be making the situation worse.

If the goal is to get people out of their cars (for better health and quality of life and a better environment), this will require more than technology alone. Better infrastructure and public policies will be required, including better integration of land use and transport to reduce the need for travel, with congestion pricing being one option.

That is not to say technological innovations are not welcome as part of the solution, but they are just that … “part” of the solution.

Neil G Sipe, Adjunct Researcher in Transport and Planning, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Wisemen on GSA!

The Wisemen Company (Wisemen Multimedia, LLC, EIN 27-2836493, DUNS 034-931-168), an SBA HUBZone certified, minority-owned, small business concern has been awarded GSA Contract No. 47QTCA20D0024 under GSA Schedule 70 GENERAL PURPOSE COMMERCIAL INFORMATION TECHNOLOGY, EQUIPMENT, SOFTWARE, AND SERVICES.
