
Answering the 12 biggest questions about Apple and Google’s new coronavirus tracking project

On Friday, Google and Apple teamed up for an ambitious emergency project, with a new protocol to track the ongoing coronavirus outbreak. It is an urgent, complex project, with enormous consequences for privacy and public health. Similar projects have been successful in Singapore and other countries, but it remains to be seen whether U.S. public health authorities can manage such a project – even if the world’s largest technology companies lend a hand.

We’ve covered the basics of the project here, but there’s a lot more to dig into – starting with the technical documents published by the two companies. They reveal a lot about what Apple and Google are actually trying to do with this sensitive data and where the project falls short. So we dug into those documents and tried to answer the twelve most pressing questions, starting from the very beginning:

What does this do?

When someone gets sick with a new disease like this year’s coronavirus, public health workers try to limit its spread by finding and quarantining everyone the infected person has come into contact with. This is called contact tracing, and it is a critical tool for controlling outbreaks.

Essentially, Apple and Google have built an automated contact tracing system. It differs from conventional contact tracing and is probably most useful when combined with conventional methods. Most importantly, it can operate on a much larger scale than conventional contact tracing, which will be necessary given how far the outbreak has spread in most countries. Because it comes from Apple and Google, some of this functionality will eventually be built into Android phones and iPhones at the operating system level. That could make this technical solution available to more than three billion phones around the world – something that would otherwise be impossible.

It’s important to note that what Apple and Google are working on together is a framework and not an app. They handle the plumbing and guarantee the privacy and security of the system, but leave the actual apps that use it to others.

How does it work?

Basically, this system lets your phone log other phones nearby. As long as the system is active, your phone will periodically broadcast a small, unique, and anonymous piece of code derived from that phone’s unique ID. Other phones in range receive and remember that code, compiling a log of the codes they have received and when they received them.

When a person using the system receives a positive diagnosis, they can choose to submit their codes to a central database. When your phone checks in with that database, it runs a local scan to see whether any of the codes in its log match the IDs in the database. If there is a match, your phone notifies you that you have been exposed.

That’s the simple version, but you can already see how useful this kind of system could be. Essentially, it lets you log points of contact (exactly what contact tracers need) without collecting precise location data, while keeping only minimal information in the central database.
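To make that flow concrete, here is a minimal sketch in Python of the broadcast-log-match cycle. It is purely illustrative: it stands in for the real protocol with random per-broadcast codes and an in-memory "central database," and every name in it (Phone, upload_diagnosis, and so on) is hypothetical rather than anything from Apple and Google’s specification.

```python
import os
import time

class Phone:
    """Toy model of one handset participating in the system (illustrative only)."""

    def __init__(self, owner):
        self.owner = owner
        self.sent_codes = []      # codes this phone has broadcast
        self.received_log = []    # (code, timestamp) pairs heard from nearby phones

    def broadcast(self):
        # In the real system the code is derived from the phone's keys and
        # rotates regularly; here we just use a random 16-byte value.
        code = os.urandom(16)
        self.sent_codes.append(code)
        return code

    def hear(self, code):
        # Other phones in range log the codes they receive, and when.
        self.received_log.append((code, time.time()))

    def upload_diagnosis(self, central_db):
        # A user who tests positive can choose to publish their codes.
        central_db.extend(self.sent_codes)

    def check_exposure(self, central_db):
        # Each phone periodically downloads the database and scans its own
        # local log for matches; the log itself never leaves the device.
        published = set(central_db)
        return any(code in published for code, _ in self.received_log)


central_db = []                        # stand-in for the shared server-side list
alice, bob = Phone("alice"), Phone("bob")

bob.hear(alice.broadcast())            # Alice and Bob spend time near each other
alice.upload_diagnosis(central_db)     # Alice later tests positive and uploads

print(bob.check_exposure(central_db))  # True: Bob is warned of his exposure
```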

How do you report that you’re infected?

The published documents are less detailed on this point. The specification assumes that only legitimate health care providers will be able to certify a diagnosis, so that only confirmed diagnoses generate warnings. (We don’t want trolls and hypochondriacs flooding the system.) It’s not entirely clear how that will happen, but it seems like a solvable problem, whether it’s managed through the app itself or through some kind of additional authentication before an infection is registered centrally.

How does the phone send those signals?

The short answer is: Bluetooth. The system works on the same antennas as your wireless earbuds, but it uses the Bluetooth Low Energy (BLE) version of the spec, meaning it won’t drain your battery as noticeably. This specific system uses a version of the BLE beacon system that has been in use for years, adapted to work as a two-way code swap between phones.

How far does the signal reach?

We don’t really know yet. In theory, BLE can register connections up to 100 meters away, but that depends heavily on specific hardware settings, and the signal is easily blocked by walls. Many of the most common uses of BLE – such as pairing an AirPods case with your iPhone – have an effective range closer to six inches. Engineers on the project are optimistic that they can tune the range in software by thresholding signal strength – essentially throwing out weaker signals – but since there is no real software yet, most of the relevant decisions have yet to be made.
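As a rough illustration of what that kind of thresholding could look like, the sketch below filters received Bluetooth advertisements by signal strength before logging them. The data structure and the -70 dBm cutoff are assumptions made up for the example, not values from the specification.

```python
from dataclasses import dataclass

@dataclass
class Advertisement:
    code: bytes       # the rotating proximity code carried in the broadcast
    rssi_dbm: int     # received signal strength, roughly correlated with distance
    timestamp: float  # when the advertisement was heard

# Hypothetical cutoff: keep only relatively strong (i.e. probably nearby) signals.
RSSI_THRESHOLD_DBM = -70

def filter_by_strength(adverts, threshold=RSSI_THRESHOLD_DBM):
    """Drop advertisements weaker than the threshold before adding them to the log."""
    return [a for a in adverts if a.rssi_dbm >= threshold]

heard = [
    Advertisement(b"\x01" * 16, rssi_dbm=-55, timestamp=0.0),  # likely close by
    Advertisement(b"\x02" * 16, rssi_dbm=-90, timestamp=1.0),  # likely far away or behind a wall
]
print(len(filter_by_strength(heard)))  # 1: only the stronger signal is kept
```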

At the same time, we are not entirely sure what the best range is for this kind of notification. Social distancing rules typically call for staying six feet away from others in public, but that could easily change as we learn more about how the new coronavirus spreads. Officials will also be wary of sending so many alerts that the app becomes useless, which could narrow the ideal range even further.

So is it an app?

Kind of. In the first part of the project (which is expected to be completed by mid-May), the system will be built into official public health apps, which will issue the BLE signals in the background. Those apps will be built by state-level health agencies, not tech companies, meaning the agencies are in charge of many important decisions about how to inform users and what to recommend if a person is exposed.

Ultimately, the team hopes to build that functionality directly into the iOS and Android operating systems, similar to a native dashboard or a toggle in the Settings menu. But that will take months, and even then users will still be prompted to download an official public health app if they need to submit information or receive an alert.

Is this really safe?

Mostly, the answer seems to be yes. Based on the documents published Friday, it would be quite difficult to work back to sensitive information from the Bluetooth codes alone, meaning you can run the app in the background without worrying that it is compiling anything incriminating about you. The system itself does not identify you personally and does not log your location. Of course, the health apps that use the system will ultimately have to know who you are if you want to upload your diagnosis to health officials.

Can hackers use this system to make a large list of everyone who has had the disease?

This would be very difficult, but not impossible. The central database stores all the codes sent out by infected people while they were contagious (that’s what your phone checks its log against), and it is entirely plausible that a bad actor could obtain those codes. The engineers have done a good job of ensuring that you can’t work directly back from those codes to a person’s identity, but it is possible to imagine scenarios in which those protections fall apart.

A diagram from the cryptography white paper explaining the three levels of keys

To explain why, we need to get a little more technical. The cryptographic specification lays out three levels of keys for this system: a private master key that never leaves your device, a daily tracing key generated from that private key, and then the string of “proximity IDs” generated from the daily key. Each of these steps is performed through a cryptographically robust one-way function, so you can generate a proximity ID from a daily key, but not the other way around. Just as important, you can tell which proximity IDs came from a specific daily key, but only if you start with the daily key in hand.

The log on your phone is a list of proximity IDs (the lowest level of key), so those entries are not much use on their own. If you test positive, you share a little more: the daily keys for each day you were contagious. Since those daily keys are now public, your device can do the math and tell whether any of the proximity IDs in your log came from them; if any did, it generates a warning.
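Here is a simplified sketch of that three-level scheme and the matching step. It uses HMAC-SHA256 as a stand-in for the one-way functions; the actual specification defines its own key-derivation details, so the key sizes, labels, interval count, and function names below are assumptions for illustration only.

```python
import hmac
import hashlib
import os

def daily_key(master_key: bytes, day_number: int) -> bytes:
    # Level 2: a daily tracing key derived one-way from the private master key.
    return hmac.new(master_key, b"daily-key|" + str(day_number).encode(),
                    hashlib.sha256).digest()[:16]

def proximity_ids(day_key: bytes, intervals_per_day: int = 96) -> list:
    # Level 3: the rotating proximity IDs derived one-way from a daily key
    # (e.g. one per ~15-minute interval; the count here is an assumption).
    return [
        hmac.new(day_key, b"proximity-id|" + str(i).encode(),
                 hashlib.sha256).digest()[:16]
        for i in range(intervals_per_day)
    ]

def exposure_matches(local_log: set, published_daily_keys: list) -> bool:
    # When an infected user publishes their daily keys, every phone can
    # regenerate the proximity IDs those keys produce and compare them
    # against its own local log of codes it heard over Bluetooth.
    for key in published_daily_keys:
        if local_log & set(proximity_ids(key)):
            return True
    return False

# Example: Alice's phone derives and broadcasts IDs; Bob's phone logs one of them.
alice_master = os.urandom(32)                  # never leaves Alice's device
alice_day_42 = daily_key(alice_master, 42)
bob_log = {proximity_ids(alice_day_42)[10]}    # Bob heard one of Alice's codes

# Alice tests positive and publishes only her daily keys for the contagious window.
print(exposure_matches(bob_log, [alice_day_42]))  # True: Bob gets an alert
```

Note that the one-way derivation means Bob’s log of proximity IDs is useless for identifying anyone until a matching daily key is published.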

As cryptographer Matt Tait points out, this results in a meaningful reduction in privacy for people who report a positive test through this system. Once those daily keys are public, you can see which proximity IDs are associated with a particular key. (Remember, that’s how the app confirms exposure.) While specific apps may limit the information they share, and I’m sure everyone will do their best, you are now outside the hard protections of cryptography. It is possible to imagine a malicious app or a network of Bluetooth sniffers that collects proximity IDs in advance, connects them to specific identities, and later correlates them with daily keys scraped from the central list. It would be difficult to do this, and it would be even harder to do it for every person on the list. Even then, all you would get from the server is the last 14 days of codes. (That’s all that’s relevant for contact tracing, so it’s all the central database stores.) But it wouldn’t be flatly impossible, which is what you are normally aiming for in cryptography.

In summary, it is difficult to absolutely guarantee someone’s anonymity if they share through this system that they have tested positive. But in the system’s defense, that is a difficult guarantee to make under any circumstances. Under social distancing, we are all limiting our personal contacts, so if you hear that you were exposed on a given day, the list of potential vectors will already be quite short. Add in the quarantine and sometimes hospitalization associated with a COVID-19 diagnosis, and it is very difficult to keep medical privacy completely intact while still warning people who may have been exposed. In some ways, that tradeoff is inherent to contact tracing; technical systems can only mitigate it.

Additionally, the best contact tracing method we have right now is a person interviewing you and asking who you have been in contact with. It is basically impossible to build a completely anonymous contact tracing system.

Could Google, Apple or a hacker use it to find out where I’ve been?

Only under very specific circumstances. If someone collects your proximity IDs, and you test positive and decide to share your diagnosis, and they run through the whole rigamarole described above, they could potentially use the system to link you to a specific place where your proximity IDs were spotted in the wild.

But it is important to note that neither Apple nor Google is sharing information that you could put directly on a map. Google has a lot of that information, and the company has shared it at an aggregated level, but it is not part of this system. Google and Apple may already know where you are, but they are not linking that information to this dataset. So while an attacker might be able to work back to that information, they would still end up knowing less than most of the apps on your phone.

Can someone use this to find out who I’ve been in contact with?

This would be considerably more difficult. As mentioned above, your phone keeps a log of all the proximity IDs it receives, but the spec makes clear that this log should never leave your phone. As long as your log stays on your device, it is protected by the same device encryption that protects your texts and emails.

Even if a bad actor stole your phone and managed to break that protection, they would only have the codes you received, and it would be very difficult to figure out where those codes originally came from. Without a daily key to work from, they would have no clear way to correlate one proximity ID with another, so it would be hard to pick out a single person from the mess of Bluetooth pings, let alone figure out who met whom. And crucially, the robust one-way cryptography makes it impossible to work directly back to the associated daily key or personal ID.

What should I do if I don’t want my phone to do this?

Don’t install the app, and leave the “contact tracing” setting disabled when the operating systems are updated this summer. Apple and Google insist that participation is voluntary, and unless you take proactive steps to opt in to contact tracing, you should be able to use your phone without getting involved at all.

Is this just a surveillance system in disguise?

This is a tricky question. In a sense, contact tracing is surveillance. Public health work is full of medical surveillance, simply because it is the only way to find infected people who are not sick enough to see a doctor. The hope is that, given the catastrophic damage already done by the pandemic, people will be willing to accept this level of surveillance as a temporary measure to stem the further spread of the virus.

A better question is whether this system conducts that surveillance in a fair and helpful way. It matters a great deal that the system is voluntary, and it matters a great deal that it does not share more data than it needs to. Still, all we have right now is the protocol, and it remains to be seen whether governments will try to implement this idea in a more invasive or heavy-handed way.

As the protocol is implemented in specific apps, there will be many important decisions about how it is used and how much data is collected alongside it. Governments will make those decisions, and they may make them badly – or worse, they may not make them at all. So even if you’re excited about what Apple and Google have set out here, they can only throw the ball – a lot depends on what governments do once they catch it.
