ResearchPod

How can we ensure private communication?


We increasingly rely on electronic communications across society. You may have heard of privacy protection methods for those messages, such as end-to-end encryption. However, can that protection be guaranteed against governments, industries, or bad actors?

How can we ensure privacy, but at the same time have the means to enforce laws and prevent malicious behaviour, and how do we develop cryptography law?

Professor Moti Yung and colleagues at the Privacy, Security, and Safety Research Group at Google LLC and at Columbia University, USA, have conceptualised ‘anamorphic’ cryptography, and have also been recognised for their contributions to the so-called ‘Crypto Wars’ debate.

Read the original article: https://doi.org/10.1007/978-3-031-07085-3_2

Read more in Research Outreach

Hello and welcome to Research Pod! Thank you for listening and joining us today.  


 

In this episode, we look at the research of Professor Moti Yung and his research partners, who investigate cryptosystems. The team have also been recognised for their contributions to the so-called ‘Crypto Wars’ debate, which weighs up how access to encryption keys should be managed and regulated. Yung and colleagues at the Privacy, Security, and Safety Research Group at Google LLC and at Columbia University, USA, have conceptualised ‘anamorphic’ cryptography so that, even if the keys are known to an adversary, pre-existing cryptographic systems can nevertheless transfer secure messages directly.  

 

In our society, we increasingly rely on electronic forms of communication and have heard about the methods put in place to protect our privacy, such as end-to-end encryption in messaging apps. Typically, if you want to send your friend a message, each of you has a public key and the corresponding private key. Anyone can have access to your public key, but your private key is unique to you. So, you can choose to write a message and encrypt it using your friend’s public key. Then, only your friend can decrypt it using their private key, meaning that they are the only one with the means to read your message. But what happens if an external party has access to your friend’s private key, or what if you are not free to choose what message to send? The privacy between you and your friend is then no longer upheld.  
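To make that exchange concrete, here is a minimal sketch in Python using the widely available cryptography package with RSA-OAEP; the key size, names, and messages are illustrative assumptions rather than details from the research.

```python
# A minimal sketch of public-key encryption between you and a friend,
# assuming the Python `cryptography` package; all values are illustrative.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Your friend generates a key pair and publishes only the public half.
friend_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
friend_public_key = friend_private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# You encrypt with the public key, which anyone is allowed to hold.
ciphertext = friend_public_key.encrypt(b"Meet me at noon", OAEP)

# Only the holder of the matching private key can read the message.
print(friend_private_key.decrypt(ciphertext, OAEP))  # b'Meet me at noon'
```

If anyone else obtains friend_private_key, that last line is no longer something only your friend can run, which is exactly the worry the rest of the discussion turns on.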

 

The privacy guaranteed by encryption relies on two assumptions: the assumption that you are free to choose and encrypt your message is called the ‘sender-freedom assumption’, and the assumption that your friend is the only person with the means to access the message is called the ‘receiver-privacy assumption’. These assumptions are the default case. However, they are sometimes impinged upon by governments, to various extents. External parties, dubbed ‘dictators’, may have legal means to gain access to private keys, or may be able to force people to send incorrect messages of their choice. The most extreme cases are often found in dictatorships, but increasingly, governments are asking for some knowledge of keys to identify national threats. This can create, and indeed has already created, tension between industry or privacy advocates and governments, known as the Crypto Wars. 

 

 

The same questions have arisen throughout the development of cryptography: how can we ensure privacy, but at the same time have the means to enforce laws and prevent malicious behaviour, and how do we develop cryptography law? One solution was the Clipper chip proposal in the early 1990s, in which the US government proposed that any strong cryptographic system should keep a copy of the key required to decrypt its messages, encrypt that copy onto every ciphertext message, and make it available to a trusted third party. Passing the ciphertext on to its receiver was predicated on this very step being authenticated. If there was a legal requirement to access the messages, the key could then be recovered from any ciphertext message and handed over by the third party under a key disclosure law. This configuration is known as a key escrow system. 
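As a rough sketch of the escrow idea, rather than the Clipper chip's actual mechanism or message format, the example below encrypts a message under a fresh session key and then attaches that session key wrapped both for the receiver and for an escrow agent; every name and parameter here is an assumption made for illustration.

```python
# A rough, assumed sketch of key escrow (not the real Clipper chip format):
# the session key is wrapped for the receiver AND for an escrow agent, so the
# agent can recover the plaintext if legally compelled to do so.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

receiver_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Encrypt the message body under a fresh symmetric session key.
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
body = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

# The transmitted packet carries the session key twice: once for each party.
packet = {
    "nonce": nonce,
    "body": body,
    "key_for_receiver": receiver_priv.public_key().encrypt(session_key, OAEP),
    "key_for_escrow": escrow_priv.public_key().encrypt(session_key, OAEP),
}

# The receiver reads the message normally...
k = receiver_priv.decrypt(packet["key_for_receiver"], OAEP)
print(AESGCM(k).decrypt(packet["nonce"], packet["body"], None))

# ...but the escrow agent can recover the very same session key on request.
k2 = escrow_priv.decrypt(packet["key_for_escrow"], OAEP)
print(AESGCM(k2).decrypt(packet["nonce"], packet["body"], None))
```

The point is simply that, by design, a third party holds a path back to the plaintext, which is where the trust problems discussed next come from.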

 

However, this proposal was flawed in a number of ways: the additional keys could be used to violate privacy or exploited by law enforcement for surveillance, which introduced a lot of trust elements into the cryptosystem. In particular, Yung found at the time that the authentication described above did not bind the ciphertext to the right key to be opened. Since then, cryptographers have been working to construct a means by which a fair system can be enacted. One example is a system with three parties: a user, who sends encrypted messages; a law enforcement body, which may request access to messages; and an independent adjudicator, who arbitrates whether a request from the law enforcement body is fair.  

 

But what happens if the dictator, rather than a law-abiding government, acts in this system? They can act as both law enforcement and adjudicator, giving themselves the power to overcome the system in place and gain access to encrypted messages. Likewise, with a key escrow system, they can force third parties to reveal the private keys. So, how can we send encrypted messages without a dictator accessing them in our pre-existing cryptographic systems? 

 

 

As we have seen in the previous example, if the dictator has the private key required to read the message, we cannot get around their requests. This has led Yung and his research partners to think more about the keys in use. They propose a second key that the dictator has no knowledge of. So, their system has two modes: a regular case, and the researchers’ new anamorphic case. In the regular case, Alice encrypts her message to Bob using his public key. Bob can then decrypt it using his private key, just like the example between you and your friend we considered earlier. However, if the dictator can get hold of the encrypted message, they can force Bob to give them his private key, thus gaining access to the message. 

 

However, in the anamorphic case, Alice uses an anamorphic public key to encrypt her message. The anamorphic key is associated with two private keys: a regular private key, like that in the regular case discussed above, and an additional secret anamorphic private key. When Alice uses Bob’s anamorphic public key to encrypt her message, she generates a ciphertext that carries two messages, which we can call message 1 and message 2. If the regular private key is applied to the ciphertext, we reveal message 1; if the secret anamorphic private key is applied, we reveal message 2. So, if Bob is forced to hand over a private key to the dictator, he can hand over only the regular private key, which reveals just message 1. Only the intended recipient, in this case Bob, has access to the secret private key, and therefore to message 2. This relies on the ciphertext in the anamorphic case being effectively identical to one produced in the regular case, and on a pair of anamorphic public and private keys being indistinguishable from a regular pair. This means the dictator does not know that a second message and a secret private key exist, so the private message, message 2, can be securely sent between Alice and Bob. 
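The authors' construction is more general than we can show here, but a toy sketch, which is our own illustrative assumption and not the scheme from the paper, can convey the flavour of one regular-looking ciphertext carrying two readable messages. It exploits the fact that RSA-OAEP encryption is randomised: Alice simply re-encrypts message 1 until a keyed hash of the resulting ciphertext happens to equal one byte of message 2, a byte only someone holding the shared anamorphic key knows how to read off.

```python
# Toy sketch of an anamorphic-style covert channel (NOT the paper's scheme):
# message 1 is a normal RSA-OAEP encryption; one byte of message 2 rides along
# in a keyed hash of the ciphertext, readable only with the anamorphic key.
import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def covert_byte_of(anamorphic_key: bytes, ciphertext: bytes) -> int:
    # The first byte of an HMAC over the ciphertext carries the hidden byte.
    return hmac.new(anamorphic_key, ciphertext, hashlib.sha256).digest()[0]

def anamorphic_encrypt(bob_public_key, message1: bytes, hidden_byte: int,
                       anamorphic_key: bytes) -> bytes:
    # RSA-OAEP is randomised, so re-encrypting yields a fresh ciphertext each
    # time; keep trying until its keyed hash spells out the hidden byte
    # (about 256 attempts on average).
    while True:
        ct = bob_public_key.encrypt(message1, OAEP)
        if covert_byte_of(anamorphic_key, ct) == hidden_byte:
            return ct

bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
shared_anamorphic_key = b"secret Alice and Bob agreed in advance"

ct = anamorphic_encrypt(bob_private.public_key(), b"innocuous message 1",
                        0x42, shared_anamorphic_key)

# Anyone with Bob's regular private key (including the dictator) sees message 1.
print(bob_private.decrypt(ct, OAEP))                  # b'innocuous message 1'

# Bob, who also holds the anamorphic key, recovers the hidden byte of message 2.
print(hex(covert_byte_of(shared_anamorphic_key, ct)))  # 0x42
```

Because the ciphertext handed over is still just an ordinary encryption of message 1, a dictator who compels the regular private key sees nothing unusual; the obvious price in this toy version is that message 2 trickles out only a byte per ciphertext.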

 

 

Yung and his research partners highlight the need for anamorphic encryption to work within existing systems: the ciphertexts in the regular and anamorphic cases have to look the same, so that they don’t arouse the dictator’s suspicion. We can’t start adding additional strings to ciphertexts, as this is unappealing and creates extra work for users in the normal case, who have no interest in keeping a second message secret. Instead, the researchers highlight the importance of incorporating anamorphic encryption within systems that already have a second channel. Then the second channel will not create suspicion, but can be used as a covert channel for the anamorphic encryption, without any detriment to normal users of the system.  

 

The researchers highlight how anamorphic encryption can apply to a variety of systems. For example, what if we remove the sender-freedom assumption? Say that Alice is in a position where she may be forced to send a fake message. She could privately set up a shared anamorphic key with Bob in advance, so that if she sent him a fake message, the ciphertext would carry both the fake message and the set of coin tosses, the randomness used to create the ciphertext. If Bob decrypts the coin tosses with the shared key, he recovers the private message that Alice actually wished to send him. While this does require setting up the shared key in advance, it highlights how the anamorphic protocol can be adapted to account for limitations on the sender and the recipient, all while overcoming the impositions of the dictator.  
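As a simplified illustration of that coin-toss idea, again an assumption of ours rather than the exact protocol in the paper, the sketch below lets Alice choose the random nonce of an otherwise normal encryption so that the nonce is itself an encryption, under the shared anamorphic key, of the message she really wants Bob to read.

```python
# Assumed, simplified sketch of hiding the real message in the 'coin tosses'
# (here, the nonce) of an otherwise normal encryption; not the paper's protocol.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

regular_key = os.urandom(16)     # a key the dictator can demand or already knows
anamorphic_key = os.urandom(16)  # secretly shared by Alice and Bob in advance

def aes_block(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    # Encrypt or decrypt one 16-byte block with AES, used as a pseudorandom permutation.
    ctx = Cipher(algorithms.AES(key), modes.ECB())
    op = ctx.decryptor() if decrypt else ctx.encryptor()
    return op.update(block) + op.finalize()

# Alice is forced to send a fake message, but hides a 16-byte real one inside
# the coin tosses (the CTR nonce) of that very encryption.
real_message = b"Ignore last msg!"                # exactly 16 bytes
nonce = aes_block(anamorphic_key, real_message)   # looks like random coins
enc = Cipher(algorithms.AES(regular_key), modes.CTR(nonce)).encryptor()
wire = nonce + enc.update(b"All is well, trust the dictator") + enc.finalize()

# Anyone with the regular key, including the dictator, sees the fake message.
n, body = wire[:16], wire[16:]
dec = Cipher(algorithms.AES(regular_key), modes.CTR(n)).decryptor()
print(dec.update(body) + dec.finalize())

# Bob, who also holds the anamorphic key, recovers the real message from the coins.
print(aes_block(anamorphic_key, n, decrypt=True))  # b'Ignore last msg!'
```

To anyone without the anamorphic key, the nonce looks like ordinary random coins, so the forced fake message is all the dictator can see.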

 

 

Overall, Yung and his research partners highlight how dictators could previously enforce or exploit a number of key escrow systems. They conceptualise anamorphic encryption systems, using both the regular channel, for users who are not concerned about their messages being accessed, and an anamorphic channel with an additional secret private key. This allows both a regular and a secret anamorphic message to be sent, and a regular key to be turned over to the dictator if necessary, for example via an escrow process, without revealing the second secret message. This holds the potential to overcome the Crypto Wars dilemma and demonstrate its futility: even if dictators or governments hold the keys to strongly encrypted information, they can access only message 1, while the anamorphic channel carrying message 2 still offers privacy in our communications for the future. 


 

That’s all for this episode – thanks for listening, and stay subscribed to Research Pod for more of the latest science.  

 

See you again soon.