readthrough updates
commit 421c003bfb (parent 49d281eb2f)
@@ -150,7 +150,7 @@ This goal was to ensure the Client could use its network interface as if it real
\subsection{Security}
TODO: not yet implemented.
\subsection{More Bandwidth over Two Equal Connections}
@@ -43,13 +43,13 @@ TODO
% --------------------------------- UDP ------------------------------------ %
\section{UDP}
To increase the performance of the system, I implemented a UDP method of tunnelling packets, available alongside the TCP method discussed earlier. Using UDP datagrams instead of a TCP flow is a two-pronged approach to increasing performance. Firstly, it removes the issue of head-of-line blocking, as the protocol does not resend packets that are not received. Secondly, the datagram design can include less per-packet header overhead, increasing the efficiency of transmitting packets.
The goal was to create a UDP packet structure that allows for congestion control (and implicit flow control), without the other benefits that TCP provides. This is because the other features of TCP are unnecessary for this project: they are covered by protocols above Layer 3, which function regardless of the tunnelling.
\subsection{Packet Structure}
The packet structure was designed to allow for effective congestion control and nothing else. This is achieved with a simple 3-part, 12-byte header (shown in figure \ref{fig:udp-packet-structure}). Similarly to TCP, each packet contains an acknowledgement number (ACK) and a sequence number (SEQ). These serve the same purpose as in TCP: providing a method for a congestion controller to know which packets have been received by its partner. However, they are implemented slightly differently. TCP sequence numbers are based on bytes, and as such the sequence number of a packet is the sequence number of the first byte that it contains. As this protocol is designed for transmitting whole packets, losing part of a packet does not make sense. Packets will also never be split, as the protocol does not support partial transmission, and as such they are atomic. This means that the sequence number can safely represent an individual packet, as opposed to a byte.
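To make the layout concrete, the sketch below shows one way a 12-byte, three-field header could be marshalled in Go. The field names, their ordering and the use of network byte order are assumptions for illustration only, not the project's actual wire format (which is defined by figure \ref{fig:udp-packet-structure}).

\begin{minted}{go}
package tunnel // illustrative package name

import "encoding/binary"

// Illustrative 12-byte header of three 32-bit fields. Field names, field
// order and byte order are assumptions, not the real wire format.
type header struct {
	Ack  uint32 // highest continuously received sequence number
	Nack uint32 // highest sequence number known to be lost
	Seq  uint32 // sequence number of this packet
}

// marshal encodes the header into exactly 12 bytes.
func (h header) marshal() []byte {
	buf := make([]byte, 12)
	binary.BigEndian.PutUint32(buf[0:4], h.Ack)
	binary.BigEndian.PutUint32(buf[4:8], h.Nack)
	binary.BigEndian.PutUint32(buf[8:12], h.Seq)
	return buf
}
\end{minted}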
\begin{figure}
\centering
@@ -79,9 +79,9 @@ The packet structure was decided to allow for effective congestion control and n
In addition to these two fields, a further Negative Acknowledgement (NACK) field is required. Due to TCP's promise of reliable transmission, negative acknowledgements can never occur in TCP: either the sender must resend the packet in question, or the flow is terminated. In my protocol, however, the receiver needs a method to provide a discontinuous stream of acknowledgements. If this were attempted without a separate NACK number, each ACK number would have to be sent and received individually. This decreases the efficiency and correctness of ACKs, both in terms of missing packets and in having to send at least one packet for every packet received.
The benefit of a NACK is demonstrated in figure \ref{fig:sequence-ack-nack-comparison}. Figure \ref{fig:sequence-ack-continuous} shows a series of ACKs for a perfect set of sequence numbers. This case is rather uninteresting, as there is little point in ACKing packets if none will ever be lost, but it is a situation that can occur for large portions of a flow, given good congestion control and reliable networking. Figure \ref{fig:sequence-ack-discontinuous} shows the same ACK system for a stream of sequence numbers with one missing. It can be seen that the sender and receiver reach an impasse: the receiver cannot increase its ACK number, as it has not received packet 5, and the sender cannot send more packets, as its window is full. The only move is for the receiver to increase its ACK number and rely on the sender realising that it took too long to acknowledge the missing packet, though this is unreliable at best.
Figure \ref{fig:sequence-ack-nack-discontinuous} shows how this same situation can be responded to with a NACK field. After the receiver has concluded that the intermediate packet(s) were lost in transit (a function of RTT, to be discussed later), it updates the NACK field to the highest lost packet, allowing the ACK field to be increased from one after the lost packet. This solution resolves the deadlock of not being able to increase the ACK number, without requiring reliable delivery.
\begin{figure}
\centering
@@ -144,7 +144,7 @@ In implementing the UDP based protocol, I spent some time reading packet data in
\subsection{Congestion Control}
To allow for flexibility in congestion control, I started by building an interface (shown in figure \ref{fig:congestion-control-interface}) for congestion controllers. The aim of the interface is to provide the controller with every update that could be used for congestion control, while also giving it every opportunity to set an ACK or NACK on a packet.
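The definitive interface is the one included from \texttt{congestion.go} in figure \ref{fig:congestion-control-interface} below. Purely as an illustration of the shape this description implies, a controller interface might resemble the sketch here; every name apart from \mintinline{go}{Sequence() uint32} (discussed below) is an assumption, not the project's own.

\begin{minted}{go}
package tunnel // illustrative package name

// congestionController is a hypothetical sketch only; the project's real
// definition is included from Implementation/Samples/congestion.go.
type congestionController interface {
	// Sequence blocks until another packet may be sent, returning its SEQ.
	Sequence() uint32

	// Updates the controller could use to adjust its window.
	ReceivedAck(ack uint32)
	ReceivedNack(nack uint32)
	ReceivedSequence(seq uint32)

	// Opportunities to set an ACK or NACK on an outgoing packet.
	NextAck() uint32
	NextNack() uint32
}
\end{minted}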
\begin{figure}
\inputminted{go}{Implementation/Samples/congestion.go}
@@ -152,24 +152,63 @@ To allow for flexibility in congestion control, I started by building an interfa
\label{fig:congestion-control-interface}
\end{figure}
A benefit of the chosen language (Go\footnote{\url{https://golang.org}}) is its powerful management of threads of execution, or Goroutines. This is demonstrated in the interface, particularly the method \mintinline{go}{Sequence() uint32}. This method expects a congestion controller to block until it can provide the packet with a sequence number for dispatch. Given that the design runs each producer and consumer in a separate Goroutine, this is an effective way to synchronise packet sending with the congestion controller, and should work for any potential method of congestion control.
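As an illustration of this pattern (not the project's actual controller; the type and all details below are assumptions), a controller can pace sending Goroutines through a channel whose capacity is the congestion window, so \mintinline{go}{Sequence()} blocks exactly when the window is full.

\begin{minted}{go}
package tunnel // illustrative package name

import "sync"

// channelController is illustrative only: it shows how a blocking Sequence()
// can be built on Go channels, with senders paced by a token per packet in
// flight.
type channelController struct {
	tokens chan struct{} // capacity equals the congestion window
	mu     sync.Mutex
	seq    uint32
}

func newChannelController(window int) *channelController {
	c := &channelController{tokens: make(chan struct{}, window)}
	for i := 0; i < window; i++ {
		c.tokens <- struct{}{} // start with a full window of tokens
	}
	return c
}

// Sequence blocks the calling Goroutine until the window has space, then
// hands out the next sequence number.
func (c *channelController) Sequence() uint32 {
	<-c.tokens
	c.mu.Lock()
	defer c.mu.Unlock()
	c.seq++
	return c.seq
}

// ReceivedAck returns window space once packets are acknowledged. A real
// controller would also grow or shrink the window here.
func (c *channelController) ReceivedAck(numberAcked int) {
	for i := 0; i < numberAcked; i++ {
		c.tokens <- struct{}{}
	}
}
\end{minted}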
\subsubsection{New Reno}
The first congestion control protocol I implemented is based on TCP New Reno. It is a well understood and powerful congestion control protocol. The pseudocode for the most interesting functions is shown in figure \ref{fig:udp-congestion-newreno-pseudocode}.
\begin{figure}
\begin{minted}{python}
def findAck(start):
    # Advance the ACK while the sequence numbers waiting to be
    # acknowledged remain continuous.
    ack = start
    while acksToSend.Min() == ack + 1:
        ack = acksToSend.PopMin()
    return ack

def updateAckNack(lastAck, lastNack):
    nack = lastNack
    ack = findAck(lastAck)
    if ack == lastAck:
        # No progress was made; check whether the gap has been
        # outstanding for longer than the NACK timeout.
        if acksToSend.Min().IsDelayedMoreThan(NackTimeout):
            nack = acksToSend.Min() - 1
            ack = findAck(acksToSend.PopMin())
    return ack, nack

def ReceivedNack(nack):
    if not nack.IsFresh():
        return
    windowSize /= 2

def ReceivedAck(ack):
    if not ack.IsFresh():
        return
    if slowStart:
        windowSize += numberAcked
    else:
        windowCount += numberAcked
        if windowCount >= windowSize:
            windowSize += 1
            windowCount -= windowSize
\end{minted}
\caption{UDP New Reno pseudocode}
\label{fig:udp-congestion-newreno-pseudocode}
\end{figure}
My implementation of New Reno functions differently from the TCP version, given that it responds with NACKs instead of retransmitting. Updating the ACK is similar to TCP: the ACK sent is the highest ACK available that maintains a continuous stream. The interesting part is when the controller decides to send a NACK. Whenever a hole is seen in the packets waiting to be acknowledged, the delay of the lowest ACK waiting to be sent is checked. If it has been waiting for more than a multiple of the round trip time, presently chosen to be $3 \times RTT$, the NACK is updated to one below the next packet that can be sent, indicating that a packet has been missed. The ACK can then be incremented from the next available packet.
A point of interest is the \mintinline{go}{acksToSend} data structure. It can be seen that three methods are required: \mintinline{go}{Min()}, \mintinline{go}{PopMin()} and \mintinline{go}{Insert()} (in a section of code not shown in the pseudocode). A data structure that implements these methods particularly efficiently is the binary heap, providing Min in $O(1)$ time, with Insert and PopMin in $O(\log n)$ time. Therefore, I implemented a binary heap to store the ACKs to send.
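For illustration, these three operations map directly onto Go's standard \mintinline{go}{container/heap} package; the sketch below shows the shape such a heap could take, though the project's real \mintinline{go}{acksToSend} structure may differ in detail.

\begin{minted}{go}
package tunnel // illustrative package name

import "container/heap"

// ackHeap is an illustrative binary min-heap of sequence numbers.
type ackHeap []uint32

func (h ackHeap) Len() int            { return len(h) }
func (h ackHeap) Less(i, j int) bool  { return h[i] < h[j] }
func (h ackHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *ackHeap) Push(x interface{}) { *h = append(*h, x.(uint32)) }
func (h *ackHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// Min returns the smallest waiting sequence number in O(1); it assumes the
// heap is non-empty.
func (h *ackHeap) Min() uint32 { return (*h)[0] }

// PopMin removes and returns the smallest sequence number in O(log n).
func (h *ackHeap) PopMin() uint32 { return heap.Pop(h).(uint32) }

// Insert adds a sequence number in O(log n).
func (h *ackHeap) Insert(seq uint32) { heap.Push(h, seq) }
\end{minted}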
% ------------------------------- Security --------------------------------- %
\section{Security}
% For the security implementation, I paid careful attention to the work of Wireguard (Donenfeld, “WireGuard.” \cite{donenfeld_wireguard_2017}). Wireguard is a modern, well respected method of securely transferring Layer 3 packets across the Internet.
% However, as Wireguard is a VPN, it provides certain security benefits that are not within the remit of my threat model (section \ref{section:threat-model}). The primary example of this is privacy. When Wireguard, and most VPNs, send a packet, they first encrypt the contents such that the contents of the datagram are only visible to the intended recipient. For this project, encryption will not be necessary, as that would provide privacy above using the modem without this solution. If a user wishes to also have the benefits of an encrypted Internet connection, the transparency of this solution allows existing VPNs to run underneath and provide that. This follows the philosophy of do one thing and do it well.
% The security in this solution will be achieved by using public and private key-pairs to perform a key exchange at the beginning of connections, and then using that key to produce a message authentication code for each packet sent across the connection. To prevent replay of earlier messages, a timestamp will be included within the authenticated section of the message. This timestamp can be used to discard messages sent a certain time earlier than received, reducing the usefulness of replay attacks.
\subsection{Interface}
The security in this solution is achieved by providing a set of interfaces for potential cryptographic systems to implement, namely MAC Generators and Verifiers. These can be seen in figure \ref{fig:message-authenticator-interface}. As with all interfaces, the goal here was to create a flexible but minimal interface.
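The real definitions are included from \texttt{mac.go} in figure \ref{fig:message-authenticator-interface}; purely as an illustrative sketch of a minimal generator and verifier pair, the interfaces could take a shape like the following, with the method names being assumptions rather than the project's own.

\begin{minted}{go}
package tunnel // illustrative package name

// Hypothetical shapes only; the project's actual interfaces are in
// Implementation/Samples/mac.go.
type macGenerator interface {
	// Generate returns an authentication code for the given packet data.
	Generate(data []byte) []byte
}

type macVerifier interface {
	// Verify reports whether mac is a valid code for data.
	Verify(data, mac []byte) bool
}
\end{minted}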
\begin{figure}
\inputminted{go}{Implementation/Samples/mac.go}
@@ -177,9 +216,17 @@ As with congestion control, an interface is provided for MAC Generators and Veri
\label{fig:message-authenticator-interface}
\end{figure}
As far as is possible, the security of the application relies on external libraries. Although an interesting exercise, implementing security algorithms directly from papers is far more likely to result in errors and thus security flaws. Due to this, I will be using trusted and open source libraries for the scheme I have chosen.

\subsection{Symmetric Key Cryptography}
When providing integrity and authentication for a message, there are two main choices: a Message Authentication Code (MAC) or a digital signature.
TODO: Finish this section.
\subsubsection{BLAKE2s}
The shared key algorithm I chose to implement is BLAKE2s~\cite{hutchison_blake2_2013}. It is extremely fast (comparable to MD5) while remaining cryptographically secure. Further to this, BLAKE2s is available in the Go crypto library\footnote{\url{https://github.com/golang/crypto}}, which is a trusted and open source implementation.
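As a sketch of how such an implementation could look (the type and method names are assumptions chosen to match the generator/verifier pattern above, not the project's code), a keyed BLAKE2s authenticator built on the library's \mintinline{go}{blake2s.New256} might be written as follows.

\begin{minted}{go}
package tunnel // illustrative package name

import (
	"crypto/subtle"

	"golang.org/x/crypto/blake2s"
)

// blake2sAuthenticator is an illustrative keyed-BLAKE2s MAC; its shape is an
// assumption rather than the project's actual implementation.
type blake2sAuthenticator struct {
	key []byte // shared key of at most 32 bytes
}

// Generate returns a 32-byte MAC over data using keyed BLAKE2s-256.
func (a blake2sAuthenticator) Generate(data []byte) []byte {
	h, err := blake2s.New256(a.key)
	if err != nil {
		panic(err) // only possible if the key is longer than 32 bytes
	}
	h.Write(data)
	return h.Sum(nil)
}

// Verify recomputes the MAC and compares it in constant time.
func (a blake2sAuthenticator) Verify(data, mac []byte) bool {
	return subtle.ConstantTimeCompare(a.Generate(data), mac) == 1
}
\end{minted}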
@@ -21,11 +21,11 @@ The second focus is the direct interaction between the Local Portal and the Remo
These security problems will be considered in the context of the success criterion: provide security no worse than not using this solution at all. That is, the security against the threats in the first case should be identical or stronger, and no additional vectors of attack should be introduced in the second.
\subsection{Transparent Security}
A convenient factor of the Internet being an interconnected set of smaller networks is that there are very few guarantees of security. At Layer 3, none of anonymity, integrity, privacy or freshness is provided once a packet leaves private ranges, so it is up to the application to ensure its own security on top of this lack of guarantees. For the purposes of this software, this is very useful: if there are no guarantees to maintain, applications can be expected to act correctly regardless of how easy it is for these cases to occur.
Therefore, to maintain the same level of security for applications, this project can simply guarantee that the packets which leave the Remote Portal are the same as those that came in. By doing this, all of the security implemented above Layer 3 will be maintained. This means that whether a user is accessing insecure websites over HTTP, running a corporate VPN connection or sending encrypted emails, the security of these applications will be unaltered.
\subsection{Portal to Portal Communication}
@@ -33,7 +33,7 @@ Therefore, to maintain the same level of security for applications, this project
Many Internet connections have data caps or charge for additional bandwidth. In a standard network, control over your cap is physical: if someone wished to increase the load, they would have to physically connect to the modem.
Due to this, it is important that care is taken with regard to cost. The difference is that rather than needing physical access to send data through your connection, all one needs is an Internet connection. A conceivable threat is for someone to send packets to your Remote Portal from their own connection, causing the Portal to forward these packets, and thus using your limited or costly bandwidth.
\subsubsection{Denial of Service}
\label{subsubsection:threats-denial-of-service}
@@ -66,7 +66,7 @@ Due to this, it is important that care is taken with regards to cost. The differ
\label{fig:bad-actor-packet-loss}
\end{figure}
If a malicious actor can fool the Remote Portal into sending them a portion of your packets, they are immediately performing an effective Denial of Service on any tunnelled flows relying on loss-based congestion control. In figure \ref{fig:fast-bad-actor-packet-loss}, it can be seen that a bad actor with a significantly faster connection than you can cause huge packet loss if the Remote Portal accepts them as a valid Local Portal connection.
\begin{figure}
\begin{equation}
@@ -76,10 +76,9 @@ If a malicious actor can fool the Remote Portal into sending them a portion of y
\label{fig:tcp-throughput}
\end{figure}
However, of much more relevance is figure \ref{fig:slow-bad-actor-packet-loss}. Given the TCP throughput equation, shown in figure \ref{fig:tcp-throughput}, there is an inverse relation between packet loss and the throughput of any TCP connection. Assuming a Round Trip Time of $20ms$ and a Maximum Segment Size of $1460$ bytes, packet loss of $25\%$ limits the maximum TCP throughput to approximately $1.17Mbps$. In fact, due to this relation, a packet loss of even $1\%$ leads to a maximum throughput of approximately $5.84Mbps$. This means that even a small packet loss can have a drastic effect on the performance of the connection as a whole, and thus makes Remote Portals an effective target for Denial of Service attacks. Care must be taken that all Local Portal connections are from the intended subject.
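To make the arithmetic explicit, and assuming the simplified form of the equation in figure \ref{fig:tcp-throughput} (a Mathis-style bound with the constant taken as 1, which reproduces the quoted figures):

\begin{align*}
\text{Throughput} &\approx \frac{MSS}{RTT\sqrt{p}}\\
p = 0.25:&\quad \frac{1460 \times 8\ \mathrm{bits}}{0.02\ \mathrm{s} \times \sqrt{0.25}} \approx 1.17\ \mathrm{Mbps}\\
p = 0.01:&\quad \frac{1460 \times 8\ \mathrm{bits}}{0.02\ \mathrm{s} \times \sqrt{0.01}} \approx 5.84\ \mathrm{Mbps}
\end{align*}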
\subsection{Privacy}
Though the packets leaving a modem have no reasonable expectation of privacy, having the packets enter the Internet at two points does increase this exposure. For example, if a malicious actor convinces the Remote Portal that they are a valid connection from the Local Portal, a portion of packets will be sent to them. However, as a fortunate side effect, this method of attempted sniffing would cause a significant Denial of Service to any congestion-controlled links based on packet loss, due to the amount of packet loss caused. Therefore, as long as it is ensured that each packet is not sent to multiple places, privacy should be maintained at a similar level to simple Internet access.