diff --git a/0_Proforma/proforma.tex b/0_Proforma/proforma.tex index d2a8958..9bb0c52 100644 --- a/0_Proforma/proforma.tex +++ b/0_Proforma/proforma.tex @@ -6,7 +6,7 @@ Candidate Number: & 2373A \\ Project Title: & A Multi-Path Bidirectional Layer 3 Proxy \\ Examination: & Computer Science Tripos - Part II, 2021 \\ - Word Count: & 12057 \\ + Word Count: & 11894 \\ Line Count: & 3564\footnotemark \\ Project Originator: & The dissertation author \\ Supervisor: & Michael Dodson diff --git a/2_Preparation/preparation.tex b/2_Preparation/preparation.tex index 0f55d94..1e5f9a8 100644 --- a/2_Preparation/preparation.tex +++ b/2_Preparation/preparation.tex @@ -118,7 +118,7 @@ The negatives of using C++ are demonstrated in the sample script, given in Figur Rust is memory safe and thread safe, solving the latter issues with C++. Rust also has a minimal runtime, allowing for an execution speed comparable to C or C++. The Rust sample is given in Figure \ref{fig:rust-tun-sample}, and is pleasantly concise. -For the purposes of this project, Rust's youthfulness is a negative. This is two-faceted: Integrated Development Environment (IDE) support and crate stability (crates are the Rust mechanism for package management). Firstly, the IDE support for Rust in my IDEs of choice is provided via a plugin to IntelliJ, and is not as well supported as the other languages. Secondly, the crate available for TUN support (tun-tap\footnote{\url{https://docs.rs/tun-tap/}}) does not yet provide a stable Application Programming Interface (API), which was noticed during test program development. Between writing the program initially and re-testing when documenting it, the crate API had changed to the point where my script no longer type checked. Further, the old version had disappeared, and thus I was left with a program that didn't compile or function. 
Although I could write the API for TUN interaction myself, the safety benefits of Rust would be less pronounced, as the direct systems interaction requires \texttt{unsafe} code, which bypasses parts of the type-checker and borrow-checker, leading to an increased potential for bugs. +For the purposes of this project, Rust's youthfulness is a negative. This is two-faceted: Integrated Development Environment (IDE) support and crate stability (crates are the Rust mechanism for package management). Firstly, the IDE support for Rust in my IDEs of choice is provided via a plugin to IntelliJ, and is not as well supported as the other languages. Secondly, the crate available for TUN support (tun-tap\footnote{\url{https://docs.rs/tun-tap/}}) does not yet provide a stable Application Programming Interface (API), which was noticed during test program development. Between writing the program initially and re-testing when documenting it, the crate API had changed to the point where my script no longer type checked. Further, the old version had disappeared, and thus I was left with a program that did not compile or function. Although I could write the API for TUN interaction myself, the safety benefits of Rust would be less pronounced, as the direct systems interaction requires \texttt{unsafe} code, which bypasses parts of the type-checker and borrow-checker, leading to an increased potential for bugs. \subsubsection{Go} diff --git a/3_Implementation/implementation.tex b/3_Implementation/implementation.tex index e4459b0..a029ffb 100644 --- a/3_Implementation/implementation.tex +++ b/3_Implementation/implementation.tex @@ -14,7 +14,7 @@ Implementation of the proxy is in two parts: software that provides a multipath layer 3 tunnel between two hosts, and the system configuration necessary to utilise this tunnel as a proxy. An overview of the software and system is presented in Figure \ref{fig:dataflow-overview}. -This chapter will detail this implementation in three sections. 
The software will be described in sections \ref{section:implementation-packet-transport} and \ref{section:implementation-software-structure}. Section \ref{section:implementation-packet-transport} details the implementation of both TCP and UDP methods of transporting the tunnelled packets between the hosts. Section \ref{section:implementation-software-structure} explains the software's structure and dataflow. The system configuration will be described in section \ref{section:implementation-system-configuration}, along with a discussion of some of the oddities of multipath routing, such that a reader would have enough knowledge to implement the proxy given the software. Figure \ref{fig:dataflow-overview} shows the path of packets within the proxy. As each section discusses an element of the program, where it fits within this diagram is detailed. +This chapter details this implementation in three sections. The software will be described in Sections \ref{section:implementation-packet-transport} and \ref{section:implementation-software-structure}. Section \ref{section:implementation-packet-transport} details the implementation of both TCP and UDP methods of transporting the tunnelled packets between the hosts. Section \ref{section:implementation-software-structure} explains the software's structure and dataflow. The system configuration will be described in Section \ref{section:implementation-system-configuration}. Figure \ref{fig:dataflow-overview} shows the path of packets within the proxy, and it will be referenced throughout these sections. \begin{sidewaysfigure} \includegraphics[width=\textheight]{overview.png} @@ -28,18 +28,18 @@ This chapter will detail this implementation in three sections. The software wil \section{Packet Transport} \label{section:implementation-packet-transport} -As shown in Figure \ref{fig:dataflow-overview}, the interfaces through which transport for packets is provided between the two hosts are producers and consumers. 
A transport pair is between a consumer on one proxy and a producer on the other, where packets enter the consumer and exit the corresponding producer. Two methods for producers and consumers are implemented: TCP and UDP. As the greedy load balancing of this proxy relies on congestion control, TCP provided a base for a proof of concept, while UDP expands on this proof of concept to remove unnecessary overhead and improve performance in the case of TCP-over-TCP tunnelling. This section discusses, in section \ref{section:implementation-tcp}, the method of transporting discrete packets across the continuous byte stream of a TCP flow, before describing why this solution is not ideal. Then, in section \ref{section:implementation-udp}, it goes on to discuss adding congestion control to UDP datagrams, while avoiding ever retransmitting a proxied packet. +As shown in Figure \ref{fig:dataflow-overview}, packets are transported between the two hosts through producer and consumer interfaces. A transport pair is between a consumer on one proxy and a producer on the other, where packets enter the consumer and exit the corresponding producer. Two methods for producers and consumers are implemented: TCP and UDP. As the greedy load balancing of this proxy relies on congestion control, TCP provided an initial proof-of-concept, while UDP builds on it to remove unnecessary overhead and improve performance in the case of TCP-over-TCP tunnelling. Section \ref{section:implementation-tcp} discusses the method of transporting discrete packets across the continuous byte stream of a TCP flow, before describing why this solution is not ideal. Then, Section \ref{section:implementation-udp} goes on to discuss adding congestion control to UDP datagrams, while avoiding retransmitting a proxied packet.
\subsection{TCP} \label{section:implementation-tcp} The requirements for greedy load balancing to function are simple: flow control and congestion control. TCP provides both of these, so was an obvious initial solution. However, TCP also provides unnecessary overhead, which will go on to be discussed further. -A TCP flow cannot be connected directly to a TUN adapter, as the TUN adapter accepts and outputs discrete and formatted IP packets while the TCP connection sends a stream of bytes. To resolve this, each packet sent across a TCP flow is prefixed with the length of the packet. When a TCP consumer is given a packet to send, it first sends the 32-bit length of the packet across the TCP flow, before sending the packet itself. The corresponding TCP producer then reads these 4 bytes from the TCP flow, before reading the number of bytes specified by the received number. This enables punctuation of the stream-oriented TCP flow into a packet-carrying connection. +A TCP flow cannot be connected directly to a TUN adaptor, as the TUN adaptor accepts and outputs discrete and formatted IP packets while the TCP connection sends a stream of bytes. To resolve this, each packet sent across a TCP flow is prefixed with the length of the packet. When a TCP consumer is given a packet to send, it first sends the 32-bit length of the packet across the TCP flow, before sending the packet itself. The corresponding TCP producer then reads these 4 bytes from the TCP flow, before reading the number of bytes specified by the received number. This enables punctuation of the stream-oriented TCP flow into a packet-carrying connection. -However, using TCP to tunnel TCP packets (known as TCP-over-TCP) can cause a degradation in performance in non-ideal circumstances \citep{honda_understanding_2005}. Further, using TCP to tunnel IP packets provides a superset of the required guarantees, in that reliable delivery and ordering are guaranteed. 
Reliable delivery can cause a decrease in performance for tunnelled flows which do not require reliable delivery, such as a live video stream - a live stream does not wish to wait for a packet to be redelivered from a portion that is already played, and thus will spend longer buffering than if it received the up to date packets instead. Ordering can limit performance when tunnelling multiple streams, as a packet for a phone call could already be received, but instead has to wait in a buffer for a packet for a download to arrive, increasing latency unnecessarily. +However, using TCP to tunnel TCP packets (TCP-over-TCP) can cause a degradation in performance \citep{honda_understanding_2005}. Further, using TCP to tunnel IP packets provides a superset of the required guarantees, in that reliable delivery and ordering are guaranteed. Reliable delivery can cause a decrease in performance for tunnelled flows which may not require reliable delivery, such as a live video stream. Ordering can limit performance when tunnelling multiple streams, as a packet for a phone call could already be received, but instead has to wait in a buffer for a packet for an unrelated download to arrive. -Although the TCP implementation provides an excellent proof of concept and basic implementation, work moved to a second UDP implementation, aiming to solve some of these problems. However, the TCP implementation is functionally correct, so is left as an option, furthering the idea of flexibility maintained throughout this project. In cases where a connection that suffers particularly high packet loss is combined with one which is more stable, TCP could be employed on the high loss connection to limit overall packet loss. The effectiveness of such a solution would be implementation specific, so is left for the architect to decide. +Although the TCP implementation provides an excellent proof-of-concept, work moved to a second UDP implementation, aiming to solve some of these problems. 
However, the TCP implementation is functionally correct; in cases where a connection that suffers particularly high packet loss is combined with one which is more stable, TCP could be employed on the high-loss connection to limit overall packet loss. The effectiveness of such a solution would be implementation-specific, so is left for the architect to decide. % --------------------------------- UDP ------------------------------------ % \subsection{UDP} @@ -152,12 +152,12 @@ Congestion control is one of the main points for tests in the repository. The New \section{Software Structure} \label{section:implementation-software-structure} -This section details the design decisions behind the application structure, and how it fits into the systems where it will be used. Much of the focus is on the flexiblity of the interfaces to future additions, while also describing the concrete implementations available with the software as of this work. +This section details the design decisions behind the application structure, and how it fits into the systems where it will be used. Much of the focus is on keeping the interfaces flexible for future additions, while also describing the concrete implementations available with the software as of this work. % ---------------------- Running the Application --------------------------- % \subsection{Running the Application} -Initially, the application suffered from a significant race condition when starting. The application followed a standard flow, where it created a TUN adapter to receive IP packets and then began proxying the packets from/to it. However, when running the application, no notification was received when this TUN adapter became available. As such, any configuration completed on the TUN adapter was racing with the TUN adapter's creation, resulting in many start failures. +Initially, the application suffered from a significant race condition when starting.
The application followed a standard flow, where it created a TUN adaptor to receive IP packets and then began proxying the packets from/to it. However, when running the application, no notification was received when this TUN adaptor became available. As such, any configuration completed on the TUN adaptor was racing with the TUN adaptor's creation, resulting in many start failures. The software now runs in much the same way as other daemons you would launch, leading to a similar experience as other applications. The primary inspiration for the functionality of the application is Wireguard \citep{donenfeld_wireguard_2017}, specifically \verb'wireguard-go'\footnote{\url{https://github.com/WireGuard/wireguard-go}}. To launch the application, the following shell command is used: @@ -178,9 +178,9 @@ proxy = new_proxy(c, t) proxy.run() \end{minted} -Firstly, the application validates the configuration, allowing an early exit if misconfigured. Then the TUN adapter is created. This TUN adapter and the configuration are handed to a duplicate of the process, which sees them and begins running the given proxy. This allows the parent process to exit, while the background process continues running as a daemon. +Firstly, the application validates the configuration, allowing an early exit if misconfigured. Then the TUN adaptor is created. This TUN adaptor and the configuration are handed to a duplicate of the process, which sees them and begins running the given proxy. This allows the parent process to exit, while the background process continues running as a daemon. -By exiting cleanly and running the proxy in the background, the race condition is avoided. The exit is a notice to the launcher that the TUN adapter is up and ready, allowing for further configuration steps to occur. Otherwise, an implementation specific signal would be necessary to allow the launcher of the application to move on, which conflicts with the requirement of easy future platform compatibility. 
+By exiting cleanly and running the proxy in the background, the race condition is avoided. The exit is a notice to the launcher that the TUN adaptor is up and ready, allowing for further configuration steps to occur. Otherwise, an implementation specific signal would be necessary to allow the launcher of the application to move on, which conflicts with the requirement of easy future platform compatibility. % ------------------------------ Security ---------------------------------- % \subsection{Security} @@ -250,7 +250,7 @@ A directory tree of the repository is provided in Figure \ref{fig:repository-str .3 replay\DTcomment{Replay protection}. .3 shared\DTcomment{Shared errors}. .3 tcp\DTcomment{TCP flow transport}. - .3 tun\DTcomment{TUN adapter}. + .3 tun\DTcomment{TUN adaptor}. .3 udp\DTcomment{UDP datagram transport}. .4 congestion\DTcomment{Congestion control methods}. .3 .drone.yml\DTcomment{CI specification}. @@ -270,7 +270,7 @@ A directory tree of the repository is provided in Figure \ref{fig:repository-str The software portion of this proxy is entirely symmetric, as can be seen in Figure \ref{fig:dataflow-overview}. However, the system configuration diverges, as each side of the proxy serves a different role. Referring to Figure \ref{fig:dataflow-overview}, it can be seen that the kernel routing differs between the two nodes. Throughout, these two sides have been referred to as the local proxy and the remote proxy, with the local in the top left and the remote in the bottom right. -As the software portion of this application is implemented in user-space, it has no control over the routing of packets. Instead, a virtual interface is provided, and the kernel is instructed to route relevant packets to/from this interface. In sections \ref{section:implementation-remote-proxy-routing} and \ref{section:implementation-local-proxy-routing}, the configuration for routing the packets for the remote proxy and local proxy respectively are explained. 
Finally, in section \ref{section:implementation-multi-interface-routing}, some potentially unexpected behaviour of using devices with multiple interfaces is discussed, such that the reader can avoid some of these pitfalls. Throughout this section, examples will be given for both Linux and FreeBSD. Though these examples are provided, they are one of many methods of achieving the same results. +As the software portion of this application is implemented in user-space, it has no control over the routing of packets. Instead, a virtual interface is provided, and the kernel is instructed to route relevant packets to/from this interface. In Sections \ref{section:implementation-remote-proxy-routing} and \ref{section:implementation-local-proxy-routing}, the configuration for routing packets for the remote proxy and the local proxy respectively is explained. Finally, in Section \ref{section:implementation-multi-interface-routing}, some potentially unexpected behaviour of using devices with multiple interfaces is discussed, such that the reader can avoid some of these pitfalls. Throughout this section, examples will be given for both Linux and FreeBSD. These examples are only one of many methods of achieving the same results. \subsection{Remote Proxy Routing} \label{section:implementation-remote-proxy-routing} diff --git a/4_Evaluation/evaluation.tex b/4_Evaluation/evaluation.tex index ebc87f0..7075592 100644 --- a/4_Evaluation/evaluation.tex +++ b/4_Evaluation/evaluation.tex @@ -11,9 +11,7 @@ \graphicspath{{4_Evaluation/Figs/Vector/}{4_Evaluation/Figs/}} \fi -This chapter will discuss the methods used to evaluate my project and the results obtained. The results will be discussed in the context of the success criteria laid out in the Project Proposal (Appendix \ref{appendix:project-proposal}). This evaluation shows that a network using my method of combining Internet connections can see vastly superior network performance to one without.
It will show the benefits to throughput, availability, and adaptability. - -The tests are performed on a Dell R710 Server with the following specifications: +This chapter will discuss the methods used to evaluate my project and the results obtained. The results will be discussed in the context of the success criteria laid out in the Project Proposal (Appendix \ref{appendix:project-proposal}). This evaluation shows that a network using my method of combining Internet connections can see vastly superior network performance to one without. It will show the benefits to throughput, availability, and adaptability. The tests are performed on a Dell R710 Server with the following specifications: \vspace{5mm} \begin{tabular}{ll} @@ -22,13 +20,14 @@ The tests are performed on a Dell R710 Server with the following specifications: \textbf{Kernel} & Linux 5.4 LTS \end{tabular} +\vspace{5mm} When presenting data, error bars are given of the Inter-Quartile Range (IQR) of the data, with the plotted point being the median. \section{Success Criteria} \subsection{Flow Maintained} -The results for whether a flow can be maintained during a single connection loss are achieved using an iperf3 UDP test. The UDP test runs at a fixed bitrate, and measures the quantity of datagrams lost in the transfer. Three tests will be performed on a proxy with two connections: both connections remain up, one connection remains up, and both connections are lost. To satisfy this success criteria, the single connection lost may have a small amount of loss, while losing both connections should terminate the test. +The results for whether a flow can be maintained during a single connection loss are achieved using an iperf3 UDP test. The UDP test runs at a fixed bitrate, and measures the quantity of datagrams lost in transit. Three tests will be performed on a proxy with two connections: both connections remain up, one connection remains up, and both connections are lost. 
To satisfy this success criterion, the test with a single connection lost may show a small amount of loss, while losing both connections should terminate the test. \begin{figure} \begin{Verbatim}[fontsize=\small] Connecting to host X.X.X.X, port 5201 @@ -71,7 +70,7 @@ Connecting to host X.X.X.X, port 5201 \begin{figure} \begin{Verbatim}[fontsize=\small] Connecting to host X.X.X.X, port 5201 -[ 5] local 1.1.1.1 port 51581 connected to 1.1.1.2 port 5201 +[ 5] local X.X.X.Y port 51581 connected to X.X.X.X port 5201 [ ID] Interval Transfer Bitrate Total Datagrams [ 5] 0.00-1.00 sec 129 KBytes 1.05 Mbits/sec 91 [ 5] 1.00-2.00 sec 127 KBytes 1.04 Mbits/sec 90 @@ -82,13 +81,13 @@ Connecting to host X.X.X.X, port 5201 \label{fig:maintained-both-connections-down} \end{figure} -These results are given in figures \ref{fig:maintained-both-connections-alive}, \ref{fig:maintained-one-connections-down} and \ref{fig:maintained-both-connections-down} respectively. The results are as expected: no connection loss handles the 1MBps stream with no problems, and therefore no packets are lost, one connection loss causes slight packet loss ($0.88\%$) but the test is able to continue, and a complete connection loss stalls the test. Given the consistent external IP, this shows that a flow can be maintained through a single connection loss, with only a small loss of packets. This level of packet loss represents some loss on a phone call that lasts approximately 45ms, after which the call continues gracefully. This satisfies the success criteria. +These results are given in Figures \ref{fig:maintained-both-connections-alive}, \ref{fig:maintained-one-connections-down} and \ref{fig:maintained-both-connections-down} respectively. The results are as expected: no connection loss handles the 1MB/s stream with no problems, and therefore no packets are lost; one connection loss causes slight packet loss ($0.88\%$) but the test is able to continue; and a complete connection loss stalls the test.
Given the consistent external IP, this shows that a flow can be maintained through a single connection loss, with only a small loss of packets. This level of packet loss corresponds to approximately 45ms of loss on a phone call, after which the call continues gracefully. This satisfies the success criterion. \subsection{Bidirectional Performance Gains} -To demonstrate that all performance gains are bidirectional, I will provide graphs both inbound and outbound to the client for each performance test executed in this evaluation. This will sufficiently show the performance gains in each case. Inbound tests occur with the test server running on the proxy client and the test client running outside, while outbound tests occur with the test server running outside of the proxy and reaching in. +To demonstrate that all performance gains are bidirectional, I will provide graphs both inbound and outbound to the client for each performance test of the core success criteria. This will sufficiently show the performance gains in each case. Inbound tests occur with the test server running on the proxy client and the test client running outside, while outbound tests place the test server outside and the test client on the proxy client. -To demonstrate this somewhat succinctly, a pair of graphs for the same test in a common case will be shown. To demonstrate that this requirement is satisfied for all cases, for each graph of results presented for the basic success criteria, the graph for the alternative direction will be provided in appendix \ref{appendix:outbound-graphs}. +To demonstrate this somewhat succinctly, the same test will be executed both inbound and outbound, with each plotted as a series on a graph. To show that this holds for all cases, for each graph of results presented for the basic success criteria, the graph for the alternative direction will be provided in Appendix \ref{appendix:outbound-graphs}.
\begin{figure} \centering @@ -97,7 +96,7 @@ To demonstrate this somewhat succinctly, a pair of graphs for the same test in a \label{fig:bidirectional-comparison} \end{figure} -Figure \ref{fig:bidirectional-comparison} has two series for the same set of tests - one for the inbound (reaching in to the client, or download) performance and one for the outbound (the client reaching out, or upload) performance. The graphs align neatly comparatively, however, there is a slight preference to outbound flows. This is due to the outbound flows being spread between interfaces, which avoids waiting for the OS to finish locking an interface quite as often. In each case, both inbound and outbound performance satisfy the success criteria, so this is satisfied. +Figure \ref{fig:bidirectional-comparison} has two series for the same set of tests: one for the inbound (reaching in to the client, or download) performance and one for the outbound (the client reaching out, or upload) performance. The trend is consistent within each direction; however, there is a slight preference for outbound flows. This is due to the outbound flows being spread between interfaces, which avoids waiting for the kernel to finish locking an interface quite as often. In each case, both inbound and outbound performance satisfy the success criterion. \subsection{IP Spoofing} \label{section:ip-spoofing-evaluation} @@ -125,15 +124,15 @@ Demonstrating that the IP of the client can be set to the IP of the remote proxy \label{fig:standard-network-structure-ips} \end{figure} -It is shown that the client in this testing setup shares an IP address with the remote proxy.
The details of this configuration are provided in Section \ref{section:implementation-system-configuration}. This satisfies the success criterion. \subsection{Security} -Success for security involves providing security no worse than a standard connection. This is achieved by using Message Authentication Codes, Replay Protection and extra data for connection authentication, described in detail in Section \ref{section:preparation-security}. Further, Section \ref{section:layered-security} provides an argument that the proxying of packets is made secure by operating in a secure overlay network, such as a VPN. This ensures that security can be maintained, regardless of changes in the security landscape, by composing my proxy with additional security software. +Success for security involves providing security no worse than a standard connection. This is achieved by using Message Authentication Codes, Replay Protection and extra authenticated information for connection authentication, described in detail in Section \ref{section:preparation-security}. Further, Section \ref{section:layered-security} provides an argument that the proxying of packets can be made secure by operating in a secure overlay network, such as a VPN. This ensures that security can be maintained, regardless of changes in the security landscape, by composing my proxy with additional security software. \subsection{More Bandwidth over Two Equal Connections} -To demonstrate that more bandwidth is available over two equal connections through this proxy than one without, I will compare the iperf3 throughput between the two cases.
Further, I will provide a comparison point against a single connection of the combined bandwidth, as this is the maximum theoretical performance of combining the two lower-bandwidth connections. \begin{figure} \centering @@ -151,7 +150,7 @@ To demonstrate that more bandwidth is available over two equal connections throu \label{fig:more-bandwidth-equal} \end{figure} -The results of these tests are given in Figure \ref{fig:more-bandwidth-equal}, for both a pair of 1MBps connections and a pair of 2MBps connections. To satisfy this success criteria, the proxied bar on each graph should exceed the throughput of the direct bar of equal bandwidth. It can be seen in both cases that this occurs, and thus the success criteria is met. The throughput far exceeds the single direct connection, and is closer to the single double bandwidth connection than the single equal bandwidth connection, demonstrating a good portion of the maximum performance is achieved ($92.5\%$ for the 1+1MB/s proxy, and $88.5\%$ for the 2+2MB/s proxy). +The results of these tests are given in Figure \ref{fig:more-bandwidth-equal}, for both a pair of 1MB/s connections and a pair of 2MB/s connections. To satisfy this success criterion, the proxied bar on each graph should exceed the throughput of the direct bar of equal bandwidth. It can be seen in both cases that this occurs, and thus the success criterion is met. The throughput far exceeds the single direct connection, and is closer to the single double-bandwidth connection than the single equal-bandwidth connection, demonstrating that a good portion of the maximum performance is achieved ($92.5\%$ for the 1+1MB/s proxy, and $88.5\%$ for the 2+2MB/s proxy). \section{Extended Goals} @@ -175,7 +174,7 @@ For showing improved throughput over connections which are not equal, three resu \label{fig:more-bandwidth-unequal} \end{figure} -Two sets of results are provided - one for 1MBps and 2MBps connections, and another for 2MBps and 4MBps connections.
In both cases, it can be seen that the proxy with unequal connections lies between the equal connection proxies. Further, it can be seen that both unequal proxied connections lie approximately halfway between the equal pairs. This suggests that the proxy design is successful in being invariant to the static balance of connection throughput. +Two sets of results are provided: one for 1MB/s and 2MB/s connections, and another for 2MB/s and 4MB/s connections. In both cases, it can be seen that the proxy with unequal connections lies between the equal connection proxies. Further, it can be seen that both unequal proxied connections lie approximately halfway between the equal pairs ($74.4\%$ of the maximum for 1+2MB/s, and $75.1\%$ of the maximum for 2+4MB/s). This suggests that the proxy design is successful in being invariant to the static balance of connection throughput. \subsection{More Bandwidth over Four Equal Connections} @@ -188,13 +187,13 @@ This criteria expands on the scalability in terms of number of connections of th \label{fig:more-bandwidth-four} \end{figure} -Provided in Figure \ref{fig:more-bandwidth-four} are results for each of 2, 3 and 4 2MBps connections. Firstly, it is clear that the proxy consisting of 4 connections exceeds the throughput of the proxy consisting of 3 connections. Secondly, it appears that a linear trend is forming. This trends will be further evaluated in Section \ref{section:performance-evaluation}, but suggests that the structure of the proxy suffers little loss in efficiency from adding further connections. +Provided in Figure \ref{fig:more-bandwidth-four} are results for each of two, three and four combined 2MB/s connections. Firstly, it is clear that the proxy consisting of four connections exceeds the throughput of the proxy consisting of three connections. Secondly, it appears that a linear trend is forming.
This trend will be further evaluated in Section \ref{section:performance-evaluation}, but suggests that the structure of the proxy suffers little loss in efficiency from adding further connections.
 \subsection{Bandwidth Variation}
 This criterion judges the adaptability of the congestion control system in changing network conditions. To test this, the bandwidth of one of the local portal's connections is varied during an iperf3 throughput test. Thus far, bar graphs have been sufficient to show the results of each test. In this case, as the performance is now time-sensitive, I will be presenting a line graph. Due to the time-dependent nature of these tests, producing results consistent enough for error bars was not feasible. The data is also smoothed across the x-axis with a 5-point moving average, to avoid intense fluctuations caused by the interface rate limiting.
-The criteria will be met if the following are true: the throughput begins at the rate of a time constant connection; the throughput stabilises at the altered rate after alteration; the throughput returns to the original rate after the rate is reset.
+The criterion will be met if the following are true: the throughput begins at a constant rate; the throughput stabilises at the altered rate after the increase or decrease; and the throughput returns to the original rate once the bandwidth is restored.
 \begin{figure}
 \centering
@@ -203,15 +202,15 @@ The criteria will be met if the following are true: the throughput begins at the
 \label{fig:capacity-changes}
 \end{figure}
-The results are given in Figure \ref{fig:capacity-changes}. The decreasing series drops from 2+2MB/s connections, with a maximum throughput of 32Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The increasing series increases from 1+1MB/s connections, with a maximum throughput of 16Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The events occur at approximately the same time.
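The 5-point moving average mentioned above is a simple trailing mean. As a minimal sketch (in Python; illustrative only, not the dissertation's actual plotting code), assuming the throughput samples arrive as a plain list:

```python
def moving_average(samples, window=5):
    """Smooth a throughput series with a trailing moving average.

    Each output point is the mean of up to `window` samples ending at
    that point, damping the fluctuations caused by interface rate
    limiting without altering the underlying measurements.
    """
    smoothed = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)  # shrink the window at the left edge
        chunk = samples[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

A centred window would shift features slightly less in time, but either choice affects only the presentation of the graphs, not the recorded data.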
The graph displays each series beginning at their constant amount, before converging at approximately 24Mbps in the center of the graph. Once the connection change is reversed, each series returns to its original throughput. This satisfies the success criteria for connection capacity changes.
+The results are given in Figure \ref{fig:capacity-changes}. The decreasing series drops from 2+2MB/s connections, with a maximum throughput of 32Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The increasing series increases from 1+1MB/s connections, with a maximum throughput of 16Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The events occur at approximately the same time. The graph displays each series beginning at its constant rate, before converging at approximately 24Mbps in the centre of the graph. Once the connection change is reversed, each series returns to its original throughput. This satisfies the success criterion for connection capacity changes.
 \subsection{Connection Loss}
-This criteria judges the ability of the proxy as a whole to handle a complete connection loss while maintaining proportional throughput, and later regaining that capacity when the connection becomes available again. As the proxy has redundant connections, it is feasible for this to cause a minimal loss of service. Unfortunately, losing a connection causes significant instability with the proxy, so this extended goal has not been met. This is due to the interactions between the proxy and the system kernel, where the proxy has very little control of the underlying TCP connection. With future work on UDP I am hopefully that this will be eventually satisfied, but it is not with the current implementation.
+This criterion judges the ability of the proxy as a whole to handle a complete connection loss while maintaining proportional throughput, and later regaining that capacity when the connection becomes available again.
As the proxy has redundant connections, it is feasible for this to cause only a minimal loss of service. Unfortunately, losing a connection causes significant instability in the proxy, so this extended goal has not been met. This is due to the interactions between the proxy and the system kernel, where the proxy has very little control over the underlying TCP connection. With future work on UDP I am hopeful that this will eventually be satisfied, but it is not with the current implementation.
 \subsection{Single Interface Remote Portal}
-Similarly to section \ref{section:ip-spoofing-evaluation}, a remote portal with a single interface is employed within the standard testing structure for this section, using techniques detailed in section \ref{section:implementation-system-configuration}. By altering the routing tables such that all local traffic for the remote portal is sent to the local portal via the proxy, excluding the traffic for the proxy itself, the packets can be further forwarded from the local portal to the client which holds that IP address. As the standard testing structure employs a remote portal with a single interface, it is shown in each test result that this is a supported configuration, and thus this success criteria is met.
+Similarly to Section \ref{section:ip-spoofing-evaluation}, a remote portal with a single interface is employed within the standard testing structure for this section, using techniques detailed in Section \ref{section:implementation-system-configuration}. By altering the routing tables such that all local traffic for the remote portal is sent to the local portal via the proxy, excluding the traffic for the proxy itself, the packets can be forwarded onwards from the local portal to the client which holds that IP address. As the standard testing structure employs a remote portal with a single interface, each test result shows that this is a supported configuration, and thus this success criterion is met.
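The routing-table alteration described above can be sketched as follows. This is an illustrative configuration only: the names `tun0` (the proxy's TUN device) and `203.0.113.7` (the client's address behind the local portal) are assumptions for the example, not values from the actual testbed.

```shell
# Illustrative sketch only: tun0 and 203.0.113.7 are assumed names,
# not the dissertation's testbed values.

# Send all traffic arriving for the proxied client address into the
# tunnel. The proxy's own carrier connections terminate at the portal's
# real address, so this more specific route leaves them untouched.
ip route add 203.0.113.7/32 dev tun0

# Allow the kernel to forward packets between the single external
# interface and the tunnel.
sysctl -w net.ipv4.ip_forward=1
```

From here, packets destined for the proxied address traverse the tunnel to the local portal, which forwards them on to the client holding that address.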
 \subsection{Connection Metric Values}
diff --git a/thesis.tex b/thesis.tex
index ef34b9d..f0fe826 100644
--- a/thesis.tex
+++ b/thesis.tex
@@ -114,8 +114,10 @@
 \maketitle
 %\include{Dedication/dedication}
+%TC:ignore
 \include{Declaration/declaration}
 \include{0_Proforma/proforma}
+%TC:endignore
 %\include{Acknowledgement/acknowledgement}
 %\include{Abstract/abstract}