%!TEX root = ../thesis.tex
%*******************************************************************************
%****************************** Fourth Chapter *********************************
%*******************************************************************************
\chapter{Evaluation}
% **************************** Define Graphics Path **************************
\ifpdf
\graphicspath{{4_Evaluation/Figs/Raster/}{4_Evaluation/Figs/PDF/}{4_Evaluation/Figs/}}
\else
\graphicspath{{4_Evaluation/Figs/Vector/}{4_Evaluation/Figs/}}
\fi

This chapter discusses the methods used to evaluate my project and the results obtained. The results are discussed in the context of the success criteria laid out in the Project Proposal (Appendix \ref{appendix:project-proposal}). This evaluation shows that a network using my method of combining Internet connections can achieve significantly better network performance than one without, demonstrating the benefits to throughput, availability, and adaptability.
The tests are performed on a Dell R710 server with the following specifications:

\vspace{5mm}
\begin{tabular}{ll}
\textbf{CPU(s)} & 16 x Intel(R) Xeon(R) CPU X5667 @ 3.07GHz (2 Sockets) \\
\textbf{Memory} & 6 x 2GB DDR3 ECC RDIMMs \\
\textbf{Kernel} & Linux 5.4 LTS
\end{tabular}

When presenting data, error bars show the interquartile range (IQR) of the results, and the plotted point is the median.
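
Concretely, for each configuration the plotted point is the second quartile (median) of the repeated measurements, and the error bars span the interval
\[ \left[\, Q_1,\ Q_3 \,\right], \]
i.e.\ the middle $50\%$ of the observed results.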
\section{Success Criteria}

\subsection{Flow Maintained}
The results for whether a flow can be maintained during a single connection loss are obtained using an iperf3 UDP test. The UDP test runs at a fixed bitrate and measures the number of datagrams lost in transfer. Three tests are performed on a proxy with two connections: both connections remain up, one connection remains up, and both connections are lost. To satisfy this success criterion, the test with a single connection lost may suffer a small amount of loss, while losing both connections should stall the test.
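
For reference, these tests take the following shape. The exact flags shown are an assumption consistent with the output below (a 5-second UDP test at 1Mbps), not a record of the precise invocation used:

\begin{Verbatim}[fontsize=\small]
server$ iperf3 -s
client$ iperf3 -c X.X.X.X -u -b 1M -t 5
\end{Verbatim}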
\begin{figure}
\begin{Verbatim}[fontsize=\small]
Connecting to host X.X.X.X, port 5201
[  5] local X.X.X.Y port 43039 connected to X.X.X.X port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   1.00-2.00   sec   127 KBytes  1.04 Mbits/sec  90
[  5]   2.00-3.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   3.00-4.00   sec   127 KBytes  1.04 Mbits/sec  90
[  5]   4.00-5.00   sec   129 KBytes  1.05 Mbits/sec  91
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   641 KBytes  1.05 Mbits/sec  0.000 ms  0/453 (0%)  sender
[  5]   0.00-5.04   sec   641 KBytes  1.04 Mbits/sec  0.092 ms  0/453 (0%)  receiver
\end{Verbatim}
\caption{iperf3 UDP results with two stable connections (inbound).}
\label{fig:maintained-both-connections-alive}
\end{figure}
\begin{figure}
\begin{Verbatim}[fontsize=\small]
Connecting to host X.X.X.X, port 5201
[  5] local X.X.X.Y port 49929 connected to X.X.X.X port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   1.00-2.00   sec   127 KBytes  1.04 Mbits/sec  90
[  5]   2.00-3.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   3.00-4.00   sec   127 KBytes  1.04 Mbits/sec  90
[  5]   4.00-5.00   sec   129 KBytes  1.05 Mbits/sec  91
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-5.00   sec   641 KBytes  1.05 Mbits/sec  0.000 ms  0/453 (0%)  sender
[  5]   0.00-5.04   sec   635 KBytes  1.03 Mbits/sec  0.115 ms  4/453 (0.88%)  receiver
\end{Verbatim}
\caption{iperf3 UDP results with a single connection loss (inbound).}
\label{fig:maintained-one-connections-down}
\end{figure}
\begin{figure}
\begin{Verbatim}[fontsize=\small]
Connecting to host X.X.X.X, port 5201
[  5] local X.X.X.Y port 51581 connected to X.X.X.X port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   1.00-2.00   sec   127 KBytes  1.04 Mbits/sec  90
[  5]   2.00-3.00   sec   129 KBytes  1.05 Mbits/sec  91
[  5]   3.00-4.00   sec   129 KBytes  1.05 Mbits/sec  91
\end{Verbatim}
\caption{iperf3 UDP results with a total connection loss (inbound).}
\label{fig:maintained-both-connections-down}
\end{figure}
These results are given in Figures \ref{fig:maintained-both-connections-alive}, \ref{fig:maintained-one-connections-down} and \ref{fig:maintained-both-connections-down} respectively. The results are as expected: with no connection loss, the 1Mbps stream is handled without problems and no packets are lost; with one connection lost, there is slight packet loss ($0.88\%$) but the test continues; and with a complete connection loss, the test stalls. Given that the client's external IP remains consistent throughout, this shows that a flow can be maintained through a single connection loss, with only a small number of packets lost. The loss observed corresponds to approximately 45ms of the 5-second transfer ($0.88\% \times 5\,\mathrm{s} \approx 44\,\mathrm{ms}$), equivalent to a brief dropout in a phone call, after which the call continues gracefully. This satisfies the success criterion.
\subsection{Bidirectional Performance Gains}
To demonstrate that all performance gains are bidirectional, I provide graphs both inbound and outbound to the client for each performance test executed in this evaluation. Inbound tests run the test server on the proxy client, with the test client outside; outbound tests run the test server outside of the proxy, with the proxy client connecting out to it.

To demonstrate this succinctly, a pair of graphs for the same test in a common case is shown here. To show that this requirement is satisfied in all cases, the graph for the alternative direction of each result presented for the basic success criteria is provided in Appendix \ref{appendix:outbound-graphs}.
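
As a sketch, the two directions differ only in where the iperf3 server is placed (addresses elided as before):

\begin{Verbatim}[fontsize=\small]
# Inbound: test server on the proxy client, traffic flows in to it
client$  iperf3 -s
outside$ iperf3 -c <client address>

# Outbound: test server outside, the proxy client connects out
outside$ iperf3 -s
client$  iperf3 -c <outside address>
\end{Verbatim}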
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{graphs/bidirectional-comparison}
\caption{Comparing the performance of packets inbound to the client to outbound from the client in three different test conditions.}
\label{fig:bidirectional-comparison}
\end{figure}
Figure \ref{fig:bidirectional-comparison} has two series for the same set of tests: one for the inbound (reaching in to the client, or download) performance, and one for the outbound (the client reaching out, or upload) performance. The two series align closely, though with a slight advantage to outbound flows. This is because outbound flows are spread between interfaces, so the sender waits for the OS to release its lock on any single interface less often. In each case, both inbound and outbound performance meet the required targets, so this criterion is satisfied.
\subsection{IP Spoofing}
\label{section:ip-spoofing-evaluation}
Each test in this evaluation relies on the fact that the IP of the client can be set to the IP of the remote proxy, so this criterion is demonstrated throughout. When allocating virtual machines to test on, the client is given the IP of the remote proxy. In the given network structure, the speed test server, remote proxy and local proxy are each connected to one virtual switch, which acts as a mock Internet. A separate virtual switch connects an additional interface of the local proxy to the client. The IP addresses of the interfaces used in these tests are listed in Figure \ref{fig:standard-network-structure-ips}. The IP addresses of the public interfaces are represented by letters, as they use arbitrary public IP addresses to ensure that no local network firewall rules affect the configuration.
\begin{figure}
\centering
\begin{tabular}{c|c|c}
Machine & Interface & IP Address \\
\hline
Speed Test Server & eth0 & \emph{A} \\
\hline
Remote Proxy & eth0 & \emph{B} \\
\hline
\multirow{5}{*}{Local Proxy} & eth0 & \emph{C0} \\
& eth1 & \emph{C1} \\
& \vdots & \vdots \\
& ethN & \emph{CN} \\
& eth\{N+1\} & 192.168.1.1 \\
\hline
Client & eth0 & \emph{B}
\end{tabular}
\caption{The IP layout of the test network structure.}
\label{fig:standard-network-structure-ips}
\end{figure}
This testing setup therefore shows the client sharing an IP address with the remote proxy. The details of this configuration are provided in Section \ref{section:implementation-system-configuration}. This satisfies the success criterion.
\subsection{Security}
Success for security requires providing security no worse than that of a standard connection. This is achieved by using Message Authentication Codes, replay protection and extra data for connection authentication, described in detail in Section \ref{section:preparation-security}. Further, Section \ref{section:layered-security} argues that the proxying of packets is made secure by operating within a secure overlay network, such as a VPN. This ensures that security can be maintained regardless of changes in the security landscape, by composing my proxy with additional security software.
\subsection{More Bandwidth over Two Equal Connections}
To demonstrate that more bandwidth is available over two equal connections through this proxy than over one such connection alone, I compare the iperf3 throughput between the two cases. Further, I provide a comparison against a single connection of double the bandwidth, as this is the theoretical maximum for combining the two lower-bandwidth connections.
\begin{figure}
\centering
\begin{subfigure}{.7\textwidth}
\includegraphics[width=0.9\linewidth]{graphs/more-bandwidth-equal-a-inbound}
\caption{Throughput of 1+1MB/s connections compared with 1MB/s and 2MB/s (inbound).}
\label{fig:more-bandwidth-equal-lesser}
\end{subfigure}
\begin{subfigure}{.7\textwidth}
\includegraphics[width=0.9\linewidth]{graphs/more-bandwidth-equal-b-inbound}
\caption{Throughput of 2+2MB/s connections compared with 2MB/s and 4MB/s (inbound).}
\label{fig:more-bandwidth-equal-greater}
\end{subfigure}
\caption{Graphs showing that the throughput of two proxied connections lies between that of one connection of the same speed and one connection of double the speed.}
\label{fig:more-bandwidth-equal}
\end{figure}
The results of these tests are given in Figure \ref{fig:more-bandwidth-equal}, for both a pair of 1MBps connections and a pair of 2MBps connections. To satisfy this success criterion, the proxied bar in each graph should exceed the throughput of the direct bar of equal bandwidth. This occurs in both cases, and thus the success criterion is met. The throughput far exceeds that of the single direct connection, and is closer to the single connection of double bandwidth than to the single connection of equal bandwidth, demonstrating that a large portion of the theoretical maximum is achieved ($92.5\%$ for the 1+1MB/s proxy, and $88.5\%$ for the 2+2MB/s proxy).
\section{Extended Goals}
\subsection{More Bandwidth over Unequal Connections}
To show improved throughput over connections which are not equal, three results are compared: connections of speeds $x+x$, speeds $x+y$, and speeds $y+y$, where $x < y$. To show that unequal connections exceed the performance of a pair of slower connections, the result for speeds $x+y$ should lie between those for $x+x$ and $y+y$. Further, to show that proportional throughput is invariant to the balance of connection capacity, the unequal connections should lie halfway between the two equal-connection results.
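
The halfway expectation follows directly from the aggregate capacities: if the achieved throughput is proportional to the total capacity of the links, then
\[ x + y = \frac{(x + x) + (y + y)}{2}, \]
so the $x+y$ result should sit at the midpoint of the two equal-connection results.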
\begin{figure}
\centering
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=0.9\linewidth]{graphs/more-bandwidth-unequal-a-inbound}
\caption{Bandwidth of 1+2MB/s connections compared to 1+1MB/s connections and 2+2MB/s connections.}
\label{fig:more-bandwidth-unequal-lesser}
\end{subfigure}
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=0.9\linewidth]{graphs/more-bandwidth-unequal-b-inbound}
\caption{Bandwidth of 2+4MB/s connections compared to 2+2MB/s connections and 4+4MB/s connections.}
\label{fig:more-bandwidth-unequal-greater}
\end{subfigure}
\caption{Graphs to demonstrate that the proxy appropriately balances between imbalanced connections, resulting in near-maximal throughput.}
\label{fig:more-bandwidth-unequal}
\end{figure}
Two sets of results are provided in Figure \ref{fig:more-bandwidth-unequal}: one for 1MBps and 2MBps connections, and another for 2MBps and 4MBps connections. In both cases, the proxy with unequal connections lies between the two equal-connection proxies. Further, both unequal proxied connections lie approximately halfway between their equal pairs. This suggests that the proxy design succeeds in being invariant to the static balance of connection throughput.
\subsection{More Bandwidth over Four Equal Connections}
This criterion concerns how the proxy scales with the number of connections, specifically comparing the performance of three connections against four. To assess this, results for each of two, three and four connections are included on the same graph. This allows the trend of performance with an increasing number of connections to begin to be visualised, which is expanded upon in Section \ref{section:performance-evaluation}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{graphs/more-bandwidth-four-b-inbound}
\caption{Scaling of 2-4 equal bandwidth connections when combined.}
\label{fig:more-bandwidth-four}
\end{figure}
Figure \ref{fig:more-bandwidth-four} provides results for two, three and four 2MBps connections. Firstly, it is clear that the proxy with four connections exceeds the throughput of the proxy with three. Secondly, a linear trend appears to be forming. This trend is evaluated further in Section \ref{section:performance-evaluation}, but it suggests that the proxy loses little efficiency from adding further connections.
\subsection{Bandwidth Variation}
This criterion judges the adaptability of the congestion control system under changing network conditions. To test this, the bandwidth of one of the local portal's connections is varied during an iperf3 throughput test. Thus far, bar graphs have been sufficient to show the results of each test; here, as the performance is time-sensitive, a line graph is presented instead. Because of the timing variation between runs, producing results consistent enough for error bars was not feasible. The data is smoothed across the x-axis with a 5-point moving average, to dampen the intense fluctuations caused by the interface rate limiting.
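
Assuming a centred window, each plotted throughput sample $t_i$ is thus replaced by the mean of its five-sample neighbourhood:
\[ \tilde{t}_i = \frac{1}{5} \sum_{j=i-2}^{i+2} t_j \]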
The criterion is met if the following hold: the throughput begins at the rate of a connection that is constant over time; the throughput stabilises at the altered rate after the alteration; and the throughput returns to the original rate after the rate is reset.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{graphs/connection_capacity_changes}
\caption{Connection capacity increasing and decreasing over time. The decrease is from 2+2MB/s connections to 1+2MB/s connections, and the increase from 1+1MB/s connections to 1+2MB/s connections.}
\label{fig:capacity-changes}
\end{figure}
The results are given in Figure \ref{fig:capacity-changes}. The decreasing series drops from 2+2MB/s connections, with a maximum throughput of 32Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The increasing series rises from 1+1MB/s connections, with a maximum throughput of 16Mbps, to 1+2MB/s connections, with a maximum throughput of 24Mbps. The capacity changes occur at approximately the same time in each series. Each series begins at its constant rate, before the two converge at approximately 24Mbps in the centre of the graph. Once the connection change is reversed, each series returns to its original throughput. This satisfies the success criterion for connection capacity changes.
\subsection{Connection Loss}
This criterion judges the ability of the proxy as a whole to handle a complete connection loss while maintaining proportional throughput, and to regain the lost capacity once the connection becomes available again. As the proxy has redundant connections, it is feasible for this to cause only a minimal loss of service. Unfortunately, losing a connection causes significant instability in the proxy, so this extended goal has not been met. This is due to the interaction between the proxy and the system kernel, in which the proxy has very little control over the underlying TCP connection. With the future work on UDP flows I am hopeful that this will eventually be satisfied, but it is not with the current implementation.
\subsection{Single Interface Remote Portal}
Similarly to Section \ref{section:ip-spoofing-evaluation}, a remote portal with a single interface is employed within the standard testing structure for this section, using the techniques detailed in Section \ref{section:implementation-system-configuration}. The routing tables are altered such that all traffic arriving at the remote portal, excluding the traffic for the proxy itself, is sent to the local portal via the proxy; the local portal then forwards these packets on to the client, which holds the shared IP address. As the standard testing structure employs a remote portal with a single interface, every test result shows that this is a supported configuration, and thus this success criterion is met.
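
To illustrate the shape of this configuration, the sketch below expresses such a split using Linux policy routing. It is illustrative only: the interface names, control port and table number are assumptions, and the authoritative configuration is that of Section \ref{section:implementation-system-configuration}.

\begin{Verbatim}[fontsize=\small]
# Keep the proxy's own control traffic (assumed port 1234) on the main table
iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 1234 \
    -j MARK --set-mark 1
ip rule add fwmark 1 lookup main

# Everything else arriving on eth0 is routed down the tunnel (tun0)
# towards the local portal
ip rule add iif eth0 lookup 100
ip route add default dev tun0 table 100
\end{Verbatim}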
\subsection{Connection Metric Values}
The extended goal of connection metric values has not been implemented. Instead, peers which only transfer data in one direction were implemented, which covers some of the use cases for metric values. Though metric values for connections would have been useful in some cases, they do not represent the standard usage of the software, and the added complexity of managing live peers was deemed unnecessary for the core software. A better approach may be to provide an interface to control the software externally, allowing a separate piece of software to manage live peers; this has not been completed at this time.
\section{Stretch Goals}
\subsection{UDP Proxy Flows}
UDP flows are implemented, and provide a solid base for UDP testing and development. The present implementation of a congestion control mechanism imitating TCP New Reno still has some flaws, meaning that UDP is not yet feasible for general use. However, the API for writing congestion control mechanisms is strong, and some of the future work suggested in Section \ref{section:future-work} could be developed on this base, so that much is a success.
\section{Performance Evaluation}
\label{section:performance-evaluation}
The discussion of the success criteria above used relatively slow network connections to test scaling in certain situations while ensuring that hardware limitations did not affect the tests. This section provides a brief analysis of how this solution scales to providing a higher bandwidth connection, specifically by adding further network connections.
The results of these tests are shown in Figure \ref{fig:n-connections-scaling}. Links of 1MB/s, 2MB/s and 4MB/s capacity are each tested with 1 to 8 connections. The throughput scales largely linearly, with a suggestion that eight 4MB/s connections approach the software's limits. This result is very promising, as it shows that the software can handle a large number of connections. While this limit constrains the combination of very fast download connections, upload speeds are commonly far lower, and would benefit from this quantity of links.
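
For scale, that apparent limit corresponds to an aggregate capacity of roughly
\[ 8 \times 4\,\mathrm{MB/s} = 32\,\mathrm{MB/s} = 256\,\mathrm{Mbps}, \]
assuming the near-linear scaling holds up to that point.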
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{graphs/n_connections_scaling}
\caption{Scaling of proxy throughput based on number of connections, for three speeds of connection.}
\label{fig:n-connections-scaling}
\end{figure}