This chapter will discuss the methods used to evaluate my project and the results obtained. The results will be discussed in the context of the success criteria laid out in the Project Proposal.
This evaluation shows that a network using my method of combining Internet connections can achieve significantly better network performance than one without. It will show the benefits to throughput, availability, and adaptability.
I performed my experiments on a local Proxmox\footnote{\url{https://proxmox.com}} server. To encourage frequent and thorough testing, a test harness was built in Python, allowing tests to be added easily and repeated after any code change.
Proxmox was chosen for its RESTful API, which integrates well with the Python test harness, and because it provides the tools required to limit connection speeds and disable connections. The server that ran these tests hosts only one other virtual machine, which handles routing; this limits the effect of external factors on the tests.
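To give a flavour of this integration, the sketch below adjusts a VM's network device through the Proxmox REST API; the host, node name, VM ID, MAC address and API token are hypothetical placeholders rather than values from my setup. Proxmox rate-limits a network device via its \mintinline{shell-session}{rate} option (in MB/s) and disables it via \mintinline{shell-session}{link_down}.
\begin{minted}{python}
import requests

# Hypothetical details of the Proxmox host and the VM under test.
HOST = "https://proxmox.example:8006"
AUTH = {"Authorization": "PVEAPIToken=root@pam!harness=<token-secret>"}
NODE, VMID = "pve", 101

def set_net1(devspec):
    """Rewrite the VM's second network device via the REST API."""
    r = requests.put(f"{HOST}/api2/json/nodes/{NODE}/qemu/{VMID}/config",
                     headers=AUTH, data={"net1": devspec},
                     verify=False)  # local server with a self-signed certificate
    r.raise_for_status()

# Limit the connection to 1 MB/s...
set_net1("virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,rate=1")
# ...or take the link down entirely to simulate a failed connection.
set_net1("virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,link_down=1")
\end{minted}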
The tests are performed on a Dell R710 Server with the following specifications:
\vspace{5mm}\noindent\textbf{CPU(s)} 16 x Intel(R) Xeon(R) CPU X5667 @ 3.07GHz (2 Sockets)
\\\noindent\textbf{Memory} 6 x 2GB DDR3 ECC RDIMMs
\\\noindent\textbf{Kernel} Linux 5.4 LTS
\section{Line Graphs}
The majority of data presented in this section will be in the form of line graphs. These are generated in a consistent format, using a script found in Appendix \ref{appendix:graph_generation}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\textwidth}
\centering
% graph with no error bars (graphic omitted)
\caption{No error bars}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
% graph with X-axis error bars (graphic omitted)
\caption{X-axis error bars}
\label{fig:errorbars-x}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.3\textwidth}
\centering
% graph with Y-axis error bars (graphic omitted)
\caption{Y-axis error bars}
\label{fig:errorbars-y}
\end{subfigure}
\caption{The structure of graphs throughout this section}
\label{fig:errorbars}
\end{figure}
In figure \ref{fig:errorbars}, examples are shown of the same graph with no error bars, with error bars on the X axis, and with error bars on the Y axis. Error bars for the X axis are plotted as the range of all of the results, while error bars on the Y axis are plotted as $1.5\sigma$, where $\sigma$ represents the standard deviation of the results.
Figure \ref{fig:errorbars-x} shows that the range of the timestamps is extremely narrow. For this reason, I will omit error bars on the X axis from this point onwards.
Figure \ref{fig:errorbars-y} shows that the error bars on the Y axis are far more significant, so these will continue to be included.
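For illustration, the snippet below shows how error bars following these conventions can be computed with matplotlib; the data arrays and axis labels are invented for the example, and the full generation script remains the one in Appendix \ref{appendix:graph_generation}.
\begin{minted}{python}
import numpy as np
import matplotlib.pyplot as plt

# One row per repeat of a test: timestamps (xs) and throughputs (ys).
xs = np.array([[0.0, 1.0, 2.1], [0.0, 1.1, 2.0], [0.0, 0.9, 2.0]])
ys = np.array([[0.0, 9.8, 10.2], [0.0, 10.4, 9.9], [0.0, 9.6, 10.1]])

x_mean, y_mean = xs.mean(axis=0), ys.mean(axis=0)
# X error bars: the range of the results (distances to min and max).
x_err = np.stack([x_mean - xs.min(axis=0), xs.max(axis=0) - x_mean])
# Y error bars: 1.5 standard deviations of the results.
y_err = 1.5 * ys.std(axis=0)

plt.errorbar(x_mean, y_mean, xerr=x_err, yerr=y_err, capsize=3)
plt.xlabel("Time (s)")
plt.ylabel("Throughput (Mbps)")
plt.show()
\end{minted}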
To generate these results, a fresh set of VMs (Virtual Machines) is created and the software installed on them. Once this is complete, each test is repeated until the coefficient of variation ($\sigma/\mu$, where $\mu$ is the arithmetic mean and $\sigma$ the standard deviation) falls below a desired level, or a maximum number of attempts has been reached. The number of attempts taken for each series will be shown in the legend of each graph.
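The repetition logic takes roughly the following form; this is a simplified sketch rather than the harness itself, and the threshold and attempt cap shown are illustrative values.
\begin{minted}{python}
import statistics

TARGET_CV = 0.05    # illustrative coefficient-of-variation threshold
MAX_ATTEMPTS = 20   # illustrative cap on repeats

def repeat_until_stable(test):
    """Repeat test() until the coefficient of variation of its results
    falls below TARGET_CV, or MAX_ATTEMPTS is reached."""
    results = [test(), test()]  # at least two samples to compute sigma
    while len(results) < MAX_ATTEMPTS:
        mu = statistics.mean(results)
        sigma = statistics.stdev(results)
        if sigma / mu < TARGET_CV:
            break
        results.append(test())
    return results
\end{minted}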
\begin{figure}
\centering
\begin{tikzpicture}[
squarednode/.style={rectangle, draw=black!60, fill=red!5, very thick, minimum size=5mm},
]
% Nodes
\node[squarednode] at (0,0) (speedtest) {Speed Test Server};
\node[squarednode] at (4,0) (remoteportal) {Remote Portal};
\node[squarednode] at (8,0) (localportal) {Local Portal};
\node[squarednode] at (11,0) (client) {Client};
% Connections
\draw[very thick] (speedtest) -- (remoteportal);
\draw[very thick] (remoteportal) -- (localportal);
\draw[very thick] (localportal) -- (client);
\end{tikzpicture}
\caption{The network structure of all standard tests}
\label{fig:standard-network-structure}
\end{figure}
The network structure of all standard tests is shown in figure \ref{fig:standard-network-structure}. Any deviations from this structure will be mentioned. The Local Portal has as many upstream interfaces as the tests require, plus one to connect to the client. All Virtual Machines also have an additional interface for management, but this has no effect on the tests.
The performance gains measured are visible in both directions (inbound to and outbound from the client). The graphs shown in this evaluation section are inbound unless stated otherwise, with the corresponding outbound graphs available in Appendix \ref{appendix:outbound-graphs}.
\begin{figure}
\centering
% Graphs omitted: the same test measured inbound and outbound
\caption{The same test performed both inbound and outbound}
\label{fig:example-inbound-outbound}
\end{figure}
Figure \ref{fig:example-inbound-outbound} shows two graphs of the same test: one for the inbound performance and one for the outbound. Both graphs show the same shape.
\section{Success Criteria}
\subsection{IP Spoofing}
This goal was to ensure the Client could use its network interface as if it really had the IP of the Remote Portal. This is achieved through Policy Based Routing. Example scripts are shown in figure \ref{fig:policy-based-routing-script-local-portal}. Linux also requires the kernel parameter \mintinline{shell-session}{net.ipv4.ip_forward} to be set to 1.
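As an indicative sketch of this approach (the project's actual scripts are those in figure \ref{fig:policy-based-routing-script-local-portal}), the following configures one routing table and rule per upstream interface using \mintinline{shell-session}{ip route} and \mintinline{shell-session}{ip rule}; the interface names, addresses and gateways are hypothetical.
\begin{minted}{python}
import subprocess

# Hypothetical upstream interfaces of the Local Portal.
UPLINKS = [
    {"dev": "eth1", "addr": "10.0.1.2", "gw": "10.0.1.1", "table": "1"},
    {"dev": "eth2", "addr": "10.0.2.2", "gw": "10.0.2.1", "table": "2"},
]

def run(*cmd):
    subprocess.run(cmd, check=True)

# Allow packets to be forwarded between interfaces.
run("sysctl", "-w", "net.ipv4.ip_forward=1")
for up in UPLINKS:
    # Each uplink gets its own table with its own default route...
    run("ip", "route", "add", "default",
        "via", up["gw"], "dev", up["dev"], "table", up["table"])
    # ...and a rule directing traffic sourced from its address to that table.
    run("ip", "rule", "add", "from", up["addr"], "table", up["table"])
\end{minted}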
\begin{figure}
\centering
% Graphs omitted: unequal connections compared against equal connections
\caption{Unequal connections compared against equal connections of both sizes}
\label{fig:unequal-connections}
\end{figure}
\subsection{More Bandwidth over Two Dissimilar Connections}
This is demonstrated by showing that $1x1MB + 1x2MB$ connections can exceed the performance of $2x1MB$ connections. The results for this can be seen in figure \ref{fig:unequal-connections}, compared against $2x2MB$ and $1x2MB$. The unequal connections fall between the two, as expected, since their combined capacity also lies between those of the two comparison sets.
\subsection{More Bandwidth over Four Equal Connections}
This criterion is about throughput increasing with the number of equal connections added. It is demonstrated by comparing the throughput of $2x1MB$, $3x1MB$ and $4x1MB$ connections, as seen in figure \ref{fig:equal-connections-scaling-1MB}. A further example of $2x2MB$, $3x2MB$ and $4x2MB$ connections is provided in figure \ref{fig:equal-connections-scaling-2MB}.
\subsection{Single Interface Remote Portal}
The single interface Remote Portal is achieved using a similar set of commands to IP Spoofing. The majority of the work is again done by policy-based routing, with some kernel parameters needing to be set too. A sample script is shown in figure \ref{fig:policy-based-routing-script-remote-portal}.
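Again as a hedged sketch rather than the script from the figure, a configuration of this shape routes traffic for the client's advertised address down the tunnel; the address, tunnel device, table number and the exact kernel parameters are assumptions.
\begin{minted}{python}
import subprocess

CLIENT_ADDR = "203.0.113.10"  # hypothetical address the client presents
TUNNEL_DEV = "tun0"           # hypothetical device carrying the combined flows
TABLE = "10"

def run(*cmd):
    subprocess.run(cmd, check=True)

run("sysctl", "-w", "net.ipv4.ip_forward=1")
# Loosen reverse-path filtering: replies from the client's address arrive
# over the tunnel rather than the external interface.
run("sysctl", "-w", "net.ipv4.conf.all.rp_filter=2")
# Send traffic destined for the client's address down the tunnel.
run("ip", "route", "add", CLIENT_ADDR, "dev", TUNNEL_DEV, "table", TABLE)
run("ip", "rule", "add", "to", CLIENT_ADDR, "table", TABLE)
\end{minted}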
\section{Scaling}
The discussion of success criteria above used slow network connections to test scaling in certain situations. This section will focus on how the solution scales, both in terms of faster individual connections and in terms of many more connections. Further, all of the above tests were automated and carried out entirely on virtual hardware. This section will also show some `real-world' data, using a Raspberry Pi 4B and real Internet connections.