storj/scripts/test-aio.sh
paul cannon 1d78ddc3df
Test that network stalls don't cause indefinite hangs in uplink (#1530)
* test-network-stalls tests... network stalls!

in particular, right now, it just tests whether an uplink correctly
times out after some amount of time when one of the nodes it's talking
to suddenly goes offline.

This tool is meant to be run under `storj-sim network test`.

Also included here:

* fix storj-sim-related test scripts on Mac

the default storj config dir on Mac has a space in it
('~/Library/Application Support/Storj'), which breaks any sh/bash
script where it shows up in an unquoted variable. easy enough to fix
as a one-off, but quoting bash vars avoids a dozen other potential
problems too.
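
for example (the variable name here is made up; the path is the real
Mac default):

    STORJ_DIR="$HOME/Library/Application Support/Storj"
    ls $STORJ_DIR     # word-splits into two arguments and fails
    ls "$STORJ_DIR"   # quoted: one argument, works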

change a few things using `head -c` to use `dd`. `head -c` works,
but is not as widely understood (as evidenced by each of these scripts
getting through code review, one at a time, with the comments not
matching the numbers actually used).
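
e.g., both of these produce the same 1 MiB of random bytes, but the dd
form spells the arithmetic out:

    head -c 1048576 /dev/urandom > testfile
    dd if=/dev/urandom of=testfile bs=1x1024x1024 count=1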

* storj-sim reports PIDs of worker processes to test

so that the tests can cause unfortunate "accidents" to befall the worker
processes in the course of the test, and find out whether everything
reacts correctly.
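
a sketch of the kind of "accident" this enables (the variable name is
illustrative, not the actual interface):

    kill -STOP "$STORAGENODE_PID"   # stall the node mid-transfer
    # ... assert the uplink times out instead of hanging forever ...
    kill -CONT "$STORAGENODE_PID"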
2019-03-20 08:58:07 -06:00


#!/bin/bash
set -euo pipefail
TMP_DIR=$(mktemp -d -t tmp.XXXXXXXXXX)
CMP_DIR=$(mktemp -d -t tmp.XXXXXXXXXX)
# Clean up what we might have done
cleanup() {
    echo ""
    echo ""
    echo ""
    echo "=> Testing finished, logs to follow"
    echo "=> Satellite logs"
    docker logs storj_satellite_1
    echo "=> Storagenode logs"
    docker logs storj_storagenode_1
    echo "=> Gateway logs"
    docker logs storj_gateway_1
    echo "=> Cleaning up"
    rm -rf "$TMP_DIR" "$CMP_DIR"
    # Hide any ERRORs and Faileds here as they are not relevant to the actual
    # errors and failures of this test.
    docker-compose down --rmi all 2>&1 | grep -v ERROR | grep -v Failed
}
# Run cleanup on any exit, whether the test passes or fails
trap cleanup EXIT
mkdir -p "$TMP_DIR"
mkdir -p "$CMP_DIR"
# Stand up production images in a local environment
docker-compose up -d satellite storagenode gateway
echo "=> Waiting for the gateway to be ready"
until docker logs storj_gateway_1 | grep -q Access; do
    sleep 2
done
# Extract the keys for AWS client
access_key_id="$(docker logs storj_gateway_1 2>/dev/null | awk '/Access/{print $3; exit}')"
secret_access_key="$(docker logs storj_gateway_1 2>/dev/null | awk '/Secret/{print $3; exit}')"
echo "=> Access Key: $access_key_id"
echo "=> Secret Key: $secret_access_key"
export AWS_ACCESS_KEY_ID="$access_key_id"
export AWS_SECRET_ACCESS_KEY="$secret_access_key"
aws configure set default.region us-east-1
echo "=> Making test files"
random_bytes_file() {
    size="$1"
    output="$2"
    dd if=/dev/urandom of="$output" count=1 bs="$size" >/dev/null 2>&1
}
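# dd accepts 'x' as a size multiplier, so 1x1024x1024 below reads as
# 1*1024*1024 bytes (1 MiB) rather than an opaque byte count.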
random_bytes_file 1x1024x1024 "$TMP_DIR/small-upload-testfile"     # create 1 MiB file of random bytes (inline)
random_bytes_file 5x1024x1024 "$TMP_DIR/big-upload-testfile"       # create 5 MiB file of random bytes (remote)
random_bytes_file 5x1024      "$TMP_DIR/multipart-upload-testfile" # create 5 KiB file of random bytes (remote)
echo "=> Making bucket"
aws s3 --endpoint=http://localhost:7777/ mb s3://bucket
echo "=> Uploading test files"
aws configure set default.s3.multipart_threshold 1TB
aws s3 --endpoint=http://localhost:7777/ cp "$TMP_DIR/small-upload-testfile" s3://bucket/small-testfile
starttime="$(date +%s)"
while true; do
    if aws s3 --endpoint=http://localhost:7777/ cp "$TMP_DIR/big-upload-testfile" s3://bucket/big-testfile; then
        break
    fi
    echo "=> Large file failed, sleeping for a bit before trying again"
    sleep 1
    if [ $((starttime + 60)) -lt "$(date +%s)" ]; then
        echo "=> Failed to upload big-testfile for over a minute!"
        exit 1
    fi
done
# Wait 5 seconds so any errors tied to the various background intervals
# have a chance to surface
sleep 5
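# Drop the threshold below the 5 KiB file's size to force the multipart path.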
aws configure set default.s3.multipart_threshold 4KB
aws s3 --endpoint=http://localhost:7777/ cp "$TMP_DIR/multipart-upload-testfile" s3://bucket/multipart-testfile
echo "=> Listing bucket"
aws s3 --endpoint=http://localhost:7777/ ls s3://bucket
echo "=> Downloading test files"
aws s3 --endpoint=http://localhost:7777/ cp s3://bucket/small-testfile "$CMP_DIR/small-download-testfile"
aws s3 --endpoint=http://localhost:7777/ cp s3://bucket/big-testfile "$CMP_DIR/big-download-testfile"
aws s3 --endpoint=http://localhost:7777/ cp s3://bucket/multipart-testfile "$CMP_DIR/multipart-download-testfile"
echo "=> Removing bucket"
aws s3 --endpoint=http://localhost:7777/ rb s3://bucket --force
echo "=> Comparing test files downloaded with uploaded versions"
if cmp "$TMP_DIR/small-upload-testfile" "$CMP_DIR/small-download-testfile"
then
echo "Downloaded file matches uploaded file"
else
echo "Downloaded file does not match uploaded file"
exit 1
fi
if cmp "$TMP_DIR/big-upload-testfile" "$CMP_DIR/big-download-testfile"
then
echo "Downloaded file matches uploaded file"
else
echo "Downloaded file does not match uploaded file"
exit 1
fi
if cmp "$TMP_DIR/multipart-upload-testfile" "$CMP_DIR/multipart-download-testfile"
then
echo "Downloaded file matches uploaded file"
else
echo "Downloaded file does not match uploaded file"
exit 1
fi