// Copyright (C) 2019 Storj Labs, Inc.
// See LICENSE for copying information.

package overlay

import (
	"context"
	"fmt"
	"net"
	"time"

	"github.com/zeebo/errs"
	"go.uber.org/zap"
	"golang.org/x/exp/maps"

	"storj.io/common/pb"
	"storj.io/common/storj"
	"storj.io/common/storj/location"
	"storj.io/common/sync2"
	"storj.io/private/version"
	"storj.io/storj/satellite/geoip"
	"storj.io/storj/satellite/metabase"
	"storj.io/storj/satellite/nodeevents"
	"storj.io/storj/satellite/nodeselection/uploadselection"
)

// ErrEmptyNode is returned when the nodeID is empty.
var ErrEmptyNode = errs.New("empty node ID")

// ErrNodeNotFound is returned if a node does not exist in the database.
var ErrNodeNotFound = errs.Class("node not found")

// ErrNodeOffline is returned if a node is offline.
var ErrNodeOffline = errs.Class("node is offline")

// ErrNodeDisqualified is returned if a node is disqualified.
var ErrNodeDisqualified = errs.Class("node is disqualified")

// ErrNodeFinishedGE is returned if a node has finished graceful exit.
var ErrNodeFinishedGE = errs.Class("node finished graceful exit")

// ErrNotEnoughNodes is when selecting nodes failed with the given parameters.
var ErrNotEnoughNodes = errs.Class("not enough nodes")

// ErrLowDifficulty is when the node id's difficulty is too low.
var ErrLowDifficulty = errs.Class("node id difficulty too low")

// DB implements the database for overlay.Service.
//
// architecture: Database
type DB interface {
	// GetOnlineNodesForAuditRepair returns a map of nodes for the supplied nodeIDs.
	// The return value contains necessary information to create orders as well as nodes'
	// current reputation status.
	GetOnlineNodesForAuditRepair(ctx context.Context, nodeIDs []storj.NodeID, onlineWindow time.Duration) (map[storj.NodeID]*NodeReputation, error)

	// SelectStorageNodes looks up nodes based on criteria.
	SelectStorageNodes(ctx context.Context, totalNeededNodes, newNodeCount int, criteria *NodeCriteria) ([]*uploadselection.SelectedNode, error)
	// SelectAllStorageNodesUpload returns all nodes that qualify to store data, organized as reputable nodes and new nodes.
	SelectAllStorageNodesUpload(ctx context.Context, selectionCfg NodeSelectionConfig) (reputable, new []*uploadselection.SelectedNode, err error)
	// SelectAllStorageNodesDownload returns nodes that are ready for downloading.
	SelectAllStorageNodesDownload(ctx context.Context, onlineWindow time.Duration, asOf AsOfSystemTimeConfig) ([]*uploadselection.SelectedNode, error)

	// Get looks up the node by nodeID.
	Get(ctx context.Context, nodeID storj.NodeID) (*NodeDossier, error)
	// KnownReliableInExcludedCountries filters healthy nodes that are in excluded countries.
	KnownReliableInExcludedCountries(context.Context, *NodeCriteria, storj.NodeIDList) (storj.NodeIDList, error)
	// KnownReliable filters a set of nodes to reliable (online and qualified) nodes.
	KnownReliable(ctx context.Context, nodeIDs storj.NodeIDList, onlineWindow, asOfSystemInterval time.Duration) (online []uploadselection.SelectedNode, offline []uploadselection.SelectedNode, err error)
	// Reliable returns all nodes that are reliable (separated by whether they are currently online or offline).
	Reliable(ctx context.Context, onlineWindow, asOfSystemInterval time.Duration) (online []uploadselection.SelectedNode, offline []uploadselection.SelectedNode, err error)
	// UpdateReputation updates the DB columns for all reputation fields in ReputationStatus.
	UpdateReputation(ctx context.Context, id storj.NodeID, request ReputationUpdate) error
	// UpdateNodeInfo updates the node dossier with info requested from the node itself like node type, email, wallet, capacity, and version.
	UpdateNodeInfo(ctx context.Context, node storj.NodeID, nodeInfo *InfoResponse) (stats *NodeDossier, err error)
	// UpdateCheckIn updates a single storagenode's check-in stats.
	UpdateCheckIn(ctx context.Context, node NodeCheckInInfo, timestamp time.Time, config NodeSelectionConfig) (err error)

	// SetNodeContained updates the contained field for the node record.
	SetNodeContained(ctx context.Context, node storj.NodeID, contained bool) (err error)
	// SetAllContainedNodes updates the contained field for all nodes, as necessary.
	SetAllContainedNodes(ctx context.Context, containedNodes []storj.NodeID) (err error)

	// AllPieceCounts returns a map of node IDs to piece counts from the db.
	AllPieceCounts(ctx context.Context) (pieceCounts map[storj.NodeID]int64, err error)
	// UpdatePieceCounts sets the piece count field for the given node IDs.
	UpdatePieceCounts(ctx context.Context, pieceCounts map[storj.NodeID]int64) (err error)

	// UpdateExitStatus is used to update a node's graceful exit status.
	UpdateExitStatus(ctx context.Context, request *ExitStatusRequest) (_ *NodeDossier, err error)
	// GetExitingNodes returns nodes who have initiated a graceful exit, but have not completed it.
	GetExitingNodes(ctx context.Context) (exitingNodes []*ExitStatus, err error)
	// GetGracefulExitCompletedByTimeFrame returns nodes who have completed graceful exit within a time window (time window is around graceful exit completion).
	GetGracefulExitCompletedByTimeFrame(ctx context.Context, begin, end time.Time) (exitedNodes storj.NodeIDList, err error)
	// GetGracefulExitIncompleteByTimeFrame returns nodes who have initiated, but not completed graceful exit within a time window (time window is around graceful exit initiation).
	GetGracefulExitIncompleteByTimeFrame(ctx context.Context, begin, end time.Time) (exitingNodes storj.NodeIDList, err error)
	// GetExitStatus returns a node's graceful exit status.
	GetExitStatus(ctx context.Context, nodeID storj.NodeID) (exitStatus *ExitStatus, err error)

	// GetNodesNetwork returns the last_net subnet for each storage node, order is not guaranteed.
	GetNodesNetwork(ctx context.Context, nodeIDs []storj.NodeID) (nodeNets []string, err error)
	// GetNodesNetworkInOrder returns the last_net subnet for each storage node in order of the requested nodeIDs.
	GetNodesNetworkInOrder(ctx context.Context, nodeIDs []storj.NodeID) (nodeNets []string, err error)

	// DisqualifyNode disqualifies a storage node.
	DisqualifyNode(ctx context.Context, nodeID storj.NodeID, disqualifiedAt time.Time, reason DisqualificationReason) (email string, err error)

	// GetOfflineNodesForEmail gets offline nodes in need of an email.
	GetOfflineNodesForEmail(ctx context.Context, offlineWindow time.Duration, cutoff time.Duration, cooldown time.Duration, limit int) (nodes map[storj.NodeID]string, err error)
	// UpdateLastOfflineEmail updates last_offline_email for a list of nodes.
	UpdateLastOfflineEmail(ctx context.Context, nodeIDs storj.NodeIDList, timestamp time.Time) (err error)

	// DQNodesLastSeenBefore disqualifies a limited number of nodes where last_contact_success < cutoff except those already disqualified
	// or gracefully exited or where last_contact_success = '0001-01-01 00:00:00+00'.
	DQNodesLastSeenBefore(ctx context.Context, cutoff time.Time, limit int) (nodeEmails map[storj.NodeID]string, count int, err error)

	// TestSuspendNodeUnknownAudit suspends a storage node for unknown audits.
	TestSuspendNodeUnknownAudit(ctx context.Context, nodeID storj.NodeID, suspendedAt time.Time) (err error)
	// TestUnsuspendNodeUnknownAudit unsuspends a storage node for unknown audits.
	TestUnsuspendNodeUnknownAudit(ctx context.Context, nodeID storj.NodeID) (err error)

	// TestVetNode directly sets a node's vetted_at timestamp to make testing easier.
	TestVetNode(ctx context.Context, nodeID storj.NodeID) (vettedTime *time.Time, err error)
	// TestUnvetNode directly sets a node's vetted_at timestamp to null to make testing easier.
	TestUnvetNode(ctx context.Context, nodeID storj.NodeID) (err error)
	// TestSuspendNodeOffline directly sets a node's offline_suspended timestamp to make testing easier.
	TestSuspendNodeOffline(ctx context.Context, nodeID storj.NodeID, suspendedAt time.Time) (err error)
	// TestNodeCountryCode sets the node country code.
	TestNodeCountryCode(ctx context.Context, nodeID storj.NodeID, countryCode string) (err error)
	// TestUpdateCheckInDirectUpdate tries to update node info directly. Returns true if it succeeded, false if there was no node with the provided ID (used for testing).
	TestUpdateCheckInDirectUpdate(ctx context.Context, node NodeCheckInInfo, timestamp time.Time, semVer version.SemVer, walletFeatures string) (updated bool, err error)
	// OneTimeFixLastNets updates the last_net values for all node records to be equal to their
	// last_ip_port values.
	OneTimeFixLastNets(ctx context.Context) error

	// IterateAllContactedNodes will call cb on all known nodes (used in restore trash contexts).
	IterateAllContactedNodes(context.Context, func(context.Context, *uploadselection.SelectedNode) error) error
	// IterateAllNodeDossiers will call cb on all known nodes (used for invoice generation).
	IterateAllNodeDossiers(context.Context, func(context.Context, *NodeDossier) error) error

	// UpdateNodeTags inserts (or refreshes) node tags.
	UpdateNodeTags(ctx context.Context, tags uploadselection.NodeTags) error
	// GetNodeTags returns all tags for a specific node.
	GetNodeTags(ctx context.Context, id storj.NodeID) (uploadselection.NodeTags, error)
}

// DisqualificationReason is disqualification reason enum type.
type DisqualificationReason int

const (
	// DisqualificationReasonUnknown denotes undetermined disqualification reason.
	DisqualificationReasonUnknown DisqualificationReason = 0
	// DisqualificationReasonAuditFailure denotes disqualification due to audit score falling below threshold.
	DisqualificationReasonAuditFailure DisqualificationReason = 1
	// DisqualificationReasonSuspension denotes disqualification due to unknown audit failure after grace period for unknown audits
	// has elapsed.
	DisqualificationReasonSuspension DisqualificationReason = 2
	// DisqualificationReasonNodeOffline denotes disqualification due to node's online score falling below threshold after tracking
	// period has elapsed.
	DisqualificationReasonNodeOffline DisqualificationReason = 3
)
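
// The following String method is an illustrative sketch and is not part of the
// original file: it only shows how the enum values above could be rendered for
// logs or operator-facing messages. Any real implementation may differ.
func (reason DisqualificationReason) String() string {
	switch reason {
	case DisqualificationReasonAuditFailure:
		return "audit failure"
	case DisqualificationReasonSuspension:
		return "unknown audit suspension grace period elapsed"
	case DisqualificationReasonNodeOffline:
		return "node offline"
	default:
		return "unknown"
	}
}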

// NodeCheckInInfo contains all the info that will be updated when a node checks in.
type NodeCheckInInfo struct {
	NodeID                  storj.NodeID
	Address                 *pb.NodeAddress
	LastNet                 string
	LastIPPort              string
	IsUp                    bool
	Operator                *pb.NodeOperator
	Capacity                *pb.NodeCapacity
	Version                 *pb.NodeVersion
	CountryCode             location.CountryCode
	SoftwareUpdateEmailSent bool
	VersionBelowMin         bool
}

// InfoResponse contains node dossier info requested from the storage node.
type InfoResponse struct {
	Type     pb.NodeType
	Operator *pb.NodeOperator
	Capacity *pb.NodeCapacity
	Version  *pb.NodeVersion
}

// FindStorageNodesRequest defines easy request parameters.
type FindStorageNodesRequest struct {
	RequestedCount     int
	ExcludedIDs        []storj.NodeID
	MinimumVersion     string        // semver or empty
	AsOfSystemInterval time.Duration // only used for CRDB queries
	Placement          storj.PlacementConstraint
}
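
// Illustrative sketch (not in the original file): a request roughly as the
// upload path might build it. The concrete values are arbitrary examples, not
// project defaults.
//
//	req := FindStorageNodesRequest{
//		RequestedCount:     80,
//		MinimumVersion:     "v1.50.0",
//		AsOfSystemInterval: -10 * time.Second,
//	}
//	nodes, err := service.FindStorageNodesForUpload(ctx, req)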

// NodeCriteria are the requirements for selecting nodes.
type NodeCriteria struct {
	FreeDisk           int64
	ExcludedIDs        []storj.NodeID
	ExcludedNetworks   []string // the /24 subnet IPv4 or /64 subnet IPv6 for nodes
	MinimumVersion     string   // semver or empty
	OnlineWindow       time.Duration
	AsOfSystemInterval time.Duration // only used for CRDB queries
	ExcludedCountries  []string
}

// ReputationStatus indicates current reputation status for a node.
type ReputationStatus struct {
	Email                  string
	Disqualified           *time.Time
	DisqualificationReason *DisqualificationReason
	UnknownAuditSuspended  *time.Time
	OfflineSuspended       *time.Time
	VettedAt               *time.Time
}

// ReputationUpdate contains reputation update data for a node.
type ReputationUpdate struct {
	Disqualified           *time.Time
	DisqualificationReason DisqualificationReason
	UnknownAuditSuspended  *time.Time
	OfflineSuspended       *time.Time
	VettedAt               *time.Time
}

// ExitStatus is used for reading graceful exit status.
type ExitStatus struct {
	NodeID              storj.NodeID
	ExitInitiatedAt     *time.Time
	ExitLoopCompletedAt *time.Time
	ExitFinishedAt      *time.Time
	ExitSuccess         bool
}

// ExitStatusRequest is used to update a node's graceful exit status.
type ExitStatusRequest struct {
	NodeID              storj.NodeID
	ExitInitiatedAt     time.Time
	ExitLoopCompletedAt time.Time
	ExitFinishedAt      time.Time
	ExitSuccess         bool
}

// NodeDossier is the complete info that the satellite tracks for a storage node.
type NodeDossier struct {
	pb.Node
	Type                    pb.NodeType
	Operator                pb.NodeOperator
	Capacity                pb.NodeCapacity
	Reputation              NodeStats
	Version                 pb.NodeVersion
	Contained               bool
	Disqualified            *time.Time
	DisqualificationReason  *DisqualificationReason
	UnknownAuditSuspended   *time.Time
	OfflineSuspended        *time.Time
	OfflineUnderReview      *time.Time
	PieceCount              int64
	ExitStatus              ExitStatus
	CreatedAt               time.Time
	LastNet                 string
	LastIPPort              string
	LastOfflineEmail        *time.Time
	LastSoftwareUpdateEmail *time.Time
	CountryCode             location.CountryCode
}

// NodeStats contains statistics about a node.
type NodeStats struct {
	Latency90          int64
	LastContactSuccess time.Time
	LastContactFailure time.Time
	OfflineUnderReview *time.Time
	Status             ReputationStatus
}

// NodeLastContact contains the ID, address, and timestamp.
type NodeLastContact struct {
	URL                storj.NodeURL
	LastIPPort         string
	LastContactSuccess time.Time
	LastContactFailure time.Time
}

// NodeReputation is used as a result for creating orders limits for audits.
type NodeReputation struct {
	ID         storj.NodeID
	Address    *pb.NodeAddress
	LastNet    string
	LastIPPort string
	Reputation ReputationStatus
}

// Service is used to store and handle node information.
//
// architecture: Service
type Service struct {
	log              *zap.Logger
	db               DB
	nodeEvents       nodeevents.DB
	satelliteName    string
	satelliteAddress string
	config           Config

	GeoIP                  geoip.IPToCountry
	UploadSelectionCache   *UploadSelectionCache
	DownloadSelectionCache *DownloadSelectionCache
	LastNetFunc            LastNetFunc
}

/*
Note on the configurable meaning of last_net (from the change that introduced
LastNetFunc):

Up to now, we have been implementing the DistinctIP preference with code in two
places:

 1. On check-in, the last_net is determined by taking the /24 or /64
    (in ResolveIPAndNetwork()) and we store it with the node record.
 2. On node selection, a preference parameter defines whether to return results
    that are distinct on last_net.

It can be observed that we have never yet had the need to switch from
DistinctIP to !DistinctIP, or from !DistinctIP to DistinctIP, on the same
satellite, and we will probably never need to do so in an automated way. It can
also be observed that this arrangement makes tests more complicated, because we
often have to arrange for test nodes to have IP addresses in different /24
networks (a particular pain on macOS).

Those two considerations, plus some pending work on the repair framework that
will make repair take last_net into consideration, motivate this change.

With this change, in the #2 place, we will _always_ return results that are
distinct on last_net. We implement the DistinctIP preference, then, by making
the #1 place (ResolveIPAndNetwork()) more flexible. When DistinctIP is enabled,
last_net will be calculated as it was before. But when DistinctIP is _off_,
last_net can be the same as the address (IP and port). That effectively
implements !DistinctIP because every record will already have a distinct
last_net.

As a side effect, this flexibility allows us to change the rules about last_net
construction arbitrarily. We can run tests where last_net is set to the source
IP, or to a /30 prefix, or a /16 prefix, etc., and exercise the production
logic without requiring a virtual network bridge.

This change is safe to make without any migration code, because all known
production satellite deployments use DistinctIP, and the associated last_net
values will not change for them. They only change for satellites with
!DistinctIP, which are mostly test deployments that can be recreated trivially.
For those satellites which are both permanent and !DistinctIP, node selection
will suddenly start acting as though DistinctIP is enabled, until the operator
runs a single SQL update "UPDATE nodes SET last_net = last_ip_port". That can
be done either before or after deploying software with this change.

This does not hurt performance for production deployments: adding the distinct
requirement to node selection makes things a little slower, but the distinct
requirement is already present for all production deployments, so they will
see no change.

Refs: https://github.com/storj/storj/issues/5391
*/

// LastNetFunc is the type of a function that will be used to derive a network from an ip and port.
type LastNetFunc func(config NodeSelectionConfig, ip net.IP, port string) (string, error)
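
// maskIPv4Slash24 is an illustrative sketch, not part of the original file and
// not the production MaskOffLastNet: it derives the /24 network for IPv4
// addresses and falls back to ip:port otherwise, which mirrors the behaviour
// described above when DistinctIP is disabled (every record gets a distinct
// last_net).
func maskIPv4Slash24(_ NodeSelectionConfig, ip net.IP, port string) (string, error) {
	if ipv4 := ip.To4(); ipv4 != nil {
		// e.g. 203.0.113.7 -> "203.0.113.0"
		return ipv4.Mask(net.CIDRMask(24, 32)).String(), nil
	}
	// Treat anything else as already distinct (IP and port).
	return net.JoinHostPort(ip.String(), port), nil
}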

// NewService returns a new Service.
func NewService(log *zap.Logger, db DB, nodeEvents nodeevents.DB, satelliteAddr, satelliteName string, config Config) (*Service, error) {
	err := config.Node.AsOfSystemTime.isValid()
	if err != nil {
		return nil, errs.Wrap(err)
	}

	var geoIP geoip.IPToCountry = geoip.NewMockIPToCountry(config.GeoIP.MockCountries)
	if config.GeoIP.DB != "" {
		geoIP, err = geoip.OpenMaxmindDB(config.GeoIP.DB)
		if err != nil {
			return nil, Error.Wrap(err)
		}
	}

	defaultSelection := uploadselection.NodeFilters{}
	if len(config.Node.UploadExcludedCountryCodes) > 0 {
		defaultSelection = defaultSelection.WithCountryFilter(func(code location.CountryCode) bool {
			for _, nodeCountry := range config.Node.UploadExcludedCountryCodes {
				if nodeCountry == code.String() {
					return false
				}
			}
			return true
		})
	}

	// TODO: this is supposed to be configurable.
	placementRules := NewPlacementRules()
	placementRules.AddLegacyStaticRules()

	uploadSelectionCache, err := NewUploadSelectionCache(log, db,
		config.NodeSelectionCache.Staleness, config.Node,
		defaultSelection, placementRules.CreateFilters,
	)
	if err != nil {
		return nil, errs.Wrap(err)
	}

	downloadSelectionCache, err := NewDownloadSelectionCache(log, db, DownloadSelectionCacheConfig{
		Staleness:      config.NodeSelectionCache.Staleness,
		OnlineWindow:   config.Node.OnlineWindow,
		AsOfSystemTime: config.Node.AsOfSystemTime,
	})
	if err != nil {
		return nil, errs.Wrap(err)
	}

	return &Service{
		log:              log,
		db:               db,
		nodeEvents:       nodeEvents,
		satelliteAddress: satelliteAddr,
		satelliteName:    satelliteName,
		config:           config,

		GeoIP:                  geoIP,
		UploadSelectionCache:   uploadSelectionCache,
		DownloadSelectionCache: downloadSelectionCache,
		LastNetFunc:            MaskOffLastNet,
	}, nil
}

// Run runs the background processes needed for caches.
func (service *Service) Run(ctx context.Context) error {
	return errs.Combine(sync2.Concurrently(
		func() error { return service.UploadSelectionCache.Run(ctx) },
		func() error { return service.DownloadSelectionCache.Run(ctx) },
	)...)
}
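
// Typical wiring, as an illustrative sketch only (the real satellite peer
// setup lives elsewhere and is more involved; log, db, nodeEventsDB, config,
// and group below are placeholders, not identifiers from this file):
//
//	service, err := NewService(log, db, nodeEventsDB, "satellite.example:7777", "example-sat", config)
//	if err != nil {
//		return err
//	}
//	defer func() { _ = service.Close() }()
//	group.Go(func() error { return service.Run(ctx) })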

// Close closes resources.
func (service *Service) Close() error {
	return service.GeoIP.Close()
}

// Get looks up the provided nodeID from the overlay.
func (service *Service) Get(ctx context.Context, nodeID storj.NodeID) (_ *NodeDossier, err error) {
	defer mon.Task()(&ctx)(&err)
	if nodeID.IsZero() {
		return nil, ErrEmptyNode
	}
	return service.db.Get(ctx, nodeID)
}

// CachedGetOnlineNodesForGet returns a map of nodes from the download selection cache for the supplied nodeIDs.
func (service *Service) CachedGetOnlineNodesForGet(ctx context.Context, nodeIDs []storj.NodeID) (_ map[storj.NodeID]*uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.DownloadSelectionCache.GetNodes(ctx, nodeIDs)
}

// GetOnlineNodesForAuditRepair returns a map of nodes for the supplied nodeIDs.
func (service *Service) GetOnlineNodesForAuditRepair(ctx context.Context, nodeIDs []storj.NodeID) (_ map[storj.NodeID]*NodeReputation, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.db.GetOnlineNodesForAuditRepair(ctx, nodeIDs, service.config.Node.OnlineWindow)
}

// GetNodeIPsFromPlacement returns a map of node ip:port for the supplied nodeIDs. Results are filtered by placement.
func (service *Service) GetNodeIPsFromPlacement(ctx context.Context, nodeIDs []storj.NodeID, placement storj.PlacementConstraint) (_ map[storj.NodeID]string, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.DownloadSelectionCache.GetNodeIPsFromPlacement(ctx, nodeIDs, placement)
}

// IsOnline checks if a node is 'online' based on the collected statistics.
func (service *Service) IsOnline(node *NodeDossier) bool {
	return time.Since(node.Reputation.LastContactSuccess) < service.config.Node.OnlineWindow
}

// FindStorageNodesForGracefulExit searches the overlay network for nodes that meet the provided requirements for graceful-exit requests.
func (service *Service) FindStorageNodesForGracefulExit(ctx context.Context, req FindStorageNodesRequest) (_ []*uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.UploadSelectionCache.GetNodes(ctx, req)
}

// FindStorageNodesForUpload searches the overlay network for nodes that meet the provided requirements for upload.
//
// When the node selection cache is enabled, it uses the cache to select nodes.
// When the cache is disabled, it falls back to the database-backed implementation.
func (service *Service) FindStorageNodesForUpload(ctx context.Context, req FindStorageNodesRequest) (_ []*uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)
	if service.config.Node.AsOfSystemTime.Enabled && service.config.Node.AsOfSystemTime.DefaultInterval < 0 {
		req.AsOfSystemInterval = service.config.Node.AsOfSystemTime.DefaultInterval
	}

	// TODO: excluding country codes on upload is not implemented when the cache is disabled.
	if service.config.NodeSelectionCache.Disabled {
		return service.FindStorageNodesWithPreferences(ctx, req, &service.config.Node)
	}

	selectedNodes, err := service.UploadSelectionCache.GetNodes(ctx, req)
	if err != nil {
		return selectedNodes, err
	}
	if len(selectedNodes) < req.RequestedCount {
		excludedIDs := make([]string, 0)
		for _, e := range req.ExcludedIDs {
			excludedIDs = append(excludedIDs, e.String())
		}
		service.log.Warn("Not enough nodes are available from Node Cache",
			zap.String("minVersion", req.MinimumVersion),
			zap.Strings("excludedIDs", excludedIDs),
			zap.Duration("asOfSystemInterval", req.AsOfSystemInterval),
			zap.Int("requested", req.RequestedCount),
			zap.Int("available", len(selectedNodes)),
			zap.Uint16("placement", uint16(req.Placement)))
	}
	return selectedNodes, err
}

// FindStorageNodesWithPreferences searches the overlay network for nodes that meet the provided criteria.
//
// This does not use a cache.
func (service *Service) FindStorageNodesWithPreferences(ctx context.Context, req FindStorageNodesRequest, preferences *NodeSelectionConfig) (nodes []*uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)
	// TODO: add sanity limits to requested node count
	// TODO: add sanity limits to excluded nodes
	totalNeededNodes := req.RequestedCount

	excludedIDs := req.ExcludedIDs

	// Keep track of the excluded networks to make sure we only select nodes from different networks.
	var excludedNetworks []string
	if len(excludedIDs) > 0 {
		excludedNetworks, err = service.db.GetNodesNetwork(ctx, excludedIDs)
		if err != nil {
			return nil, Error.Wrap(err)
		}
	}

	newNodeCount := 0
	if preferences.NewNodeFraction > 0 {
		newNodeCount = int(float64(totalNeededNodes) * preferences.NewNodeFraction)
	}
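
	// For example, with totalNeededNodes = 110 and NewNodeFraction = 0.05,
	// newNodeCount = int(110 * 0.05) = 5, i.e. up to 5 of the 110 requested
	// nodes may be new (unvetted) nodes.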

	criteria := NodeCriteria{
		FreeDisk:           preferences.MinimumDiskSpace.Int64(),
		ExcludedIDs:        excludedIDs,
		ExcludedNetworks:   excludedNetworks,
		MinimumVersion:     preferences.MinimumVersion,
		OnlineWindow:       preferences.OnlineWindow,
		AsOfSystemInterval: req.AsOfSystemInterval,
	}

	nodes, err = service.db.SelectStorageNodes(ctx, totalNeededNodes, newNodeCount, &criteria)
	if err != nil {
		return nil, Error.Wrap(err)
	}

	if len(nodes) < totalNeededNodes {
		return nodes, ErrNotEnoughNodes.New("requested %d found %d; %+v", totalNeededNodes, len(nodes), criteria)
	}
	return nodes, nil
}

// InsertOfflineNodeEvents inserts offline events into node events.
func (service *Service) InsertOfflineNodeEvents(ctx context.Context, cooldown time.Duration, cutoff time.Duration, limit int) (count int, err error) {
	defer mon.Task()(&ctx)(&err)

	if !service.config.SendNodeEmails {
		return 0, nil
	}

	nodes, err := service.db.GetOfflineNodesForEmail(ctx, service.config.Node.OnlineWindow, cutoff, cooldown, limit)
	if err != nil {
		return 0, err
	}

	count = len(nodes)

	var successful storj.NodeIDList
	for id, email := range nodes {
		_, err = service.nodeEvents.Insert(ctx, email, id, nodeevents.Offline)
		if err != nil {
			service.log.Error("could not insert node offline into node events", zap.Error(err))
		} else {
			successful = append(successful, id)
		}
	}
	if len(successful) > 0 {
		err = service.db.UpdateLastOfflineEmail(ctx, successful, time.Now())
		if err != nil {
			return count, err
		}
	}
	return count, err
}

// KnownReliableInExcludedCountries filters healthy nodes that are in excluded countries.
func (service *Service) KnownReliableInExcludedCountries(ctx context.Context, nodeIds storj.NodeIDList) (reliableInExcluded storj.NodeIDList, err error) {
	defer mon.Task()(&ctx)(&err)

	criteria := &NodeCriteria{
		OnlineWindow:      service.config.Node.OnlineWindow,
		ExcludedCountries: service.config.RepairExcludedCountryCodes,
	}

	return service.db.KnownReliableInExcludedCountries(ctx, criteria, nodeIds)
}

// KnownReliable filters a set of nodes to reliable (online and qualified) nodes.
func (service *Service) KnownReliable(ctx context.Context, nodeIDs storj.NodeIDList) (onlineNodes []uploadselection.SelectedNode, offlineNodes []uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)

	// TODO: add as of system time.
	return service.db.KnownReliable(ctx, nodeIDs, service.config.Node.OnlineWindow, 0)
}

// Reliable returns all nodes that are reliable (separated by whether they are currently online or offline).
func (service *Service) Reliable(ctx context.Context) (online []uploadselection.SelectedNode, offline []uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)

	// TODO: add as of system time.
	return service.db.Reliable(ctx, service.config.Node.OnlineWindow, 0)
}

// UpdateReputation updates the DB columns for any of the reputation fields.
func (service *Service) UpdateReputation(ctx context.Context, id storj.NodeID, email string, request ReputationUpdate, reputationChanges []nodeevents.Type) (err error) {
	defer mon.Task()(&ctx)(&err)

	err = service.db.UpdateReputation(ctx, id, request)
	if err != nil {
		return err
	}
	if service.config.SendNodeEmails {
		service.insertReputationNodeEvents(ctx, email, id, reputationChanges)
	}
	return nil
}

// UpdateNodeInfo updates the node dossier with info requested from the node itself like node type, email, wallet, capacity, and version.
func (service *Service) UpdateNodeInfo(ctx context.Context, node storj.NodeID, nodeInfo *InfoResponse) (stats *NodeDossier, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.db.UpdateNodeInfo(ctx, node, nodeInfo)
}

// SetNodeContained updates the contained field for the node record. If
// `contained` is true, the contained field in the record is set to the current
// database time, if it is not already set. If `contained` is false, the
// contained field in the record is set to NULL. All other fields are left
// alone.
func (service *Service) SetNodeContained(ctx context.Context, node storj.NodeID, contained bool) (err error) {
	defer mon.Task()(&ctx)(&err)
	return service.db.SetNodeContained(ctx, node, contained)
}

// UpdateCheckIn updates a single storagenode's check-in info if needed.
/*
The check-in info is updated in the database if:

	(1) there is no previous entry and the node is allowed (id difficulty, etc);
	(2) it has been too long since the last known entry; or
	(3) the node hostname, IP address, port, wallet, sw version, or disk capacity
	    has changed.

Note that there can be a race between acquiring the previous entry and
performing the update, so if two updates happen at about the same time it is
not defined which one will end up in the database.
*/
func (service *Service) UpdateCheckIn(ctx context.Context, node NodeCheckInInfo, timestamp time.Time) (err error) {
	defer mon.Task()(&ctx)(&err)
	failureMeter := mon.Meter("geofencing_lookup_failed")

	oldInfo, err := service.Get(ctx, node.NodeID)
	if err != nil && !ErrNodeNotFound.Has(err) {
		return Error.New("failed to get node info from DB")
	}

	if oldInfo == nil {
		if !node.IsUp {
			// this is a previously unknown node, and we couldn't pingback to verify that it even
			// exists. Don't bother putting it in the db.
			return nil
		}

		difficulty, err := node.NodeID.Difficulty()
		if err != nil {
			// this should never happen
			return err
		}
		if int(difficulty) < service.config.MinimumNewNodeIDDifficulty {
			return ErrLowDifficulty.New("node id difficulty is %d when %d is the minimum",
				difficulty, service.config.MinimumNewNodeIDDifficulty)
		}

		node.CountryCode, err = service.GeoIP.LookupISOCountryCode(node.LastIPPort)
		if err != nil {
			failureMeter.Mark(1)
			service.log.Debug("failed to resolve country code for node",
				zap.String("node address", node.Address.Address),
				zap.Stringer("Node ID", node.NodeID),
				zap.Error(err))
		}

		return service.db.UpdateCheckIn(ctx, node, timestamp, service.config.Node)
	}

	lastUp, lastDown := oldInfo.Reputation.LastContactSuccess, oldInfo.Reputation.LastContactFailure
	lastContact := lastUp
	if lastContact.Before(lastDown) {
		lastContact = lastDown
	}

	dbStale := lastContact.Add(service.config.NodeCheckInWaitPeriod).Before(timestamp) ||
		(node.IsUp && lastUp.Before(lastDown)) || (!node.IsUp && lastDown.Before(lastUp))

	addrChanged := !pb.AddressEqual(node.Address, oldInfo.Address)

	walletChanged := (node.Operator == nil && oldInfo.Operator.Wallet != "") ||
		(node.Operator != nil && oldInfo.Operator.Wallet != node.Operator.Wallet)

	verChanged := (node.Version == nil && oldInfo.Version.Version != "") ||
		(node.Version != nil && oldInfo.Version.Version != node.Version.Version)

	spaceChanged := (node.Capacity == nil && oldInfo.Capacity.FreeDisk != 0) ||
		(node.Capacity != nil && node.Capacity.FreeDisk != oldInfo.Capacity.FreeDisk)

	node.CountryCode, err = service.GeoIP.LookupISOCountryCode(node.LastIPPort)
	if err != nil {
		failureMeter.Mark(1)
		service.log.Debug("failed to resolve country code for node",
			zap.String("node address", node.Address.Address),
			zap.Stringer("Node ID", node.NodeID),
			zap.Error(err))
	}

	if service.config.SendNodeEmails && service.config.Node.MinimumVersion != "" {
		min, err := version.NewSemVer(service.config.Node.MinimumVersion)
		if err != nil {
			return err
		}
		v, err := version.NewSemVer(node.Version.GetVersion())
		if err != nil {
			return err
		}
		if v.Compare(min) == -1 {
			node.VersionBelowMin = true
			if oldInfo.LastSoftwareUpdateEmail == nil ||
				oldInfo.LastSoftwareUpdateEmail.Add(service.config.NodeSoftwareUpdateEmailCooldown).Before(timestamp) {
				_, err = service.nodeEvents.Insert(ctx, node.Operator.Email, node.NodeID, nodeevents.BelowMinVersion)
				if err != nil {
					service.log.Error("could not insert node software below minimum version into node events", zap.Error(err))
				} else {
					node.SoftwareUpdateEmailSent = true
				}
			}
		}
	}

	if dbStale || addrChanged || walletChanged || verChanged || spaceChanged ||
		oldInfo.LastNet != node.LastNet || oldInfo.LastIPPort != node.LastIPPort ||
		oldInfo.CountryCode != node.CountryCode || node.SoftwareUpdateEmailSent {
		err = service.db.UpdateCheckIn(ctx, node, timestamp, service.config.Node)
		if err != nil {
			return Error.Wrap(err)
		}

		if service.config.SendNodeEmails && node.IsUp && oldInfo.Reputation.LastContactSuccess.Add(service.config.Node.OnlineWindow).Before(timestamp) {
			_, err = service.nodeEvents.Insert(ctx, node.Operator.Email, node.NodeID, nodeevents.Online)
			return Error.Wrap(err)
		}
		return nil
	}

	service.log.Debug("ignoring unnecessary check-in",
		zap.String("node address", node.Address.Address),
		zap.Stringer("Node ID", node.NodeID))
	mon.Event("unnecessary_node_check_in")

	return nil
}

// GetMissingPieces returns the piece numbers of pieces whose nodes are offline or unreliable.
func (service *Service) GetMissingPieces(ctx context.Context, pieces metabase.Pieces) (missingPieces []uint16, err error) {
	defer mon.Task()(&ctx)(&err)

	// TODO: this method will be removed completely in a subsequent change.
	var nodeIDs storj.NodeIDList
	missingPiecesMap := map[storj.NodeID]uint16{}
	for _, p := range pieces {
		nodeIDs = append(nodeIDs, p.StorageNode)
		missingPiecesMap[p.StorageNode] = p.Number
	}
	onlineNodes, _, err := service.KnownReliable(ctx, nodeIDs)
	if err != nil {
		return nil, Error.New("error getting nodes %s", err)
	}

	for _, node := range onlineNodes {
		delete(missingPiecesMap, node.ID)
	}
	return maps.Values(missingPiecesMap), nil
}

// GetReliablePiecesInExcludedCountries returns the list of pieces held by nodes located in excluded countries.
func (service *Service) GetReliablePiecesInExcludedCountries(ctx context.Context, pieces metabase.Pieces) (piecesInExcluded []uint16, err error) {
	defer mon.Task()(&ctx)(&err)

	var nodeIDs storj.NodeIDList
	for _, p := range pieces {
		nodeIDs = append(nodeIDs, p.StorageNode)
	}
	inExcluded, err := service.KnownReliableInExcludedCountries(ctx, nodeIDs)
	if err != nil {
		return nil, Error.New("error getting nodes %s", err)
	}

	for _, p := range pieces {
		for _, nodeID := range inExcluded {
			if nodeID == p.StorageNode {
				piecesInExcluded = append(piecesInExcluded, p.Number)
			}
		}
	}
	return piecesInExcluded, nil
}

// DQNodesLastSeenBefore disqualifies nodes that have not been contacted since the cutoff time.
func (service *Service) DQNodesLastSeenBefore(ctx context.Context, cutoff time.Time, limit int) (count int, err error) {
	defer mon.Task()(&ctx)(&err)

	nodes, count, err := service.db.DQNodesLastSeenBefore(ctx, cutoff, limit)
	if err != nil {
		return 0, err
	}
	if service.config.SendNodeEmails {
		for nodeID, email := range nodes {
			_, err = service.nodeEvents.Insert(ctx, email, nodeID, nodeevents.Disqualified)
			if err != nil {
				service.log.Error("could not insert node disqualified into node events", zap.Error(err))
			}
		}
	}
	return count, err
}

// DisqualifyNode disqualifies a storage node.
func (service *Service) DisqualifyNode(ctx context.Context, nodeID storj.NodeID, reason DisqualificationReason) (err error) {
	defer mon.Task()(&ctx)(&err)

	email, err := service.db.DisqualifyNode(ctx, nodeID, time.Now().UTC(), reason)
	if err != nil {
		return err
	}
	if service.config.SendNodeEmails {
		_, err = service.nodeEvents.Insert(ctx, email, nodeID, nodeevents.Disqualified)
		if err != nil {
			service.log.Error("could not insert node disqualified into node events", zap.Error(err))
		}
	}
	return nil
}

// SelectAllStorageNodesDownload returns nodes that are ready for downloading.
func (service *Service) SelectAllStorageNodesDownload(ctx context.Context, onlineWindow time.Duration, asOf AsOfSystemTimeConfig) (_ []*uploadselection.SelectedNode, err error) {
	defer mon.Task()(&ctx)(&err)
	return service.db.SelectAllStorageNodesDownload(ctx, onlineWindow, asOf)
}

// ResolveIPAndNetwork resolves the target address and determines its IP and the appropriate
// IPv4 or IPv6 subnet to use as last_net.
func (service *Service) ResolveIPAndNetwork(ctx context.Context, target string) (ip net.IP, port, network string, err error) {
	// LastNetFunc is MaskOffLastNet, unless changed for a test.
	return ResolveIPAndNetwork(ctx, target, service.config.Node, service.LastNetFunc)
}

// UpdateNodeTags persists all new and old node tags.
func (service *Service) UpdateNodeTags(ctx context.Context, tags []uploadselection.NodeTag) error {
	return service.db.UpdateNodeTags(ctx, tags)
}

// GetNodeTags returns the node tags of a node.
func (service *Service) GetNodeTags(ctx context.Context, id storj.NodeID) (uploadselection.NodeTags, error) {
	return service.db.GetNodeTags(ctx, id)
}

// ResolveIPAndNetwork resolves the target address and determines its IP and the appropriate
// last_net, as computed by the given lastNetFunc.
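// For example, a target of "203.0.113.42:28967" (an illustrative address) is split into host
// and port, the host is resolved to an IP, and the last_net is then derived from that IP and
// port by lastNetFunc.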
func ResolveIPAndNetwork(ctx context.Context, target string, config NodeSelectionConfig, lastNetFunc LastNetFunc) (ip net.IP, port, network string, err error) {
	defer mon.Task()(&ctx)(&err)

	host, port, err := net.SplitHostPort(target)
	if err != nil {
		return nil, "", "", err
	}
	ipAddr, err := net.ResolveIPAddr("ip", host)
	if err != nil {
		return nil, "", "", err
	}

	network, err = lastNetFunc(config, ipAddr.IP, port)
	if err != nil {
		return nil, "", "", err
	}

	return ipAddr.IP, port, network, nil
}

// MaskOffLastNet truncates the target address to the configured IPv4 or IPv6 CIDR prefix,
// if DistinctIP is enabled in the config. Otherwise, it returns the joined IP and port.
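// For example (using illustrative values), with DistinctIP enabled and an IPv4 prefix of 24,
// an address of 203.0.113.42 with port "28967" yields the network "203.0.113.0"; with
// DistinctIP disabled, the same input yields "203.0.113.42:28967".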
func MaskOffLastNet(config NodeSelectionConfig, addr net.IP, port string) (string, error) {
	if config.DistinctIP {
		// Group IPv4 addresses into subnets of the configured IPv4 prefix length, and IPv6
		// addresses into subnets of the configured IPv6 prefix length (historically /24 and /64).
		return truncateIPToNet(addr, config.NetworkPrefixIPv4, config.NetworkPrefixIPv6)
	}

	// The "network" here will be the full IP and port; that is, every node will be considered to
	// be on a separate network, even if they all come from one IP (such as localhost).
	return net.JoinHostPort(addr.String(), port), nil
}

// truncateIPToNet truncates the target address to the given CIDR ipv4Cidr or ipv6Cidr prefix,
// according to which type of IP it is.
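// For example (illustrative values), truncating 203.0.113.42 with ipv4Cidr=24 yields
// "203.0.113.0", and truncating 2001:db8::1 with ipv6Cidr=64 yields "2001:db8::".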
func truncateIPToNet(ipAddr net.IP, ipv4Cidr, ipv6Cidr int) (network string, err error) {
	// If addr can be converted to 4-byte notation, it is an IPv4 address; otherwise it is an IPv6 address.
	if ipv4 := ipAddr.To4(); ipv4 != nil {
		mask := net.CIDRMask(ipv4Cidr, 32)
		return ipv4.Mask(mask).String(), nil
	}
	if ipv6 := ipAddr.To16(); ipv6 != nil {
		mask := net.CIDRMask(ipv6Cidr, 128)
		return ipv6.Mask(mask).String(), nil
	}
	return "", fmt.Errorf("unable to get network for address %s", ipAddr.String())
}

// TestVetNode directly sets a node's vetted_at timestamp to make testing easier.
func (service *Service) TestVetNode(ctx context.Context, nodeID storj.NodeID) (vettedTime *time.Time, err error) {
	vettedTime, err = service.db.TestVetNode(ctx, nodeID)
	service.log.Warn("node vetted", zap.Stringer("node ID", nodeID), zap.Stringer("vetted time", vettedTime))
	if err != nil {
		service.log.Warn("error vetting node", zap.Stringer("node ID", nodeID))
		return nil, err
	}
	err = service.UploadSelectionCache.Refresh(ctx)
	service.log.Warn("nodecache refresh err", zap.Error(err))
	return vettedTime, err
}

// TestUnvetNode directly sets a node's vetted_at timestamp to null to make testing easier.
func (service *Service) TestUnvetNode(ctx context.Context, nodeID storj.NodeID) (err error) {
	err = service.db.TestUnvetNode(ctx, nodeID)
	if err != nil {
		service.log.Warn("error unvetting node", zap.Stringer("node ID", nodeID), zap.Error(err))
		return err
	}
	err = service.UploadSelectionCache.Refresh(ctx)
	service.log.Warn("nodecache refresh err", zap.Error(err))
	return err
}

// TestNodeCountryCode directly sets a node's country code to make testing easier.
func (service *Service) TestNodeCountryCode(ctx context.Context, nodeID storj.NodeID, countryCode string) (err error) {
	err = service.db.TestNodeCountryCode(ctx, nodeID, countryCode)
	if err != nil {
		service.log.Warn("error updating node", zap.Stringer("node ID", nodeID), zap.Error(err))
		return err
	}
	return nil
}

func (service *Service) insertReputationNodeEvents(ctx context.Context, email string, id storj.NodeID, repEvents []nodeevents.Type) {
	defer mon.Task()(&ctx)(nil)

	for _, event := range repEvents {
		switch event {
		case nodeevents.Disqualified:
			_, err := service.nodeEvents.Insert(ctx, email, id, nodeevents.Disqualified)
			if err != nil {
				service.log.Error("could not insert node disqualified into node events", zap.Error(err))
			}
		case nodeevents.UnknownAuditSuspended:
			_, err := service.nodeEvents.Insert(ctx, email, id, nodeevents.UnknownAuditSuspended)
			if err != nil {
				service.log.Error("could not insert node unknown audit suspended into node events", zap.Error(err))
			}
		case nodeevents.UnknownAuditUnsuspended:
			_, err := service.nodeEvents.Insert(ctx, email, id, nodeevents.UnknownAuditUnsuspended)
			if err != nil {
				service.log.Error("could not insert node unknown audit unsuspended into node events", zap.Error(err))
			}
		case nodeevents.OfflineSuspended:
			_, err := service.nodeEvents.Insert(ctx, email, id, nodeevents.OfflineSuspended)
			if err != nil {
				service.log.Error("could not insert node offline suspended into node events", zap.Error(err))
			}
		case nodeevents.OfflineUnsuspended:
			_, err := service.nodeEvents.Insert(ctx, email, id, nodeevents.OfflineUnsuspended)
			if err != nil {
				service.log.Error("could not insert node offline unsuspended into node events", zap.Error(err))
			}
		default:
		}
	}
}