The main motivation is to wrap the bucket DB and metainfo DB so that we
can check whether a bucket is empty before applying a geofencing config.
Change-Id: I8bac21555e01d51a663fb557bc1acfc8106bc2e1
To allow changing limits for new users, while leaving existing users' limits as they are, we must store the project limits for each user. We currently store the limit for the number of projects a user can create in the user DB table. This change also stores the project bandwidth and storage limits in the same table.
Change-Id: If8d79b39de020b969f3445ef2fcc370e51d706c6
Alter the satellite DB nodes table to add a `disqualification_reason` int
column, which holds an enum value indicating why the node was disqualified.
Change-Id: Ia514557018ca27e1984216dc5004346d59869d16
We had a bug in the stray nodes chore where nodes that had not been seen
in several months were not being DQd. We figured out that this was
happening because we were using two queries: the first to grab
nodes where last_contact_success < some cutoff, the second to DQ them
unless last_contact_success == '0001-01-01 00:00:00+00'. The problem
is that if all of the nodes returned from the first query had
last_contact_success of '0001-01-01 00:00:00+00', we would pass them to
the second query, which would not DQ them. This ended the stray
nodes DQ loop early, since we found fewer nodes to DQ than the limit.
The fix: add the "WHERE last_contact_success != '0001-01-01
00:00:00+00'::timestamptz" to the selection query.
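For illustration, the corrected selection query might look like the
following (the exact query text and parameters here are assumptions):
```
// Stray-node selection with the added guard; nodes that have never been
// successfully contacted are excluded instead of being passed on to the
// DQ query, which would refuse them anyway.
const selectStrayNodes = `
	SELECT id FROM nodes
	WHERE last_contact_success < $1
	  AND last_contact_success != '0001-01-01 00:00:00+00'::timestamptz
	LIMIT $2
`
```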
Change-Id: I4e60de90b68d8745d641b4467c2b23e0e56f7dff
Even though we want to start charging a segment fee instead of an object
fee, it's hard for users to understand what a segment is. This PR adds
the object count back to the UI alongside the segment count to help
address the issue.
Change-Id: I92eb42c769d350eba68a72443deffec5c278359c
table and drop not null constraint on objects column
Since we want to move from charging our customers by object count to
segment count, this PR prepares the database to record segment count
instead of object count for the satellite's billing system
Change-Id: Ie91ef354e78d24a268bc1cdc4327c182f733321e
listPendingTransitionShim is a temporary transition shim intended to
make existing API processes keep working when a future DB schema change
is executed. For more explanation, see the message on commit c053bdbd70.
However, the shim has a small bug: it is missing the ORDER BY clause
that appears in the original ListPending method. This transition shim
code won't ever run until we make the DB schema change, so this bug
hasn't hurt anything yet; it's just important that we fix it before the
DB schema change happens.
Change-Id: I5953651583ee236500c2c07141dfc9d690a95118
This update sets up users to register with a promo code applied to their account in place of the free tier coupon.
Change-Id: I7badf87937b12664f145520b6dcc4b26fe750407
We no longer need the uses_segment_transfer_queue column in
graceful_exit_progress, as all exiting nodes now use
graceful_exit_segment_transfer_queue and the graceful_exit_transfer_queue
table has been dropped.
Change-Id: I4b7c087433f04138cf09bcf8ad3d8de2c185502a
Multipart upload limits added. The last part has no size limit.
Max number of parts: 10000; min part size: 5 MiB
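A minimal sketch of how these limits might be enforced (the helper and
its names are illustrative, not the actual API):
```
import "fmt"

const (
	maxUploadParts = 10000
	minPartSize    = 5 << 20 // 5 MiB
)

// validatePart rejects parts beyond the count limit and non-final parts
// below the minimum size; the last part is exempt from the size check.
func validatePart(partNumber int, size int64, lastPart bool) error {
	if partNumber > maxUploadParts {
		return fmt.Errorf("part number %d exceeds the limit of %d parts", partNumber, maxUploadParts)
	}
	if !lastPart && size < minPartSize {
		return fmt.Errorf("part size %d is below the minimum of %d bytes", size, minPartSize)
	}
	return nil
}
```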
Change-Id: Ic2262ce25f989b34d92f662bde720d4c4d0dc93d
The union query for both reputable and new nodes didn't work properly.
The top-level query is required to have an `AS OF SYSTEM TIME` clause as
well.
Change-Id: I8ee6dd5b700c2b1ed2aa562962bfa72be7eec30a
Uplink needs only some of the columns we are reading from the DB.
To improve performance we should read only those that are
really needed.
Change-Id: Ib39259318169c46afe5fa4c6ce2184da82e960c8
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I5671722a5fc6e1749d3a49e187a56556000ff941
We don't use this column for anything. If you want to know if a node is
contained, you can check the pending_audits table.
Change-Id: I8da1d8e01a2dcaff63c5067a7927b5451424ad04
Removes database tables and functionality related to our custom
coupon implementation because it has been superseded by the Stripe
coupon and promo code system. Requires implementations of the
payments Invoices interface to return coupon usages along with
invoices.
Change-Id: Iac52d2ff64afca8cc4dbb2d1f20e6ad4b39ddfde
Cockroach doesn't like concurrent database creation, so add a limiter to
avoid overloading the database.
Also limit the cockroach testing to the last 10 migrations.
Change-Id: Ie73c516ac2d362b251f605049e51eb25888432bd
Populate the egress_dead column to take into account allocated bandwidth that can be removed because orders have been sent by the storage nodes. The bandwidth not used in these orders can be allocated again.
Change-Id: I78c333a03945cd7330aec052edd3562ec671118e
Drop table graceful_exit_transfer_queue which is not used anymore (replaced by graceful_exit_segment_transfer_queue).
Change-Id: Ie254fe9a54fb0784e350a439ce7a9bc99a3a58b5
Migration tests are very heavy on database schema changes, which may
cause delays and retries. Separate out the migration tests and ensure
that they do not run concurrently on the same database.
Change-Id: I35b17525f18fd923546ce1fcc12d805c95073b6b
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
Now that we have amounts represented using monetary.Amount, we can
simply store them in the database using integers (as given by the
.BaseUnits() method on monetary.Amount).
We should move toward storing the currency along with any monetary
amount, wherever we are storing amounts, because satellites might want
to deal with currencies other than STORJ and USD. Even better, it
becomes much clearer what currency each monetary value is _supposed_ to
be in (I had to dig through code to find that out for our current
monetary columns).
Deployment
----------
Getting rid of the big.Float columns will take multiple deployment
steps. There does not seem to be any way to make the change in a way
that lets existing queries continue to work on CockroachDB (it could be
done with rules and triggers and a stored procedure that knows how to
gob-decode big.Float objects, but CockroachDB doesn't have rules _or_
triggers _or_ stored procedures). Instead, in this first step, we make
no changes to the database schema, but add code that knows how to deal
with the planned changes to the schema when they are made in a future
"step 2" deployment. All functions that deal with the
coinbase_transactions table have been taught to recognize the "undefined
column" error, and when it is seen, to call a separate "transition shim"
function to accomplish the task. Once all the services are running this
code, and the step 2 deployment makes breaking changes to the schema,
any services that are still running and connected to the database will
keep working correctly because of the fallback code included here. The
step 2 deployment can be made without these transition shims included,
because it will apply the database schema changes before any of its code
runs.
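For illustration, the error recognition this relies on might look like
the following, assuming a pgx-based driver (the helper name is made up):
```
import (
	"errors"

	"github.com/jackc/pgconn"
)

// undefinedColumn reports whether err carries SQLSTATE 42703
// ("undefined column"), the signal that the Step 2 schema changes have
// already been applied and the transition shim should be used instead.
func undefinedColumn(err error) bool {
	var pgErr *pgconn.PgError
	return errors.As(err, &pgErr) && pgErr.Code == "42703"
}
```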
Step 1:
No schema changes; just include code that recognizes the
"undefined column" error when dealing with the
coinbase_transactions or stripecoinpayments_tx_conversion_rates
tables, and if found, assumes that the column changes from Step
2 have already been made.
Step 2:
In coinbase_transactions:
* change the names of the 'amount' and 'received' columns to
'amount_gob' and 'received_gob' respectively
* add new 'amount_numeric' and 'received_numeric' columns with
INT8 type.
In stripecoinpayments_tx_conversion_rates:
* change the name of the 'rate' column to 'rate_gob'
* add new 'rate_numeric' column with NUMERIC(8, 8) type
Code reading from either of these tables must query both the X_gob
and X_numeric columns. If X_numeric is not null, its value should
be used; otherwise, the gob-encoded big.Float in X_gob should be
used (see the read-path sketch after Step 3). A chore might be
included in this step that transitions values from X_gob to
X_numeric a few rows at a time.
Step 3:
Once all prod satellites have no values left in the _gob columns, we
can drop those columns and add NOT NULL constraints to the _numeric
columns.
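A sketch of the dual read described in Step 2 (column and helper names
are assumptions, and scaling between base units and the legacy
representation is elided):
```
import (
	"bytes"
	"database/sql"
	"encoding/gob"
	"math/big"
)

// amountFromRow prefers the numeric column when populated and falls
// back to gob-decoding the legacy big.Float otherwise. Converting the
// result to a common unit is left out of this sketch.
func amountFromRow(numeric sql.NullInt64, gobData []byte) (*big.Float, error) {
	if numeric.Valid {
		return new(big.Float).SetInt64(numeric.Int64), nil
	}
	var f big.Float
	if err := gob.NewDecoder(bytes.NewReader(gobData)).Decode(&f); err != nil {
		return nil, err
	}
	return &f, nil
}
```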
Change-Id: Id6db304b404e6fde44f5a8c23cdaeeaaa2324f20
Why: big.Float is not an ideal type for dealing with monetary amounts,
because no matter how high the precision, some non-integer decimal
values can not be represented exactly in base-2 floating point. Also,
storing gob-encoded big.Float values in the database makes it very hard
to use those values in meaningful queries, making it difficult to do
any sort of analysis on billing.
For better accuracy, then, we can just represent monetary values as
integers (in whatever base units are appropriate for the currency). For
example, STORJ tokens or Bitcoins can not be split into pieces smaller
than 10^-8, so we can store amounts of STORJ or BTC with precision
simply by moving the decimal point 8 digits to the right. For USD values
(assuming we don't want to deal with fractional cents), we can move the
decimal point 2 digits to the right.
To make it easier and less error-prone to deal with the math involved, I
introduce here a new type, monetary.Amount, instances of which have an
associated value _and_ a currency.
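An illustrative sketch of the idea (not the actual monetary package
API):
```
// Currency carries the number of decimal places its base unit covers.
type Currency struct {
	Name          string
	DecimalPlaces int32
}

var (
	StorjToken = Currency{Name: "STORJ", DecimalPlaces: 8} // 1 STORJ = 10^8 base units
	USDollars  = Currency{Name: "USD", DecimalPlaces: 2}   // 1 USD = 100 cents
)

// Amount is an integer count of base units plus a currency, so $1.65 is
// AmountFromBaseUnits(165, USDollars) and 1 STORJ is
// AmountFromBaseUnits(100000000, StorjToken).
type Amount struct {
	baseUnits int64
	currency  Currency
}

func AmountFromBaseUnits(units int64, c Currency) Amount {
	return Amount{baseUnits: units, currency: c}
}

// BaseUnits returns the integer value suitable for an INT8 DB column.
func (a Amount) BaseUnits() int64 { return a.baseUnits }
```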
Change-Id: I03395d52f0e2473cf301361f6033722b54640265
This PR utilizes the new burst limit column from the projects table to
allow control over the limit for requests per second and the token
bucket size. When no burst limit is explicitly set, the rate limit is
applied to both, so we don't separately limit how quickly requests can
be made within a second.
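A sketch of how the two limits might combine, using
golang.org/x/time/rate (the helper and its wiring are assumptions):
```
import "golang.org/x/time/rate"

// newProjectLimiter uses the project's burst_limit for the bucket size
// when set; otherwise the rate limit doubles as the burst, so behavior
// is unchanged for projects without an explicit burst limit.
func newProjectLimiter(rateLimit float64, burstLimit *int) *rate.Limiter {
	burst := int(rateLimit)
	if burstLimit != nil {
		burst = *burstLimit
	}
	return rate.NewLimiter(rate.Limit(rateLimit), burst)
}
```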
Change-Id: I883235c60c5d6416aeadd1c80ed2ebd193aa4d9f
In order to limit the number of overall requests a user can issue in a
time span, we need to have the ability to define such a limit separately
from the per-second request rate.
This PR adds a new column on the projects table to store the burst limit
per project.
Change-Id: I7efc2ccdda4579252347cc6878cf846b85146dc7
Remove the logic associated with the old transfer queue.
A new transfer queue (gracefulexit_segment_transfer_queue) was created for the migration to segmentLoop.
Transfers from the old queue were not moved to the new queue.
Instead, the old queue was still used for nodes which had initiated graceful exit before the migration.
There are no such nodes left, so we can remove all this logic.
In a later step, we will drop the table.
Change-Id: I3aa9bc29b76065d34b57a73f6e9c9d0297587f54
nodes and audit_history tables
This PR removes all code references to the audit_histories table and the
```
audit_reputation_alpha, audit_reputation_beta,
unknown_audit_reputation_alpha, unknown_audit_reputation_beta,
```
columns from the nodes table.
It also drops the audit_histories table from the db since the code
referencing it is not currently used.
Change-Id: Ifcda8db36afb3a333d487ff831f2fdefc8b02a4c
gracefulexit
As part of the effort to move audit-related data into the reputation
store, this PR updates the gracefulexit endpoint to use the reputation
service to get a node's audit score
Change-Id: Iad93ea689ad67ff9c57c7be16687e21e715fab7a
Currently, the reputation table is only populated when a node has been
audited. This is fine in production; however, a lot of our tests don't
upload any data or trigger audits.
This PR adds an initialization step in testplanet to populate the
reputation table with zero values for node reputation.
Change-Id: I11b381236669db346dc68a48a6d4a27334a0a8b8
package in audit
This PR implements the reputation store and replaces overlay in the
audit service with that store for recording nodes' audit stats.
To keep the changeset smaller, most of the changes in this PR copy audit
logic from overlay into the reputation package. In a following PR, the
duplicated code will be removed from overlay.
Change-Id: I16c12494a0970f44c422b26cf603c1dc489e5bc1
Added MFA passcode and recovery code fields for token requests.
Added endpoints for MFA-related activity: enabling MFA,
disabling MFA, generating a new MFA secret key, and
generating new MFA recovery codes.
Change-Id: Ia1443f05d3a2fecaa7f170f56d73c7a4e9b69ad5
To be able to use the segment metainfo loop, graceful exit transfers have to include the segment stream_id/position instead of the path. For this, we created a new table, graceful_exit_segment_transfer_queue, that will replace graceful_exit_transfer_queue. The table was created in a previous migration and made accessible through the graceful exit db in another one.
This change makes graceful exit enqueue transfer items for new exiting nodes in the new table.
Change-Id: I7bd00de13e749be521d63ef3b80c168df66b9433
To be able to use the segment metainfo loop, graceful exit transfers have to include the segment stream_id/position instead of the path. For this, we created a new table, graceful_exit_segment_transfer_queue, that will replace graceful_exit_transfer_queue. The table was created in a previous migration.
This change gives access to this table.
Graceful Exit doesn't use the table yet; this will be done in a later change.
Change-Id: I6c09cff4cc45f0529813a8898ddb2d14aadb2cb8
Bucket tally calculation will be removed from the metaloop and will
use the metabase objects iterator directly.
At the moment only bucket tally needs objects, so it makes no sense
to implement a separate objects loop.
Change-Id: Iee60059fc8b9a1bf64d01cafe9659b69b0e27eb1
Columns for MFA status, secret key, and JSON-encoded array of
recovery codes are added to the users table.
Change-Id: Ifed7e50ec9767c1670d9682df1575678984daa60
When a user adds a credit card, switch them to the paid tier and update
their projects with new bandwidth/storage limits. New projects for the
paid tier user will also have the updated limits.
The new limits are:
* storage per project - 50 GB free/25 TB paid
* bandwidth per project - 50 GB free/100 TB paid
Change-Id: I7d6467d077e8bb2bbe4bcf88ab8d75490f83165e
The previous query was doing a full table scan. This modifies the code
to do the queries separately on each action. It will probably be slower
on a small table; however, there should be a boost of several orders of
magnitude on large tables.
Change-Id: Ib8885024d8a5a0102bbab4ce09bd6af9047930c9
We want to calculate bucket tally only from iterating objects.
An object currently has info about totals for bytes and segments.
We need to adjust tallies to keep those totals. Older entries will
be untouched and the code will use totals only if available. This
change adds columns for totals to the bucket_storage_tally table
and adds general handling for them.
The next step is to start using the total columns instead of
inline/remote. This will be done in the next change.
Change-Id: I37fed1b327789efcf1d0570318aee3045db17fad
We want to use StreamID/Position to identify an injured
segment. As it is hard to alter the existing injuredsegments
table, we are adding a new table that will replace the existing
one. The old table will be dropped later.
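A sketch of the shape the new table implies (the actual table name and
columns are assumptions):
```
// The queue is keyed by (stream_id, position) instead of the segment path.
const createSegmentRepairQueue = `
	CREATE TABLE repair_queue (
		stream_id   BYTEA NOT NULL,
		position    INT8 NOT NULL,
		inserted_at TIMESTAMPTZ NOT NULL DEFAULT now(),
		PRIMARY KEY ( stream_id, position )
	)
`
```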
Change-Id: I0d3b06522645013178b6678c19378ebafe485c49
So that we can easily see whether a user is in the paid tier without
querying for payment methods.
Change-Id: I122566ddd0953203f852741fa12c71795bc1ec5c
Period end was calculated incorrectly: it was still in the current
month, but it should be the first day of the next month.
Change-Id: I37451d29a9b901b69e6c3c401b333c58b3376d61
Adding AS OF SYSTEM TIME to the query that calculates project bandwidth.
In addition, a method for setting the interval is added, as the test
doesn't work well with the default interval.
Change-Id: Id1e15be4f6afff13b9dc2b7f595e2edb6de28db9
Currently, pending audit finds a segment by segment location
(path). Because we want to move audit to the segmentloop, where we
will have only StreamID and Position, we need to add columns for
those fields. Altering the existing table can cause issues during
migration and deployment, so the cleaner choice is to make a new table.
This change contains a migration with the new segment_pending_audit
table that will replace the pending_audits table, and adjustments
to use the new table in the code.
Table pending_audits will be dropped with the next release.
Change-Id: Id507e29c152da594bac1fd812c78d7ecf45ec51f
Table graceful_exit_segment_transfer_queue will be used to replace graceful_exit_transfer_queue. Currently, the old table uses the path of a segment to keep track of pieces to be transferred. As we want to use the segment metainfo loop, we need to record the stream_id and position of the segment instead of relying on the object path.
This change also adds a uses_segment_transfer_queue column to the graceful_exit_progress table to be able to know whether a transfer was initiated while using the old table.
Change-Id: Iafb1e8e65ba124e20de4a9ff76da181c3222de7e
Define service and DB interface for storing node reputation data
and updating the overlay cache.
Add overlay service and DB method UpdateReputation.
See https://github.com/storj/storj/pull/4144
Change-Id: Iedd8bd3274457d26c595919303d55327c1464b8c
The reputation table duplicates the reputation information in the
nodes table. It will be used for implementing the reputation
service.
Change-Id: I36c0318e8fa5f535e9d527df95b22a4f9eb365d4
This is part of the metaloop refactoring. We plan to remove
irreparable at some point but there was no time for it.
Instead of refactoring it for the segmentloop, it's just easier
to drop it.
Later we still need to drop the table with a migration step.
Change-Id: I270e77f119273d39a1ecdcf5e1c37a5662a29ab4
The redis key associated with bandwidth usage depended on the current month.
This change makes it depend also on the day, so that we update bandwidth usage
daily to take into account changes associated with expired but unused allocated bandwidth.
It also adds a test to check that allocated but not settled bandwidth is not counted after 2 days.
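A sketch of the key change (the key layout here is an assumption):
```
import (
	"fmt"
	"time"
)

// bandwidthUsageKey now includes the day, so the cached value naturally
// rolls over daily instead of monthly.
func bandwidthUsageKey(projectID string, now time.Time) string {
	// before: now.Format("2006-01") keyed the usage per month
	return fmt.Sprintf("project:%s:bandwidth:%s", projectID, now.Format("2006-01-02"))
}
```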
Change-Id: Iee9dbe3517cc3b9825438360b276a07a43dfbc64
Migration step for adding an 'egress_dead' column to the project_bandwidth_daily_rollups table.
It will be used to track bandwidth allocation that won't be consumed,
as the corresponding order has already been processed and has a settled
bandwidth amount lower than the order limit (allocated bandwidth).
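The DDL for such a step might look like this (exact type and default
are assumptions):
```
const addEgressDeadColumn = `
	ALTER TABLE project_bandwidth_daily_rollups
		ADD COLUMN egress_dead INT8 NOT NULL DEFAULT 0
`
```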
Change-Id: Ic07592e69292ae2076e69f6038bb0e0fae79b271
Full prefix: web/satellite, satellite/{console, analytics, satellitedb}
- checkbox added to register view - business tab
- user being saved with new column
- add sales contact choice to Segment calls
- ui fix added to employee count dropdown
Change-Id: Ib976872463b88874ea9714db635d58c79cdbe3a1
We are not using this table, so it makes no sense to put data there.
This change removes only the code that uses this table. Before the next
release we need to drop the table with a migration step.
Change-Id: I80f400aa778c717e70324bd00da502b7032c9d9f
Until now, whenever audits were recorded we would try to delete
the node from containment just in case it exists. Since we now
want to treat segment repair downloads as audits, this would
erroneously remove nodes from containment, as repair does not go
through a Reverify step. With this changeset, (Batch)UpdateStats
will not remove nodes from containment. The Reverify method will
remove all necessary nodes from containment.
Change-Id: Iabc9496293076dccba32ddfa028e92580b26167f
Because of recent changes to how coupons for the free tier are handled
(see commit 4c0817bcfb), we no longer want all these $10
non-expiring coupons. After coupons are applied during invoice
generation, if a customer does not have any valid (non expired, non
consumed) coupons, a new promotional coupon is applied.
We could just wait for users to consume all $10 of the non-expiring
coupons, and the new promotional coupon would be applied for the
following billing cycle, but this gets tricky, because if in the final
month, the user is billed for $1 of usage, but only $0.5 of the $10
non-expiring coupon is remaining, the user will be charged for the
remaining $0.5. With the new promotional coupon of $1.65, expiring every
month, this would not be an issue.
So long story short, this commit migrates all non-expiring coupons to
expire within 2 billing periods (all existing non-expiring coupons in
prod were created in early April or later). That way, there is still
enough value in the $10 coupon that we don't have to worry about
customers exceeding it, and the coupons will expire. Then we'll
immediately apply the $1.65 coupon for the next month! And then
hopefully this unfortunate situation will come to a pleasant end.
Change-Id: I8a593948d8876c41a71d886b9a95d4e2c802b4f3
Replace GetProjectAllocatedBandwidth with GetProjectBandwidth, which calculates
used bandwidth from allocated and settled bandwidth recorded in the
project_bandwidth_daily_rollups table.
For each day in the month, if the allocated bandwidth is expired, it uses the
settled bandwidth for computing used bandwidth.
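A sketch of that per-day rule (types, names, and the expiration check
are assumptions):
```
import "time"

type dailyRollup struct {
	IntervalDay time.Time
	Allocated   int64
	Settled     int64
}

// usedBandwidth counts the allocated amount while the day's orders can
// still be settled, and only the settled amount once they have expired.
func usedBandwidth(rollups []dailyRollup, now time.Time, orderExpiration time.Duration) int64 {
	var used int64
	for _, day := range rollups {
		if day.IntervalDay.Add(orderExpiration).Before(now) {
			used += day.Settled // allocation expired: only settled counts
		} else {
			used += day.Allocated
		}
	}
	return used
}
```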
Change-Id: Ife723c6d5275338f470619631acb25930d39ac3c
project_bandwidth_daily_rollups table
We want to calculate used bandwidth better, so we need to calculate it
from allocated and settled bandwidth. To do this we first need to
populate this new table.
https://storjlabs.atlassian.net/browse/PG-56
Change-Id: I308b737bf08ee48ce4e46a3605697ab2095f7257
Improve UpdateCheckIn on a contended row:
name old time/op new time/op delta
UpdateCheckInContended-100x-32 2.29s ±55% 0.17s ±61% -92.45% (p=0.008 n=5+5)
Change-Id: I053ab9f1cff136c306e5fb57f5e355cdc0269a8c
Currently the interface is not useful. When we need to vary the
implementation for testing purposes we can introduce a local interface
for the service/chore that needs it, rather than using the large api.
Unfortunately, this requires adding a cleanup callback for tests; there
might be a better solution to this problem.
Change-Id: I079fe4dbe297b0ae08c10081a1cea4dfbc277682
Initially metabase was developed separately, and it was useful to have a
separate environment flag for tests; however, it's more convenient to
use the same one as the rest of the testsuite.
Change-Id: Ia4d79be27ce5911cbae68d57cdf0b30f63459444
Currently we were duplicating code for AS OF SYSTEM TIME in several
places. This replaces the code with using a method on
dbutil.Implementation.
As a consequence it's more useful to use a shorter name for
implementation - 'impl' should be sufficiently clear in the context.
Similarly, using AsOfSystemInterval and AsOfSystemTime to distinguish
between the two modes is useful and slightly shorter without causing
confusion.
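A sketch of what such helpers might look like (the real dbutil method
signatures may differ):
```
import "time"

type Implementation int

const (
	Postgres Implementation = iota
	Cockroach
)

// AsOfSystemTime returns an AS OF SYSTEM TIME clause for CockroachDB
// and an empty string for backends without the feature.
func (impl Implementation) AsOfSystemTime(t time.Time) string {
	if impl != Cockroach || t.IsZero() {
		return ""
	}
	return " AS OF SYSTEM TIME '" + t.Format(time.RFC3339Nano) + "' "
}

// AsOfSystemInterval is the relative variant, e.g. "-10s".
func (impl Implementation) AsOfSystemInterval(interval time.Duration) string {
	if impl != Cockroach || interval >= 0 {
		return ""
	}
	return " AS OF SYSTEM TIME '" + interval.String() + "' "
}
```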
Change-Id: Idefe55528efa758b6176591017b6572a8d443e3d
Use the 'AS OF SYSTEM TIME' CockroachDB clause for the Graceful Exit
(a.k.a. GE) queries that count and delete the GE queue items of nodes
which have already exited the network.
Split the subquery used for deleting all the transfer queue items of
nodes which have exited when CRDB is used, and batch the queries,
because CRDB struggles when executing them in a single query, unlike
Postgres.
The new test added in this commit to verify the CRDB batch logic for
deleting all the transfer queue items of the exited nodes revealed that
the Enqueue method has to run in batches when CRDB is used; otherwise
CRDB returns the error "driver: bad connection" when a big amount of
items is passed to be enqueued. This error didn't happen with the
current test implementation; it appeared with an initial one that
created a big amount of exited nodes and transfer queue items for those
nodes.
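A sketch of the batching pattern (the batch size, item type, and insert
helper are assumptions):
```
import "context"

type TransferItem struct {
	NodeID   []byte
	StreamID []byte
	Position uint64
}

// enqueueInBatches splits items into chunks so no single INSERT is
// large enough to trip CRDB's "driver: bad connection" failure mode.
// insertBatch stands in for the real multi-row INSERT.
func enqueueInBatches(ctx context.Context, items []TransferItem,
	insertBatch func(context.Context, []TransferItem) error) error {
	const batchSize = 1000 // assumed batch size
	for len(items) > 0 {
		n := batchSize
		if n > len(items) {
			n = len(items)
		}
		if err := insertBatch(ctx, items[:n]); err != nil {
			return err
		}
		items = items[n:]
	}
	return nil
}
```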
Change-Id: I6a099cdbc515a240596bc93141fea3182c2e50a9
Multiregion satellites have complex database connection strings,
largely due to using a different backend for the repair queue than
cockroach.
Billing stuff didn't work right with this.
Change-Id: Ie8759a8c47e71347c3a190abfc9d53945d7b8855
When audits are being recorded, we automatically add some SQL to remove
the node from the pending audits table in case it exists. They are
removed from pending audits even if the node was offline for the audit.
This is not the correct behavior.
Add statement to record audit results in reverify tests to ensure no
more false positives.
Change-Id: I186ae68bc5e7962ef6c5defbebc1d95e63596a17
The DBs of our production satellites have some indexes that we didn't
have in the migrations because at that time we weren't able to add them,
since our migration test could not deal with Cockroach indexes with the
STORING clause.
We have recently modified the storj.io/private/dbutil/pgutil package to
support the CRDB STORING clause, so we are adding the missing indexes
to our migrations to be able to have them if we have to recover a DB
from scratch or we deploy a new satellite DB.
Change-Id: I686ff84e5b4c02d9615f50fa531261363affefb8
errs.Class should not contain "error" in the name, since that causes a
lot of stutter in the error logs. As an example a log line could end up
looking like:
ERROR node stats service error: satellitedbs error: node stats database error: no rows
Whereas something like:
ERROR nodestats service: satellitedbs: nodestatsdb: no rows
Would contain all the necessary information without the stutter.
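For instance, with github.com/zeebo/errs (variable names illustrative):
```
import "github.com/zeebo/errs"

// The class name carries context without the word "error", so wrapped
// errors log as "nodestatsdb: no rows" rather than
// "node stats database error: no rows".
var Error = errs.Class("nodestatsdb")

func wrap(err error) error {
	return Error.Wrap(err)
}
```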
Change-Id: I7b7cb7e592ebab4bcfadc1eef11122584d2b20e0
The previously configured never-expiring coupon does not refill every
month. Eventually, even though it never expires, it will run out. This
commit makes several small changes to address this issue for the free
tier:
* Change the config for the promotional coupon to be $1.65 for 1 month
(the change from $10 to $1.65 is due to our recent pricing changes)
* Update PopulatePromotionalCoupons (PPC for brevity) to add promotional
coupons to users with expired and consumed coupons (all users with a
project and no active coupons should get a new coupon when PPC is called)
* Call PPC at the end of the `create-invoice-coupons` stage of invoice
generation - after current coupons are processed and expired/exhausted.
* Remove legacy admin functionality for PPC from satellite/console - we
do not currently use it, but if we did, it should be in satellite/admin
instead.
Change-Id: I77727b97bef972df32ebb23cdc05055827076e2a
For business accounts we need to track the sales contact.
It will be a question to business accounts during onboarding.
Change-Id: I8d101ce1b52091478dfb0ddd875e1cc717d765d3
Initially there were pkg and private packages, however for all practical
purposes there's no significant difference between them. It's clearer to
have a single private package - and when we do get a specific
abstraction that needs to be reused, we can move it to storj.io/common
or storj.io/private.
Change-Id: Ibc2036e67f312f5d63cb4a97f5a92e38ae413aa5
cache is a really common variable and type name, and we have already
used a package name alias for it in multiple places.
Change-Id: I6435785b7549b541d533de59ec94557b9bd11e04
Initially we duplicated the code to avoid large-scale changes to
the packages. Now that we are past the metainfo refactor, we can
remove the duplication.
Change-Id: I9d0b2756cc6e2a2f4d576afa408a15273a7e1cef
When nodes check in for the very first time, if the satellite can't ping
them back, they are inserted into the nodes table with a
last_contact_success of '0001-01-01 00:00:00+00'. If the stray nodes
chore runs before the node can fix its problem, it is DQd.
Solution: when DQing stray nodes, don't DQ where last_contact_success =
'0001-01-01 00:00:00+00'::timestamptz
Change-Id: I477a02d5ef85b2c930ed6b7d99a4d1995169bca8
metabase has become a central concept and it's more suitable for it to
be directly nested under satellite rather than being part of metainfo.
metainfo is going to be the "endpoint" logic for handling requests.
Change-Id: I53770d6761ac1e9a1283b5aa68f471b21e784198
The new default promotional coupon is $10/month, and doesn't expire.
This change also migrates the coupon.duration column over to the new
coupon.billing_periods, and switches to rely completely on
billing_periods.
Change-Id: Ic3341e9fa4040449bab5e66ca4ee2640b095cf3d
Currently we can get a duplicate-entry error while inserting into the value_attributions table. This change turns the simple insert into an insert that does nothing on conflict.
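The changed statement might look like this (the column list is an
assumption):
```
const insertValueAttribution = `
	INSERT INTO value_attributions ( project_id, bucket_name, partner_id, last_updated )
	VALUES ( $1, $2, $3, now() )
	ON CONFLICT DO NOTHING
`
```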
Change-Id: I3efd8dc0b63115e8e2ed8f4196ccf969ee942295